Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.
Showing posts with label Consciousness. Show all posts

Monday 26 May 2014

Why consciousness is unique to the animal kingdom

I’ve written a number of posts on consciousness over the last 7 years, or however long it’s been since I started blogging, so this is a refinement of what’s gone before, and possibly a more substantial argument. It arose from a discussion in New Scientist, 24 May 2014 (Letters), concerning the evolution of consciousness and, in particular, the question: ‘What need is there of actual consciousness?’ (Eric Kvaalen from France).

I’ve argued in a previous post that consciousness evolved early and that it arose from emotions, not logic. In particular, early sentient creatures would have relied on fear, pain and desire, as these confer an evolutionary advantage, especially if memory is also involved. In fact, I’ve argued that consciousness without memory is pretty useless, because otherwise the organism (including humans) wouldn’t even know it was conscious (see my post on Afterlife, March 2014).

Many philosophers and scientists argue that AI (Artificial Intelligence) will become sentient. The interesting argument is that ‘we will know’ (referencing a New Scientist editorial, 2 April 2011), on the grounds that we don’t know that anyone else is conscious either. In other words, the argument goes that if an AI behaves as if it’s conscious or sentient, then it must be. However, I argue that AI entities don’t have emotions unless they are programmed artificially to behave as if they do – i.e. simulated. And this is a major distinction, if one believes, as I do, that sentience arose from emotions (feelings) and not logic or reason.

But in answer to the question posed above, one only has to look at another very prevalent life form on this planet, which is not sentient, and the answer, I would suggest, becomes obvious. I’m talking about vegetation. And what is the fundamental difference? There is no evolutionary advantage to vegetation having sentience, or, more specifically, having feelings. If a plant were to feel pain or fear, how could it respond? Unlike members of the animal kingdom, it cannot escape the source, because it is literally rooted to the spot. And this is why I believe animals evolved consciousness (sentience by another name) and plants didn’t. Now, there may be degrees of consciousness in animals (we don’t know) but, if feelings were the progenitor of consciousness, we can understand why it is a unique attribute of the animal kingdom and not found in vegetation or machines.

Saturday 8 March 2014

Afterlife belief – a unique human condition

Recently I’ve been dreaming about having philosophical discussions, which is very strange, to say the least. And one of these was on the topic of the afterlife. My particular insight, from my dream, was that humans have the unique capacity to imagine a life and a world beyond death. It’s hard to imagine that any other creature, no matter its cognitive capacity, would be able to make the same leap. This is not a new insight for me; it’s one my dream reminded me of rather than initiated. Nevertheless, it’s a good starting point for a discussion on the afterlife.

It’s also, I believe, the reason humans came up with religion – it’s hard to dissociate one from the other. Humans are more than capable of imagining fictional worlds – I’ve created a few myself as a sometime sci-fi writer. But imagining a life after death is to project oneself into an eternal continuity, a form of immortality. Someone once pointed out that death is the ultimate letting go of the ego, and I believe this is a major reason we find it so difficult to confront. The Buddhists talk about the ‘no-self’ and ‘no attachments’, and I believe this is what they’re referring to. We all form attachments during life, be they material or ideological or aspirational or through personal relationships, and I think that this is natural, even psychologically necessary for the most part. But death requires us to give all these up. In some cases people make sacrifices, where an ideal or another’s life takes precedence over one’s own ego. In effect, we may substitute someone else’s ego for our own.

Do I believe in an afterlife? Actually, I’m agnostic on that point, but I have no expectation and, from what we know, it seems unlikely. I have no problem with people believing in an afterlife – as I say, it’s part of the human condition – but I have a problem when people place more emphasis on it than on the life they’re actually living. There are numerous stories of people ostracising their children, on religious grounds, because seeking eternal paradise is more important than familial relationships. I find this perverse, as I do the idea of killing people with the promise of reaching heaven as a reward.

Personally, I think it’s healthier to have no expectation when one dies. It’s no different to going to sleep or any other form of losing consciousness, only one never regains it. No one knows when they fall asleep or when they lose consciousness, and the same applies when one dies. It leaves no memory, so we don’t know when it happens. There is an oft-asked question: why is there something rather than nothing? Well, consciousness plays a big role in that question, because, without consciousness, there might as well be nothing. ‘Something’ only exists for ‘you’ while you are alive.

Consciousness exists in a continuous present, and, in fact, without consciousness, the concepts of past, present and future would have no meaning. But more than that, without memory, you would not even know you have consciousness. In fact, it is possible to be conscious, or act conscious, whilst believing, in retrospect, that you were unconscious. It can happen when you come out of anaesthetic (it’s happened to me), when you’re extremely intoxicated with alcohol, or when you’ve been knocked unconscious by a blow. In these admittedly rare and unusual circumstances, one can be conscious and behave consciously, yet create no memories, and so effectively be unconscious. In other words, without memory (short-term memory) we would all be subjectively unconscious.

So, even if there is the possibility that one’s consciousness can leave behind the body that created it, after corporeal death, it would also leave behind all the memories that give us our sense of self. It’s only our memories that give us our sense of continuity, and hence our sense of self.

Then there is the issue of afterlife and infinity. Only mathematicians and cosmologists truly appreciate what infinity means. The point is that if you have an infinite amount of time and space, then anything that can happen once can happen an infinite number of times. This means that, with infinity, in this world or any other, there would be an infinite number of you and me. But, not only am I not interested in an infinite number of me, I don’t believe anyone would want to live for infinity if they really thought about it.

At the start, I mentioned that I believe religion arose from a belief in the afterlife. Having said that, I think religion comes from a natural tendency to look for meaning beyond the life we live. I’ve made the point before, that if there is a purpose beyond the here and now, it’s not ours to know. And, if there is a purpose, we find it in the lives we live and not in an imagined reward beyond the grave.

Saturday 17 December 2011

Consciousness Unexplained

The Mysterious Flame by Colin McGinn, subtitled Conscious Minds in a Material World, was recommended to me by The Atheist Missionary (aka TAM) almost 2 years ago, and it’s taken me all this time to get around to reading it.

But it was well worth the effort, and I can only endorse the recommendation given by The New York Times, as quoted on the cover: “There is no better introduction to the problem of consciousness than this.” McGinn is Professor of Philosophy at Rutgers University, with a handful of other books credited to him. Mysterious Flame was written in 1999, yet it hasn’t dated compared with other books I’ve read on this subject, and I would go so far as to say that anyone with an interest in the mind-body problem should read this book. Even if you don’t agree with him, I’m sure he has something to offer that you hadn’t considered previously. At the end of the book, he also has something to say about the discipline of philosophy in general: its history and its unique position in human thought.

Most significantly, McGinn calls himself a ‘mysterian’: someone, like myself, as it turns out, who believes that consciousness is a mystery we may never solve. Right from the start he addresses the two most common philosophical positions on this subject, materialism and dualism, demonstrating how they both fail. They are effectively polar opposite positions: materialism arguing that consciousness is neuronal activity, full stop; and dualism arguing that consciousness is separate from the brain, albeit connected, and can therefore exist independently of it.

Materialism is the default position taken by scientists, and dualism is the default position taken by most people, even if they’re not aware of it. Most people think that ‘I’ is an entity that exists inside their head, dependent on their brain yet somehow separate from it. Many people who have had out-of-body experiences argue that this confirms their belief. On the other hand, scientists have demonstrated how we can fool the ‘mind’ into thinking it is outside the body. I have argued elsewhere (Subjectivity, June 2009) that ‘I think’ is a tautology, because ‘I’ is your thoughts and nothing else.

McGinn acknowledges that consciousness is completely dependent on the brain, but this alone doesn’t explain it. He points out that consciousness evolved relatively early and is not dependent on intelligence per se. Being more intelligent doesn’t make us more sentient than other species who also ‘feel’. He attacks the commonly held belief in the scientific community that consciousness just arises from this ‘meat’ we call a brain, and that to create consciousness we merely have to duplicate this biological machine. I agree with him on this point. Not so long ago (April 2011), I challenged an editorial and an article in New Scientist implying that sentience is an axiomatic consequence of artificial intelligence (AI): 'it’s just a matter of time before we will be forced to acknowledge it'. However, the biological evidence suggests that making AI more intelligent won’t create sentience, yet that’s exactly what most AI exponents believe. As McGinn says: ‘…sentience in general does not involve symbolic manipulation’, which is what a computer algorithm does.

McGinn argues that the problem with consciousness is that it’s non-spatial and therefore could exist in another dimension. This is not as daft as it sounds, because, as he points out, an additional dimension could exist without us knowing it and he references Edwin A. Abbott’s famous book, Flatland, to make his point. I’ve similarly argued that quantum mechanics could be explained by imagining a hidden dimension, so I’m not dismissive of this hypothesis.

The most important point that McGinn makes, in my opinion, is a fundamental one of epistemology. We humans tend to think that there is nothing that exists that is beyond our ultimate comprehension, yet there is no legitimate cognitive reason to assume that. To quote: ‘We should have the humility, and plain good sense, to admit that some things may exist without being knowable by us.’

This came up recently in an online discussion I had with Emanuel Rutten (Trying to define God, Nov. 11) who argued the opposite based on an ‘all possible worlds’ scenario. And if there were an infinite number of worlds, then Rutten’s argument would be valid. However, projecting what is possibly knowable in an infinite number of worlds to our specific world is epistemological nonsense.

As McGinn points out, most species on our planet can’t comprehend gravity or how the stars stay up in the sky or that the Earth goes around the sun – it’s beyond their cognitive abilities. Likewise there could be phenomena that are beyond our cognitive abilities, and consciousness may be one.

Roger Penrose addresses this epistemological point in Chapter 1 of Road to Reality, where he admits a ‘personal prejudice’ that everything in the natural world is within our cognitive grasp, whilst acknowledging that others don’t share his prejudice. In particular, Penrose contends that there is a Platonic mathematical realm, which is theoretically available to us without constraint (except the time to explore it), and that this Platonic realm can explain the entire physical universe. Interestingly, McGinn makes no reference to the significance of mathematics in determining the epistemological limit of our knowledge, yet I contend that this is a true limit.

Therefore, I would argue, based on this hypothetical mathematically cognitive limit, that if consciousness can’t be determined mathematically then it will remain a mystery.

Even though McGinn discusses amnesia in reference to the ‘self’, he doesn’t specifically address the fact that, without memory, there would be no ‘self’. This is why none of us has a sense of self in early infancy: we create no memories of it. It is memory that specifically gives us a sense of continuity of self and allows us to believe that the ‘I’ we perceive ourselves to be as adults is the same ‘I’ we were as children.

I’ve skipped over quite a lot of McGinn’s book, obviously, but he does give arguably the best description of John Searle’s famous Chinese Room thought experiment I’ve read, without telling the reader that it is John Searle’s Chinese Room thought experiment.

At the end of the book, he devotes a short chapter to ‘The Unbearable Heaviness of Philosophy’ where he explains how ‘natural philosophy’ diverged from science yet they are more complementary than dichotomous. To quote McGinn again:

‘Science asks answerable questions… eliminating false theories, reducing the area of human ignorance, while philosophy seems mired in controversy, perpetually worrying at the same questions, not making the kind of progress characteristic of science.’

Many people perceive and present philosophy as the poor orphan of science in the modern age, yet I’m unsure the two will ever be completely separated or become independent. Science reveals that nature’s mysteries are endless, and whilst those mysteries persist, philosophy will continue to play its role.

Right at the end of the book, McGinn makes a pertinent observation: that our DNA code contains the answer to our mystery, because consciousness is a consequence of the genetic instructions that make every sentient creature. So our genes have the information to create consciousness that consciousness itself is unable to comprehend.

Friday 22 April 2011

Sentience, free will and AI

In the 2 April 2011 edition of New Scientist, the editorial was titled Rights for robots; We will know when it’s time to recognise artificial cognition. Implicit in the header and explicit in the text is the idea that robots will one day have sentience just like us. In fact they highlighted one passage: “We should look to the way people treat machines and have faith in our ability to detect consciousness.”

I am a self-confessed heretic on this subject because I don’t believe machine intelligence will ever be sentient, and I’m happy to stick my neck out in this forum so that one day I can possibly be proven wrong. One of the points of argument that the editorial makes is that ‘there is no agreed definition of consciousness’ and ‘there’s no way to tell that you aren’t the only conscious being in a world of zombies.’ In other words, you really don’t know if the person right next to you is conscious (or in a dream) so you’ll be forced to give a cognitive robot the same benefit of the doubt. I disagree.

Around the same time as reading this, I took part in a discussion on Rust Belt Philosophy about what sentience is. Firstly, I contend that sentience and consciousness are synonymous, and I think sentience is pretty pervasive in the animal kingdom. Does that mean that something that is unconscious is not sentient? Strictly speaking, yes, because I would define sentience as the ability to feel something, either emotionally or physically. Now, we often feel something emotionally when we dream, so arguably that makes one sentient when unconscious. But I see this as the exception that makes my definition more pertinent rather than the exception that proves me wrong.

In First Aid courses you are taught to squeeze someone’s fingers to see if they are conscious. So to feel something is directly correlated with consciousness, and that’s also how I would define sentience. Much of the brain’s activity is subconscious, even to the extent that problem-solving is often executed subliminally. I expect everyone has had the experience of trying to solve a puzzle, then leaving it for a period of time, only to solve it ‘spontaneously’ when they next encounter it. I believe the creative process often works in exactly the same way, which is why it feels so spontaneous and why we can’t explain it even after we’ve done it. This subconscious problem-solving is a well-known cognitive phenomenon, so it’s not just a ‘folk theory’.

This complex subconscious activity observed in humans is, I believe, quite different from the complex instinctive behaviour that we see in animals: birds building nests, bees building hives, spiders building webs, beavers building dams. These activities seem ‘hard-wired’, to borrow from the AI lexicon as we tend to do.

A bee does a complex dance to communicate where the nectar is. No one believes that the bee cognitively works this out the way we would, so I expect it’s totally subconscious. So if a bee can perform complex behaviours without consciousness, does that mean it doesn’t have consciousness at all? The obvious answer is yes, but let’s look at another scenario. The bee gets caught in a spider’s web and tries desperately to escape. Now, I believe that in this situation the bee feels fear and, by my definition, that makes it sentient. This is an important point because it underpins virtually every other point I intend to make. Now, I don’t really know if the bee ‘feels’ anything at all, so it’s an assumption. But my assumption is that sentience, and therefore consciousness, started with feelings and not logic.

In last week’s issue of New Scientist, 16 April 2011, the cover features the topic, Free Will: The illusion we can’t live without. The article, written by freelance writer, Dan Jones, is headed The free will delusion. In effect, science argues quite strongly that free will is an illusion, but one we are reluctant to relinquish. Jones opens with a scenario in 2500 when free will has been scientifically disproved and human behaviour is totally predictable and deterministic. Now, I don’t think there’s really anything in the universe that’s totally predictable, including the remote possibility that Earth could one day be knocked off its orbit, but that’s the subject of another post. What’s more relevant to this discussion is Jones’ opening sentence where he says: ‘…neuroscientists know precisely how the hardware of the brain runs the software of the mind and dictates behaviour.’ Now, this is purely a piece of speculative fiction, so it’s not necessarily what Jones actually believes. But it’s the implicit assumption that the brain’s processes are identical to a computer’s that I find most interesting.

The gist of the article, by the way, is that when people really believe they have no free will, they behave very unempathetically towards others, amongst other aberrant behaviours. In other words, a belief in our ability to direct our own destiny is important to our psychological health. So, if the scientists are right, it’s best not to tell anyone. It’s ironic that telling people they have no free will makes them behave as if they don’t, whereas allowing them to believe they have free will gives their behaviour intentionality. Apparently, free will is a ‘state of mind’.

In a more recent post on Rust Belt Philosophy, I was reminded that, contrary to conventional wisdom, emotions play an important role in rational behaviour. Psychologists now generally believe that, without emotions, our decision-making ability is severely impaired. And, arguably, it’s emotions that play the key role in what we call free will. Certainly, it’s our emotions that are affected if we believe we have no control over our behaviour. Intentions are driven as much by emotion as they are by logic. In fact, most of us make decisions based on gut feelings and rationalise them afterwards. I’m not suggesting that we are all victims of our emotional needs like immature children, but that the interplay between emotions and rational thought is the key to our behaviours. More importantly, it’s our ability to ‘feel’ that not only separates us from machine intelligence in a physical sense, but makes our ‘thinking’ inherently different. It’s also what makes us sentient.

Many people believe that emotion can be programmed into computers to aid them in decision-making as well. I find this an interesting idea and I’ve explored it in my own fiction. If a computer reacted with horror every time we were about to switch it off, would that make it sentient? Actually, I don’t think it would, but it would certainly be interesting to see how people reacted. My point is that artificially giving AI emotions won’t make them sentient.
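To make the distinction concrete, here is a deliberately trivial Python sketch (entirely my own invention) of what ‘programmed horror’ amounts to: a canned response triggered by a condition, with nothing felt anywhere.

```python
# A deliberately trivial 'emotional' machine: the 'horror' is just a
# string selected by a rule. Nothing is felt; the emotion is simulated.

class Machine:
    def on_shutdown_signal(self):
        # A programmed reaction, indistinguishable in output from
        # distress, but there is no subject to whom it matters.
        return "No! Please don't switch me off!"

print(Machine().on_shutdown_signal())
```

However convincing the output might be made, the ‘fear’ here is just a lookup: the behaviour is duplicated without anything being felt, which is my point.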

I believe feelings came first in the evolution of sentience, not logic, and I still don’t believe that there’s anything analogous to ‘software’ in the brain, except language and that’s specific to humans. We are the only species that ‘downloads’ a language to the next generation, but that doesn’t mean our brains run on algorithms.

So the evidence in the animal kingdom, not just humans, suggests that sentience, and therefore consciousness, evolved from emotions, whereas computers have evolved from pure logic. Computers are still best at what we do worst, which is manipulating huge amounts of data – which is why the human genome project actually took less time than predicted. And we are still best at what they do worst, which is making decisions based on a host of parameters, including emotional factors as well as experiential ones.

Sunday 20 June 2010

What dreams are made of

Last week’s New Scientist (12 June 2010) had a very interesting article on dreams, in particular ‘lucid dreaming’, by Jessica Hamzelou. She references numerous people: Ursula Voss (University of Frankfurt), Patrick McNamara (Boston University), Allan Hobson (Harvard Medical School), Eric Nofzinger (University of Pittsburgh), Victor Spoormaker (Utrecht University) and Michael Czisch (Max Planck Institute); so it’s a topic being taken seriously all over the world.

Ursula Voss argues that there are 2 states of consciousness, which she calls primary and secondary. ‘Primary’ being what most animals perceive: raw sensations and emotions; whereas ‘secondary’ is unique to humans, according to Voss, because only humans are “aware of being aware”. This in itself is an interesting premise.

I don’t agree with the well-known supposition that most animals don’t have a sense of ‘self’ because they don’t recognise themselves in a mirror. Even New Scientist reported on challenges to this view many years ago (before I started blogging). The lack of recognition of one’s own reflection is obviously a cognitive misperception, but it doesn’t axiomatically mean that the animal doesn’t have a sense of its own individuality relative to other members of its own species, which is how I would define a sense of self. In other words, a sense of self is the ability to differentiate one’s self from others. The fact that it mistakenly perceives its own reflection as an ‘other’, doesn’t imply the converse: that it can’t distinguish its self from a genuine other – in fact, if anything, it confirms that cognitive ability, albeit erroneously.

That’s a slight detour from the main topic; nevertheless it’s relevant, because I believe it’s not what Voss is referring to, which is our ability ‘to reflect upon ourselves and our feelings’. It’s hard to imagine that any animal can contemplate its own thoughts the way we do. What makes us unique, cognitively, is our ability to create concepts within concepts ad infinitum, which is why I can write an essay like this, but no other primate can. I always thought this was my own personal philosophical insight until I read Godel, Escher, Bach and realised that Douglas Hofstadter had reached it many years before. And, as Hofstadter would point out, it’s this very ability, which allows us to look at ourselves almost objectively, just as we do others, that we call self-contemplation. If this is what Voss is referring to when she talks about ‘secondary consciousness’, then I would probably agree with her premise.

So what has this to do with dreams? Well, one of the aspects of dreams that distinguishes them from reality is that they defy rational expectations, yet we seem totally accepting of this. Voss contends that it’s because we lose our ‘secondary’ consciousness during dreaming that we lose our rational radar, so to speak (my turn of phrase, not hers).

The article argues that with lucid dreaming we can get our secondary consciousness back, and there is some neurological evidence to support this conjecture, but I’m getting ahead of myself. For those who haven’t come across the term before, lucid dreaming is the ability to take conscious control of one’s dream. In effect, one becomes aware that one is dreaming. Hamzelou even provides a 5-step procedure to induce lucid dreams.

Now, from personal experience, any time I’ve realised I’m dreaming, it has immediately popped me out of the dream. Nevertheless, I believe I’ve experienced lucid dreaming, or at least a form of it. According to Patrick McNamara (Boston University), our dream life goes downhill as we age, especially once we’ve passed adolescence. Well, I have a very rich dream life, virtually every night, but then I’ve learnt, from anecdotal evidence at least, that storytellers seem to dream more, or recall their dreams more, than other people do. I’d be interested to know if there is any hard evidence to support this.

Certainly, storytellers understand the connection between story and dreaming, because, like stories, dreams put us in situations that we don’t face every day. In fact, it has been argued that dreams’ evolutionary purpose was to remind us that the world can be a dangerous place. But I’m getting off the track again, because, as a storyteller, I believe that my stories come from the same place that my dreams do. In other words, in my dreams I meet all sorts of characters that I would never meet in real life, and have experiences that I would never have in real life. But I’ve long been aware that there are 2 parts to my dreams: one part being generated by some unknown source and the other part being my subjective experience of it. In the dream, I behave as a conscious being, just as I would in the real world, and I wonder if this is what is meant by lucid dreaming. Likewise, when one is writing a story, there is often a sense that it comes from an unknown source, and you consciously inhabit the character who is experiencing it. Which is exactly what actors do, by the way, only the dream they are inhabiting is a movie set or a stage.

Neurological studies have shown that there is one area of the brain that shuts down during REM (Rapid Eye Movement) sleep, which is the signature behavioural symptom of dreaming. The dorsolateral prefrontal cortex (DLPFC) was ‘remarkably subdued during REM sleep, compared with during wakefulness.’ Allan Hobson (Harvard) believes that this is our rationality filter (again, my term, not his) because its inactivity correlates with our acceptance of completely irrational and dislocated events. Neurological studies of lucid dreams have been difficult to capture, but one intriguing finding has been an increase in a specific brainwave at 40 hertz in the frontal regions. In fact, the neurological studies done so far point to brain activity somewhere in between normal REM sleep and full wakefulness. The studies aren’t sensitive enough to determine whether the DLPFC plays a role in lucid dreams or not, but the 40 hertz brainwave is certainly more characteristic of wakefulness.

To me, dreams are what-if scenarios, and opportunities to gain self-knowledge. I’ve long believed that one can learn from one’s dreams, not in a Jungian or Freudian sense, but more pragmatically. I’ve always believed that the way I behave in a dream simulates the way I would behave in real life. If I behave in a way that I’m not comfortable with, it makes me contemplate ways of self-improvement. Dreams allow us to face situations that we might not want to confront in reality. It’s our ability for self-reflection, which Voss calls secondary consciousness, that makes dreams valuable tools for self-knowledge. Stories often serve the same purpose. A story that really impacts on us is usually one that confronts issues relevant to our lives, or makes us aware of issues we prefer to ignore. In this sense, both dreams and stories can be a good antidote for denial.

Saturday 3 April 2010

Consciousness explained (well, almost, sort of)

As anyone who has followed this blog for any length of time knows, I’ve touched on this subject a number of times. It deals with so many issues, including the possibilities inherent in AI and the subject of free will (the latter being one of my earliest posts).

Just to clarify one point: I haven’t read Daniel C. Dennett’s book of the same name. Paul Davies once gave him the very generous accolade of referencing it as one of the 4 most influential books he’s read (in company with Douglas Hofstadter’s Godel, Escher, Bach). He said: “[It] may not live up to its claim… it definitely set the agenda for how we should think about thinking.” Then, in parenthesis, he quipped: “Some people say Dennett explained consciousness away.”

In an interview in Philosophy Now (early last year) Dennett echoed David Chalmers’ famous quote that “a thermostat thinks: it thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.” And I don’t think Dennett was talking metaphorically. This, by itself, doesn’t imbue a thermostat with consciousness, if one argues that most of our ‘thinking’ happens subconsciously.

I recently had a discussion with Larry Niven on his blog, on this very topic, where we to-and-fro’d over the merits of John Searle’s book, Mind. Needless to say, Larry and I have different, though mutually respectful, views on this subject.

In reference to Mind, Searle addresses that very quote by Chalmers by saying: “Consciousness is not spread out like jam on a piece of bread…” However, if one believes that consciousness is an ‘emergent’ property, it may very well be ‘spread out like jam on a piece of bread’, and evidence suggests, in fact, that this may well be the case.

This brings me to the reason for writing this post: New Scientist, 20 March 2010, pp.39-41; an article entitled Brain Chat by Anil Ananthaswamy (consulting editor). The article refers to a theory proposed originally by Bernard Baars of The Neuroscience Institute in San Diego, California. In essence, Baars differentiated between ‘local’ brain activity and ‘global’ brain activity, since dubbed the ‘global workspace’ theory of consciousness.

According to the article, this has now been demonstrated by experiment, the details of which, I won’t go into. Essentially, it has been demonstrated that when a person thinks of something subconsciously, it is local in the brain, but when it becomes conscious it becomes more global: ‘…signals are broadcast to an assembly of neurons distributed across many different regions of the brain.’

One of the benefits of this mechanism is that it effectively filters out anything that’s irrelevant. What becomes conscious is what the brain considers important. What criterion the brain uses to determine this is not discussed. So this is not the explanation that people really want – it’s merely a postulated neuronal mechanism that correlates with consciousness as we experience it. Another benefit of this theory is that it explains why we can’t consider 2 conflicting images at once. Everyone has seen the duck/rabbit combination, and there are numerous other examples. Try listening to a Bach contrapuntal fugue so that you hear both melodies at once – you can’t. The brain mechanism (as proposed above) says that only one of these can go global, not both. It doesn’t explain, of course, how we manage to consciously ‘switch’ from one to the other.
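Purely as a software caricature (the theory is neurological; this sketch, its names and its numbers are all my own invention), the winner-takes-all character of the global workspace can be expressed in a few lines of Python: many local processes compete, and only the strongest gets broadcast, which is why the duck and the rabbit can’t both go global at once.

```python
# Caricature of Baars' global workspace: local processes compete,
# and only the single strongest signal is broadcast brain-wide.
# Purely illustrative; all names and numbers are invented.

local_processes = {
    "duck interpretation":   0.62,
    "rabbit interpretation": 0.58,
    "itchy foot":            0.20,
}

THRESHOLD = 0.5  # minimum salience to reach the workspace

def broadcast(processes):
    """Select the strongest above-threshold signal and 'broadcast' it
    to every module; everything else stays local (subconscious)."""
    winner, strength = max(processes.items(), key=lambda kv: kv[1])
    if strength < THRESHOLD:
        return None  # nothing reaches consciousness
    return winner    # one global winner; rivals are suppressed

print(broadcast(local_processes))  # -> 'duck interpretation'
# The rabbit reading is only 0.04 behind, but it stays subconscious:
# only one interpretation occupies the workspace at a time.
```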

However, both the experimental evidence and the theory are consistent with something that we’ve known for a long time: a lot of our thinking happens subconsciously. Everyone has come across a puzzle that they can’t solve; then they walk away from it, or sleep on it overnight, and the next time they look at it, the solution just jumps out at them. Professor Robert Winston demonstrated this once on TV, with himself as the guinea pig. He was trying to solve a visual puzzle (find an animal in a camouflaged background) and, when he had that ‘Ah-ha’ experience, it showed up as a spike on his brain waves – possibly the very signal of it going global, although I’m only speculating based on my new-found knowledge.

Mathematicians have this experience a lot, but so do artists. No artist knows where their art comes from. Writing a story, for me, is a lot like trying to solve a puzzle. Quite often, I have no better idea what’s going to happen than the reader does. As Woody Allen once said, it’s like you get to read it first. (Actually, he said it’s like you hear the joke first.) But his point is that all artists feel the creative act is like receiving something rather than creating it. So we all know that something is happening in the subconscious – a lot of our thinking happens where we’re unaware of it.

As I alluded to in my introduction, there are 2 issues that are closely related to consciousness, which are AI and free will. I’ve said enough about AI in previous posts, so I won’t digress, except to restate my position that I think AI will never exhibit consciousness. I also concede that one day someone may prove me wrong. It’s one aspect of consciousness that I believe will be resolved one day, one way or the other.

One rarely sees a discussion on consciousness that includes free will (Searle’s aforementioned book, Mind, is an exception, and he devotes an entire chapter to it). Science seems to have an aversion to free will (refer my post, Sep.07) which is perfectly understandable. Behaviours can only be explained by genes or environment or the interaction of the two – free will is a loose cannon and explains nothing. So for many scientists, and philosophers, free will is seen as a nice-to-have illusion.

Consciousness evolved, but if most of our thinking is subconscious, it raises the question: why? As I expounded on Larry’s blog, I believe that one day we will have AI that will ‘learn’; what Penrose calls ‘bottom-up’ AI. Some people might argue that we require consciousness for learning, but insects demonstrate learning capabilities, albeit rudimentary compared to what we achieve. Insects may have consciousness, by the way, but learning can be achieved by reinforcement and punishment – we’ve seen it demonstrated in animals at all levels – and they don’t have to be conscious of what they’re doing in order to learn.

So the only evolutionary reason I can see for consciousness is free will, and I’m not confining this to the human species. If, as science likes to claim, we don’t need, or indeed don’t have, free will, then arguably, we don’t need consciousness either.

To demonstrate what I mean, I will relate 2 stories of people reacting in an aggressive manner in a hostile situation, even though they were unconscious.

One case was within the last 10 years, in Sydney, Australia (from memory), when a female security guard was knocked unconscious and her bag (of money) was taken from her. In front of witnesses, she got up, walked over to the guy (who was now in his car), pulled out her gun and shot him dead. She had no recollection of doing it. Now, you may say that’s a good defence, but I know of at least one other similar incident.

My father was a boxer, and when he first told me this story, I didn’t believe him, until I heard of other cases. He was knocked out, and when he came to, he was standing and the other guy was on the deck. He had to ask his second what happened. He gave up boxing after that, by the way.

The point is that both of those cases illustrate that humans can perform complicated acts of self-defence without being consciously cognisant of it. The question is: why is this the exception and not the norm?


Addendum: Nicholas Humphrey, whom I have possibly incorrectly criticised in the past, has an interesting evolutionary explanation: consciousness allows us to read others’ minds. Previously, I thought he authored an article in SEED magazine (2008) that argued that consciousness is an illusion, but I can only conclude that it must have been someone else. Humphrey discovered ‘blindsight’ in a monkey (called Helen) with a surgically removed visual cortex, which is an example of a subconscious phenomenon (sight) with no conscious correlation. (This phenomenon has since been found in humans with damaged visual cortices as well.)


Addendum 2: I have since written a post called Consciousness unexplained in Dec. 2011 for those interested.

Saturday 20 June 2009

Subjectivity: The Mind’s I (Part I)

The title of this post is a direct steal from Douglas R. Hofstadter and Daniel C. Dennett. The Mind’s I is the title of a book they published in 1981, a collection of essays by various authors with the subtitle: Fantasies and Reflections on Self and Soul. I’ve added the prefix because subjectivity is a recurring theme, at least in Part I.

After each essay they give a little commentary, but it’s the essays themselves that stimulated me. I’ve already written a post on one: Is God a Taoist? by Raymond M. Smullyan (refer Socrates, Russell, Sartre, God and Taoism in May 09).

So I will provide here my most significant impressions, or resultant thoughts, that just 3 of these essays have provoked. These are just from Part I of the book (there are 6 Parts) so I may well continue this discussion in a later post.

Borges and I by Jorge Luis Borges is an essay where Borges attempts to discriminate between his subjective and objective self in an accessible and entertaining way. It highlights the point made by John Searle in his book, Mind, that what distinguishes consciousness from other phenomena that we try to investigate and understand is that it has a distinctly subjective element that can neither be ignored nor isolated – it defies objectification by its nature.

The Dalai Lama makes a similar point in his book on science and religion, The Universe in a Single Atom, where he contends that neurological investigations into consciousness, whilst extremely edifying and illuminating, are really not the whole story without taking subjective experience into account.

The essay also explores, in an indirect way, the difference between the way we perceive ourselves and the way others do. I've always maintained that the most psychologically healthy relationships (work, family or friendship) are where these 2 perceptions closely align.

In the next essay, extracts from D. E. Harding's On Having No Head, Harding starts with an epiphany he had whilst looking at the Himalayas:

‘Past and future dropped away. I forgot who and what I was, my name, my manhood, animalhood, all that could be called mine. It was as if I had been born that instant, brand new, mindless, innocent of all memories. There existed only the Now, that present moment and what was clearly given in it.’

This, in itself, is an interesting revelation, coming from a man who makes no claim to mysticism. It epitomises subjective experience inasmuch as it cannot be shared with another. It’s like someone who views the world in colour trying to explain it to a population of people who only see shades of grey.

Harding then goes on to describe a world in which his head doesn’t exist for him, though he acknowledges that heads exist for other people – a form of solipsism. What I find significant is that he is highlighting what I call the inner and outer world that we all have, which is central to my own philosophy. The metaphor of ‘having no head’, which he talks about ‘literally’ (even a mirror image is a hallucination), is the void that exists in one’s mind, empty of everything except one’s thoughts. We have senses, yes, of which sight is the most dominating, but, as he points out, there is no screen that we view; it is simply ‘I’ looking out – the inner world’s most tangible connection to the outer world.

In other posts (specifically, Artificial Intelligence and Consciousness, Feb. 09) I argue that AI will never have this subjective sense that we have. So whilst machines can, and will, be built to ‘sense’ their environments, my contention is that they won’t ‘experience’ them the way we do. Most philosophers and scientists (including Dennett and Hofstadter) disagree with me, but both Borges’s and Harding’s essays merely underline this distinction for me.

Rediscovering the Mind by Harold J. Morowitz takes a different tack altogether. Morowitz, a biophysicist, tackles both the biologist and the physicist, who take a reductionist view of the world, whereby they presume they can explain macro-phenomena via investigation of micro-phenomena. Central to Morowitz’s thesis is an epistemological loop created by the accepted interpretation of quantum mechanics: that it requires macro intervention by a conscious mind to produce a measurable result. He quotes Nobel laureate Eugene Wigner: “It was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to the consciousness.” Because the biological reductionist reduces mind to neurons, thus molecules, thus quantum phenomena, Morowitz argues that we have a quantum mechanical epistemological loop from mind to quantum phenomena to mind.

The best analogy for a superposition of states is one of those pictures that have 2 images intertwined, like the famous duck and rabbit combination that Wittgenstein once referred to; there is even a Dali painting that uses the device. The most effective ones are those utilising 2 contrasting tones, where the shadow reveals one image and the light reveals another. The point is that your mind can only perceive one image or the other, but not both at the same time, and you can even ‘switch’ between them. Well, quantum superposition is a bit like that (especially the famous Schrodinger’s Cat thought experiment), except that once you make the ‘measurement’ or the ‘observation’ you can’t switch back.
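In the standard textbook notation (my paraphrase, not Morowitz’s), a two-state superposition and its collapse look like this:

```latex
% A two-state superposition, with the duck/rabbit picture standing in
% for the two basis states, and |a|^2 + |b|^2 = 1:
\[
  |\psi\rangle \;=\; a\,|\text{duck}\rangle + b\,|\text{rabbit}\rangle ,
  \qquad |a|^2 + |b|^2 = 1 .
\]
% Measurement returns 'duck' with probability |a|^2 or 'rabbit' with
% probability |b|^2, and the state collapses to that outcome -- unlike
% the optical illusion, there is no switching back afterwards.
```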

Hofstadter tackles this conundrum in his ‘Reflections’ on Morowitz’s essay by pointing out that the mysteries of consciousness and the mysteries of quantum physics are not the same. On this I would agree, but he hasn’t eliminated the conundrum or the epistemological loop. Hofstadter then explains the quantum superposition of states, culminating in a description of Schrodinger’s (simultaneously live and dead cat) thought experiment, and a discussion of Hugh Everett III’s ‘many worlds interpretation’, which he describes as ‘this very bizarre theory’.

In fact, Hofstadter gives the best dismantling of Everett’s hypothesis that I’ve read, pointing out that there is a specific ‘subjective’ world that is the one you continue on in, that effectively eliminates all the other worlds. To quote Hofstadter: ‘The problem of how it feels subjectively is not treated; it is just swept under the rug.’ (Hofstadter’s italics)

I find it interesting that Hofstadter evokes ‘subjectivity’ to eliminate, in one stroke, Everett’s contentious interpretation. Having said that, Hofstadter expands on his theme, revealing, in prose I won’t attempt to replicate, how personal identity becomes meaningless in an ever bifurcating universe for each individual occupant.

But getting back to Morowitz, one of the salient points he makes is that the evolution of the universe is a series of discontinuities, starting with the Big Bang itself. A major jump in time, and the emergence of life is another discontinuity, followed by the emergence of consciousness. Morowitz even argues that humanity’s ability for inner reflection is another discontinuity again, though I’m sure many would contest this last hypothesis without necessarily contesting the previous ones.

But, also, one wonders if there is not a discontinuity between the quantum world and the so-called classical world, the organic and the inorganic, the sentient and the non-sentient. I think he has a point: looked at from that perspective, ignoring the context of evolutionary time, these are discontinuities that our reductionist philosophy, so prized by science in general, tends to ignore or brush aside.

I expect I will return to this subject in a later post.

Saturday 14 February 2009

Godel, Escher, Bach - Douglas Hofstadter's seminal tome

The original title of this post was Artificial Intelligence and Consciousness.

This is perhaps the hardest of subjects to tackle. I’ve just finished reading Douglas R. Hofstadter’s book, Godel, Escher, Bach: an Eternal Golden Braid, which attempts to address this very issue, even if in a rather unusual way.

Earlier in the same year (last year) I read Roger Penrose’s book, Shadows of the Mind, which addresses exactly the same issue. What is interesting is that, in both cases, the authors use Godel’s Incompleteness Theorem to support completely different, one could say, opposing, philosophical viewpoints. Both Penrose and Hofstadter are intellectual giants compared to me, but what I find interesting is that both apparently start with their philosophical viewpoints and then find arguments to support them, rather than the other way round. Hofstadter quotes, more than once, the Oxford philosopher, J.R. Lucas, whom he obviously respects, but philosophically disagrees with. Likewise, I found myself often in agreement with Hofstadter on many of his finer points, but still in disagreement with his overall thesis. I think it’s obvious from other posts on this blog, that I am much closer to Penrose’s philosophy in many respects, not just on AI.

Having said all that, this is a very complex and difficult subject, and I’m not at all sure I can do it justice. What goes hand in hand with the subject of AI, and Hofstadter doesn’t shy away from this, is the notion of consciousness. Can AI ever be conscious in the way we are? Hofstadter says yes, and Penrose, I believe, would say no. (Penrose effectively argues that algorithm-using machines – computers - will never think like humans.) Another person who has much to say on this subject is John Searle, and he would almost certainly say no, based on his famous ‘Chinese Room’ thought experiment. (I expound on this in my Apr.08 post: The Ghost in the Machine).

Larry Niven, in one of his comments on his own blog, in response to one of my comments, made the observation that science hasn’t resolved the brain/mind conundrum, and gave it as an example of ‘…the impotence of scientific evidence to affect philosophical debates…’ (I’m sure that if I’ve misinterpreted him, or quoted him out of context, he’ll let me know.)

To throw a googly into the mix: since Hofstadter first published the book 30 years ago, a lot of work has been done in this area, and one of the truly interesting ideas is the Bayesian model of the brain, based on Bayesian probability and proposed by Karl Friston (New Scientist, 31 May 08). In a nutshell, Friston proposes that the brain functions on the same principle at all levels, which is to make an initial assessment and then modify it based on additional information. He claims this works at the level of the neuron as well as at the cognitive level. (I report on this in my July 08 post titled Epistemology; a discussion.) I even extrapolate this up the cognitive tree to include the scientific method, whereby we hypothesise, follow up with experimentation or observation, then modify the hypothesis accordingly.
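Friston’s principle is easy to illustrate with Bayes’ rule itself. Below is a minimal Python sketch (my own toy illustration, not Friston’s model; the scenario and all the probabilities are invented) of making an initial assessment and then modifying it as evidence arrives:

```python
# Toy Bayesian update: a 'brain' revising its belief that a shape in
# the grass is a snake, as new sensory evidence arrives.
# All numbers are invented for illustration.

def update(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule: posterior = likelihood * prior / evidence."""
    numerator = p_obs_if_true * prior
    evidence = numerator + p_obs_if_false * (1 - prior)
    return numerator / evidence

belief = 0.5  # initial assessment: 50/50 that it's a snake

# Each observation: (description, P(obs | snake), P(obs | not snake))
observations = [
    ("long thin shape", 0.8, 0.30),
    ("it moved",        0.7, 0.20),
    ("it hissed",       0.9, 0.05),
]

for label, p_true, p_false in observations:
    belief = update(belief, p_true, p_false)
    print(f"after '{label}': P(snake) = {belief:.3f}")
```

The same revise-on-evidence loop, applied at every level, is (as I read it) the gist of Friston’s proposal, and of the scientific-method analogy above.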

Hofstadter makes a similar point about ‘default options’ that we use in everyday observations, like the way we use stereotypes. It’s only by evaluating a specific case in more detail that we can break away from a stereotypic interpretation of an event. This is also an employment of the Bayesian principle, but Hofstadter doesn’t say this because it hadn’t been proposed at the time he wrote it.

What Searle points out in his excellent book, Mind, is that consciousness is an experience, which is so subjective that we really don’t know if anyone else experiences it the way we do – we only assume they do. Stephen Law writes about this in his book, The Philosophy Gym, and I challenged him (by snail mail at the time) that this was a conceit on his part, because he obviously expected that people who read his book, could think like him, which means they must be conscious. It was a good natured jibe, even though I’m not sure he saw it that way at the time, but he was generous in his reply.

Descartes’ famous statement, ‘I think therefore I am’, has been pilloried over the centuries since he wrote it, but I would contend that ‘I think’ is a tautology, because ‘I’ is your thoughts and nothing else. This gets to the heart of Hofstadter’s thesis, and he employs Godel’s Theorem in an unusual, analogous way to make this contention: we, individually, are all ‘strange loops’. By strange loop, Hofstadter means that we can effectively look at all the levels of our thinking except the ground level, which is our neurons. Consciousness, in Hofstadter’s model (for want of a better word), is the top level, and neurons are the hardware level. In between we have the software (symbols), which is effectively language, and which we can discuss and analyse in a dispassionate way, just as I’m doing now. I can talk about my own thoughts and ideas as if they weren’t mine at all.

I think language as software is a good metaphor but not necessarily a literal interpretation. Software means algorithms, which are effectively instructions. Whilst language obviously contains rules, I don’t see it as particularly algorithmic, though others, including Hofstadter, may disagree. On the other hand, I do see DNA as algorithmic in the way it creates organisms, and Hofstadter makes the same leap of interpretation.

The analogy with Godel’s Theorem is that, in any consistent formal mathematical system rich enough to contain arithmetic, there will always exist a true mathematical statement that can be expressed in the system but cannot be proved within it, if I’ve got it right. In other words, there will always exist a ‘correct’ mathematical statement that is not derivable from the original formal system, which is why it is called the Incompleteness Theorem – no formal mathematical system can ever be complete, in the sense of proving all true mathematical statements. In this analogy, the self or ‘I’ is like a Godelian entity that is a product of the system but not contained in it. Again, my interpretation may not be what Hofstadter intended, but it’s the best I can make of it. It exists at another level, I think is what Hofstadter would say.
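For the record, here is the standard modern paraphrase of the theorem (my wording, from the textbook formulation, not Hofstadter’s):

```latex
% First Incompleteness Theorem, standard modern paraphrase:
\textbf{Theorem (Godel, 1931).}
\textit{Let } F \textit{ be a consistent, effectively axiomatised
formal system containing elementary arithmetic. Then there is a
sentence } G_F \textit{ such that}
\[
  F \nvdash G_F
  \quad\text{and}\quad
  F \nvdash \neg G_F ,
\]
\textit{yet } G_F \textit{ is true in the standard model of arithmetic.}
% G_F is constructed, via self-reference, to assert in effect
% "I am not provable in F" -- the 'strange loop' Hofstadter exploits.
```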

In another part of the book, Hofstadter makes a direct ‘mapping’ which he calls a ‘dogmap’ (play on words for dogma) where he compares DOGMA I ‘Molecular Biology’ with DOGMA II ‘Mathematical Logic’, using Godel’s Theorem ‘self-referencing’ as directly comparable to DNA/RNA’s ‘self reproduction’. He admits this is an analogy but later acknowledges that the same mapping may be possible from Godel's Theorem to consciousness.

Even without this allusion by Hofstadter, and no Godelian analogy required, I see a direct comparison between the way DNA/RNA creates complex organisms and the way neurons create thoughts. In both cases there is a gulf of layers in between that makes one wonder how they could have evolved. Of course, this is grist for ID advocates and I’ve even come across a blogger (Sophie) who quotes Hofstadter to make this very point.

In one of my earliest posts on this blog (The Universe’s Interpreters, Sep. 07) I make the point that the universe consists of worlds within worlds, and the reason we can comprehend it to the extent that we do, is because we can conjure concepts within concepts ad infinitum. Hofstadter makes a similar point, though not in the same words, but at least 2 decades before I thought of it.

DNA/RNA exists at a level far removed from the end result, which is a living complex organism, yet there is a direct causal relationship. Neurons are cells that exist at a level far removed from the end result, which is consciousness, yet there is a direct causal relationship.

These 2 cases, DNA to complex organisms and neurons to consciousness, I think remain the 2 greatest mysteries of the natural world. To say that they can only be explained by invoking a ‘Designer’ (God) is to say we’ve uncovered everything we know about the universe at all of its levels of complexity, and only God can explain everything else. I would call this the defeatist position, if it were to be taken seriously. But, in effect, the ID advocates are saying that whilst any mysteries remain in our comprehension of the universe, there will always be a role for God. Once we find an explanation for these mysteries, there will be other mysteries, perhaps at other levels, that we can still employ God to explain. So the argument will never stop. Before Newton it was the orbits of the planets, and before Mendel it was the passing down of genetic traits. Now it is the origin of DNA. The mysteries may get deeper, but past experience says that we will find an answer and the answer won’t be God (see my Dec. 08 post: The God hypothesis; not).

As a caveat to the above argument, I've said elsewhere (Emergent phenomena, Oct. 08) that we may never understand consciousness as a direct mathematical relationship to neuron activity (although Penrose pins his hopes on quantum phenomena). And I'm unsure that we will ever be able to explain how it becomes an experience, and that's one of the reasons I'm sceptical that AI will ever have that experience. But this lack of understanding is not evidence of God; it's just evidence of our lack of understanding.

To quote Confucius: 'To realise that you know something when you do, and to realise that you do not know when you do not, this is knowing.' Or to quote his near contemporary, Socrates, who put it more succinctly: 'The height of wisdom is to know how thoroughly ignorant we are.'

My personal hypothesis, completely speculative with no scientific evidence at all, is that maybe there is a feedback mechanism that goes from the top level to the bottom level that we’ve yet to discover. Both are mysteries that most people don’t contemplate, and it took Hofstadter’s book, written over 3 decades ago, to bring them fully home to me, and to make me appreciate how analogous they are: the base level causally affects the top level, yet the complexity of one level seems independent of the complexity of the other – there is no obvious 1-to-1 correlation. (Examples: it can take a combination of genes to express a single trait; there is no specific ‘home’ in the brain for specific memories.)

I guess it’s this specific revelation that I personally take from Hofstadter’s book, but I really can’t do it justice. It is one of the best books I’ve read, even though I don’t agree with his overall thesis: machines will eventually think like humans, therefore they will have consciousness.

In my one and only published novel, ELVENE, there is an AI entity, Alfa, who plays an important role in the story. I was very careful in my construction of Alfa to demonstrate that he didn’t think like humans (yes, I gave him a gender and that’s explained) but that he was nevertheless extremely intelligent and able to converse with humans with cognitive ease. But I don’t believe Alfa was conscious albeit he may have given that impression (this is fiction, remember). I agree with Searle, in that simulated intelligence at a very high level will be achievable, but it will remain a simulation. AI uses algorithms and brains don’t – on this, I agree with Penrose. On the other hand, Hofstadter argues that we use rule-based software in the form of ‘symbols’, which we call language. I’m sure whoever reads this will have their own opinions.


Addendum 1: I've just read (today, 21 Feb. 09) an article in Scientific American (January 2009) that tackles the subject: From Atoms to Traits. It points out that there is good correlation between genes and traits, and expounds on the latest knowledge in this area. In particular, it gives a good account (by example) of how random changes 'feed' the natural selection 'engine' of evolution. I admit that there is still much to be learned but, if you follow this topic at all, you will know that discoveries and insights are being made all the time. The mystery of how genes evolved, as opposed to the organisms that they create, is still unsolved in my view. Martin A. Nowak, a Harvard University mathematician and biologist profiled in Scientific American (October 2008), believes the answer may lie in mathematics: can mathematics solve the origin of life? The idea was hypothesised by Gregory J. Chaitin in his book, Thinking about Godel and Turing, which I review in my Jan. 08 post: Is mathematics evidence of a transcendental realm?

Addendum 2: I changed the title to more accurately reflect the content of the post.

Saturday 18 October 2008

Emergent phenomena

A couple of weeks ago in New Scientist (4 October 2008), there was one of those lesser-featured articles that you could skip over if you were not alert enough, which, to my surprise, both captured and elaborated on an aspect of the natural world that has long fascinated me. It was titled, ‘Why nature is not the sum of its parts’.

It referenced an idea, or property of nature, apparently first proposed by the physicist Philip Anderson in 1972, called ‘emergence’. To quote: ‘the notion that important kinds of organisation might emerge in systems of many interacting parts, but not follow in any way from the properties of those parts.’ As the author of the article, Mark Buchanan, points out, this has implications for science, which is reductionist by methodology, in that it may be impossible to reduce all phenomena to a set of known laws, as many scientists, and even laypeople, seem to believe.

The article specifically discusses the work of Mile Gu at the University of Queensland in Brisbane, Australia, who believes he may have proved Anderson correct by demonstrating that mathematical modelling of the magnetic forces in iron could not predict the pattern of atoms in a 3D lattice as one might expect. In other words, there should be a causal link between the individual atoms and the overall effect, but it could not be determined mathematically. To quote Gu: “We were able to find a number of properties that were simply decoupled from the fundamental interactions.” Or, as Buchanan puts it: ‘This result, says Gu, shows that some of the models scientists use to simulate physical systems have properties that cannot be linked to the behaviour of their parts.’
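For readers who like something concrete, here is a minimal sketch of the kind of lattice model involved – a 2D Ising-style simulation in Python. To be clear, this is my own toy illustration of local rules producing a global property, not Gu’s actual construction or proof:

    import numpy as np

    # Spins on a 20 x 20 lattice interact only with their 4 nearest
    # neighbours (Metropolis updates), yet a global property emerges.
    rng = np.random.default_rng(42)
    N, J, T, steps = 20, 1.0, 1.5, 200_000   # T below the critical temperature
    spins = rng.choice([-1, 1], size=(N, N))

    for _ in range(steps):
        i, j = rng.integers(N, size=2)
        # Energy change from flipping one spin, from its 4 neighbours only.
        neighbours = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j] +
                      spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
        dE = 2 * J * spins[i, j] * neighbours
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

    # A macroscopic magnetisation appears that no single spin 'contains'.
    print('net magnetisation per spin:', spins.mean())

Each spin only ever ‘sees’ its 4 neighbours, yet below the critical temperature the whole lattice acquires a net magnetisation – a property of the system, not of any part.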

Now, obviously, I’ve simplified the exposition from an already simplified exposition and, of course, others, like John Barrow from Cambridge University, challenge it as a definitive ‘proof’. But no one would challenge its implication if it were true: that the physics at one level of nature may be mathematically independent of the physics at another level – which is what we already find, and which I’ve commented on in previous posts (see The Universe’s Interpreters, Sep. 07).

This is not dissimilar to arguments produced in some detail by Roger Penrose in Shadows of the Mind, concerning the limitations of formal mathematical reasoning. According to Penrose, there are mathematical ‘truths’ that may be ‘uncomputable’, which is a direct consequence of Godel’s ‘Incompleteness Theorem’ (refer my post, Is mathematics evidence of a transcendental realm? Jan.08). But Penrose’s book deals specifically with the enigma of consciousness, and this is where I believe Anderson and Gu’s ideas have particular relevance.

I would argue, as do many others (Paul Davies for one) that consciousness is an ‘emergent’ phenomenon. If science is purely reductionist in its methodology, as well as its philosophy, then arguably, consciousness will remain a mystery that can never be solved. Most scientists dispute this, including Penrose, but if Anderson and Gu are correct, then the ‘emergent’ aspect of consciousness, as opposed to its neurological underpinnings, may never be properly understood, or be reducible to fundamental laws of physics as most hope it to be.

Friday 11 April 2008

The Ghost in the Machine

One of my favourite Sci-Fi movies, amongst a number of favourites, is the Japanese anime, Ghost in the Shell, by Mamoru Oshii. Made in 1995, it’s a cult classic and appears to have influenced a number of sci-fi storytellers, particularly James Cameron (Dark Angel series) and the Wachowski brothers (Matrix trilogy). It also had a more subtle impact on a lesser known storyteller, Paul P. Mealing (Elvene). I liked it because it was not only an action thriller, but it had the occasional philosophical soliloquy by its heroine concerning what it means to be human (she's a cyborg). It had the perfect recipe for sci-fi, according to my own criterion: a large dose of escapism with a pinch of food-for-thought. 

But it also encapsulated two aspects of modern Japanese culture that are contradictory by Western standards: a fascination with robots, and a historical religious belief in a transmigratory soul – hence the title, Ghost in the Shell. In Western philosophy, this latter belief is synonymous with dualism, famously formulated by Rene Descartes and equally famously critiqued by Gilbert Ryle. Ryle was contemptuous of what he called ‘the dogma of the ghost in the machine’, arguing that it was a category mistake. He gave the analogy of someone visiting a university and being shown all the buildings: the library, the lecture theatres, the admin building and so on. Then the visitor asks, ‘Yes, that’s all very well, but where is the university?’ According to Ryle, the mind is not an independent entity or organ in the body, but an attribute of the entire organism. I will return to Ryle’s argument later.

In contemporary philosophy, dualism is considered a non sequitur: there is no place for the soul in science, nor in ontology apparently. And, in keeping with this philosophical premise, a large number of people believe it is only a matter of time before we create a machine intelligence with far greater capabilities than humans – no ghost required, if you will excuse the cinematic reference. Now, we already have machines that can do many things far better than we can, but we still hold the upper hand in most common-sense situations. The biggest challenge will come from so-called ‘bottom-up’ AI (Artificial Intelligence): self-learning machines, computers, robots, whatever. But, most interesting of all, is a project currently in progress called the ‘Blue Brain’, run by Henry Markram in Lausanne, Switzerland. Markram’s stated goal is to eventually create a virtual brain that will be able to simulate everything a human brain can do, including consciousness. He believes this will be achieved in 10 years’ time or less (others say 30). According to him, it’s only a question of grunt: raw processing power. (Reference: feature article in the American science magazine, SEED, 14, 2008)

For many people who work in the field of AI, this is philosophically achievable. I choose my words carefully here, because I believe it is the philosophy that is dictating the goal and not the science. This is an area where the science is still unclear, if not unknown. Many people will tell you that consciousness is one of the last frontiers of science. For some, it is one of 3 remaining problems to be solved by science, the other 2 being the origin of the universe and the origin of life. They forget to mention the reconciliation of relativity theory with quantum mechanics, as if it’s destined to be a mere footnote in the encyclopaedia of complete human knowledge.

There are, of course, other philosophical points of view, and two well-known ones are expressed by John Searle and Roger Penrose respectively. Searle is most famously known for his ‘Chinese Room’ thought experiment, in which someone sits in an enclosed room receiving questions, written in Chinese, through an ‘in box’ and, by following specific instructions (in English, in Searle’s case), provides answers in Chinese that they issue through an ‘out box’. The point is that the person behaves just like a processor and has no knowledge of Chinese at all. In fact, this is a perfect description of a ‘Turing machine’ (see my post, Is mathematics evidence of a transcendental realm?), only instead of a tape going through a machine you have a person performing the instructions in lieu of the machine.
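To show just how mindless such rule-following is, here is a toy Turing machine in Python – my own sketch, not anything from Searle or Turing. Like the person in the room, it blindly applies a table of instructions with no ‘understanding’ of what the symbols mean:

    # Rule table: (state, symbol) -> (new_symbol, move, new_state).
    # This particular machine just flips every bit on the tape, then halts.
    rules = {
        ('scan', '0'): ('1', +1, 'scan'),
        ('scan', '1'): ('0', +1, 'scan'),
        ('scan', ' '): (' ',  0, 'halt'),
    }

    def run(tape, state='scan', pos=0):
        tape = list(tape) + [' ']          # a blank cell marks the end
        while state != 'halt':
            symbol, move, state = rules[(state, tape[pos])]
            tape[pos] = symbol             # write, move, change state - that's all
            pos += move
        return ''.join(tape).strip()

    print(run('10110'))   # -> '01001'

The machine inverts its tape correctly every time, yet nowhere does it ‘know’ anything about binary – it only matches symbols against a table, exactly like Searle’s occupant of the room.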

The Chinese Room actually had a real-world counterpart: not many people know that, before we had computers, small armies of people (usually women) were employed to perform specific but numerous computations for a particular project, with no knowledge of how their individual input fitted into the overall execution of said project. Such a group was employed at Bletchley Park, where Turing worked, during WWII on the decoding of Enigma transmissions. These people were called ‘computers’ and Turing was instrumental in streamlining their analysis. However, according to Turing’s biographer, Andrew Hodges, Turing did not develop an electronic computer at Bletchley Park, as some people believe, and he did not invent the Colossus, or Colossi, that were used to break another German code, the Lorenz, ‘...but [Turing] had input into their purpose, and saw at first-hand their triumph.’ (Hodges, 1997).

Penrose has written 3 books that I’m aware of addressing the question of AI (The Emperor’s New Mind, Shadows of the Mind, and The Large, the Small and the Human Mind), and Turing’s work is always central to his thesis. In the last book listed, Penrose invites others to expound alternative views: Abner Shimony, Nancy Cartwright and Stephen Hawking. Of the three, Shadows of the Mind is the most intellectually challenging, because he is so determined not to be misunderstood. I have to say that Penrose always comes across as an intellect of great ability but also great humility – he rarely, if ever, shows signs of hubris. For this reason alone, I always consider his arguments with great respect, even when I disagree with his thesis. To quote the I Ching: ‘he possesses as if he possessed nothing.’

Penrose’s predominant thesis, based on Godel’s and Turing’s proofs (which I discuss in more detail in my post, Is mathematics evidence of a transcendental realm?), is that the human mind, or any mind for that matter, cannot possibly run on algorithms, which are the basis of all Turing machines; so the human mind, Penrose concludes, is not a Turing machine. More importantly, in anticipation of a further development of this argument, algorithms are synonymous with software, and the original conceptual Turing machine, which Turing formulated in his ‘halting problem’ proof, is really about software. The Universal Turing machine is software that can duplicate all other Turing machines, given the correct instructions – which is exactly what software is.
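The point about the Universal Turing machine can be made concrete with one small change to the toy machine sketched earlier: pass the rule table in as data rather than hard-coding it. Again, this is just my own illustrative sketch:

    def run_universal(rules, tape, state='scan', pos=0):
        # The same mindless loop as before, but the rule table is now an
        # input: one function that duplicates any machine you feed it.
        tape = list(tape) + [' ']
        while state != 'halt':
            symbol, move, state = rules[(state, tape[pos])]
            tape[pos] = symbol
            pos += move
        return ''.join(tape).strip()

    # The bit-flipper from the earlier sketch, now supplied as data:
    flipper = {
        ('scan', '0'): ('1', +1, 'scan'),
        ('scan', '1'): ('0', +1, 'scan'),
        ('scan', ' '): (' ',  0, 'halt'),
    }
    print(run_universal(flipper, '10110'))   # -> '01001'

Feed it a different rule table and the same function becomes a different machine. The machine is the data – which is the essence of software.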

To return to Ryle, he has a pertinent point in regard to his analogy of the university and the mind, which I referred to earlier; it’s to do with a generic phenomenon observed throughout many levels of nature, which we call ‘emergence’. The mind is an emergent property, or attribute, that arises from the activity of a large number of neurons (tens of billions, with trillions of connections), in the same way that the human body is an emergent entity that arises from a similarly large number of cells. Some people even argue that classical physics is an emergent property that arises from quantum mechanics (see my post on The Laws of Nature). In fact, Penrose contends that these 2 mysteries may be related (he doesn't use the term emergent), and he proposes a view that the mind is the result of a quantum phenomenon in our neurons. I won’t relate his argument here, mainly because I don’t have Penrose's intellectual nous, but he expounds upon it in both Shadows of the Mind and The Large, the Small and the Human Mind; the second being far more accessible than the first.

The reason that Markram, and many others in the AI field, believe they can create an artificial consciousness is that, if it is an emergent property of neurons, then all they have to do is create artificial neurons and consciousness will follow. This is what Markram is doing, only his neurons are virtual. Markram has ‘mapped’ the neurons from a thin slice of a rat’s brain into a supercomputer, and when he ‘stimulates’ his virtual neurons with an electrical impulse it creates a pattern of ‘firing’ activity just like we would expect to find in a real brain. On the face of it, Markram seems well on his way to achieving his goal.
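To give a flavour of what ‘stimulating virtual neurons’ means in practice, here is a single leaky integrate-and-fire neuron in Python – a deliberately crude sketch of my own, orders of magnitude simpler than Markram’s detailed models, with made-up parameter values:

    # Units: time in ms, voltage in mV; all values invented for illustration.
    dt, tau, v_rest, v_thresh, v_reset = 0.1, 10.0, -65.0, -50.0, -70.0
    v, spikes = v_rest, []

    for step in range(1000):                      # 100 ms of simulated time
        t = step * dt
        current = 2.0 if 20 <= t <= 80 else 0.0   # inject current mid-run
        # Membrane potential leaks back toward rest, driven by the input
        # (current scaled by a nominal membrane resistance of 10).
        v += dt * (-(v - v_rest) + current * 10.0) / tau
        if v >= v_thresh:                         # threshold crossed: spike
            spikes.append(round(t, 1))
            v = v_reset                           # then reset and recover

    print('spike times (ms):', spikes)

Inject a current and a spiking pattern duly emerges; but, as the rest of this post argues, a firing pattern is not the same thing as an experience.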

But there are two significant differences between Markram’s model (if I understand it correctly) and the real thing. All attempts at AI, including Markram’s, require software, yet the human brain, or any other brain for that matter, appears to have no software at all. Some people might argue that language is our software and, from a strictly metaphorical perspective, that is correct. But we don’t seem to have any ‘operational’ software and, if we do, the brain must somehow create it itself. So, if we have a ‘software’, it’s self-generated by the neurons themselves. Perhaps this is what Markram expects to find in his virtual neuron brain, but his virtual neuron brain already is software (if I interpret the description given in SEED correctly).

I tend to agree with some of his critics, like Thomas Nagel (quoted in SEED), that Markram will end up with a very accurate model of a brain’s neurons, but he still won’t have a mind. ‘Blue Brain’, from what I can gather, is effectively a software model of the neurons of a small portion of a rat’s brain, running on 4 supercomputers comprising a total of 8,000 IBM microchips. And even if he can simulate the firing pattern of his neurons to duplicate the rat’s, I suspect it would take further software to turn that simulation into something concrete like an action or an image. As Markram says himself, it would just be a matter of massive correlation, and of using the supercomputer to reverse the process. So he will, theoretically and in all probability, be able to create a simulated action or image from the firing of his virtual neurons, but will this constitute consciousness? I expect not, but others, including Markram, expect it will. He admits himself that, if he doesn’t get consciousness after building a full-scale virtual model of a human brain, it would raise the question: what is missing? Well, I would suggest that what would be missing is life, which is the second fundamental difference that I alluded to in the preceding paragraph but didn’t elaborate on.

I contend that even simple creatures, like insects, have consciousness, so you shouldn’t need a virtual human brain to replicate it. If consciousness equals sentience, and I believe it does, then that covers most of the animal kingdom. 

So Markram seems to think that his virtual brain will not only be conscious but also alive – it’s very difficult to imagine one without the other. And this, paradoxically, brings one back to the ghost in the machine. Despite all the reductionism and scientific ruminations of the last century, the mystery still remains. I’m sure many will argue that there is no mystery: when your neurons stop firing, you die – it’s that simple. Yes, it is, but why are life, consciousness and the firing of neurons so concordant and so co-dependent? And do you really think a virtual neuron model will exhibit both these attributes? Personally, I think not. And to return to cinematic references: does that mean, as with HAL in Arthur C. Clarke’s 2001: A Space Odyssey, that when someone pulls the plug on Markram’s ‘Blue Brain’, it dies?

In a nutshell: nature demonstrates explicitly that consciousness is dependent on life, and there is no evidence that life can be created from software, unless, of course, that software is DNA.
 

Wednesday 22 August 2007

Self

This is merely a starting point, but it seems to be a starting point for many of my philosophical discussions. For each and every one of us there is an inner and outer world - it is the interaction of these 2 aspects of our experience that determines the self.

If one takes language as an example: we all think in a language, and without it, we would find it extremely difficult, probably impossible, to conceptualise, compare, manipulate and develop abstract ideas. This is such an internal and fundamental process that we tend to forget that we all gained our language from our external world. My point is that we underestimate the dependence of the self on the external world.

This also extends to relationships, because without our interaction with others the self would be sterile, unreflective and probably unexamined. So the self is not something we can consider in isolation from our external world, because it has an extension into that world that both receives and transmits information, energy, emotion and our very soul.

What do I mean by soul? My own interpretation is that it is an evolving process, tempered and moulded by life that we can learn to be comfortable with or we can learn to inwardly dislike. The latter experience can create depression, hatred and a perverse outlook on the world. I speak from experience, so this is part of my journey.

For further elaboration on this, refer my post on The Meaning of Life.