Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Sunday, 27 April 2008

Trust

Trust is the foundation stone for the future success of humanity. A pre-requisite for trust is honesty, and honesty must start with honesty to one’s self. A lot has been written about the evolutionary success that arises from our ability to lie, but I would argue that dishonesty to the self is the greatest poison one can imbibe, and is the starting point for many of the ills faced by individuals and societies alike.

No one is immune to lying – we’ve all lied for various reasons: some virtuous, some not. But it is when we lie to ourselves that we paradoxically lay the groundwork for a greater deception to the outside world. Look at the self-deception of some of the most notorious world leaders, who surround themselves with acolytes, so they can convince the wider world of the virtue of their actions.

When I was very young, 6 or 7 (50 years ago now), I learned my first lesson about lying that I can still remember. I was in a school playground when someone close by ended up with a bleeding nose – to this day I’ve no idea what actually happened. Naturally, a teacher was called, and she asked, ‘What happened?’ A girl nearby pointed at me and said, ‘He hit him.’ I was taken to the Head Mistress, who was a formidable woman. In those days, children were caned for less, though I had never been caned up to that point in my schooling. At that age, when I arrived home from school, my father sometimes asked me, ‘Did you get the cane today?’ It was always very important to me to be able to say ‘no’, as I hated to think of the inquisition that would have followed if I’d ever said ‘yes’.

Back to the Head Mistress; I remember our encounter well. The school classrooms were elevated with a verandah, and we sat outside looking down at the courtyard, which was effectively deserted – the playground, where the incident had occurred, was out of sight. Her first question may have been: ‘Why did you hit him?’ or it may have been: ‘Tell me what happened.’ It doesn’t really matter what she actually said, because the important thing was that I realised straightaway that the truth would be perceived as a lie. I had to tell her something that she would believe, so I told her a story that went something like this: ‘We were both running and I ran into him.’ Her response was something like: ‘That’s interesting, I wasn’t told you were running. You’re not supposed to run.’ I knew then, possibly by the tone of her voice, that I had got away with it.

What’s most incredible about this entire episode is that it’s so indelibly burned into my brain. I learned a very valuable lesson at a very early age: it’s easier to tell people the lie they want to hear than the truth they don’t. Politicians all over the world practise this every day, some more successfully than others. For example, if soldiers commit a massacre, the powers-that-be can often deny it with extraordinary success, for the very simple reason that ordinary people would much prefer to ‘know’ that the massacre never happened than to ‘know’ the truth. (Hugh Mackay, in his excellent book, Right & Wrong: How to Decide for Yourself, refers to this as 'telling people the lies they want to hear'.)

A worldwide survey was done sometime in the last decade on 'trust', within various societies, and it revealed a remarkable correlation. (I don’t know who commissioned it; I read about it in New Scientist.) They found that the degree of trust between individuals in business transactions was directly dependent on the degree of trust they had in their government. So trust starts at the top, which is why I opened this essay with the sentence I chose. Trust starts with world leaders, and the more powerful they are, the more important it is.

A very good barometer of the health of a democracy is its media. By this criterion, America is one of the healthiest democracies in the world. We all take pot shots at America, including me, but most of the criticism, and all the ammunition for the criticisms that I level at America, come from the American press themselves. The other emerging power in the 21st Century, China, and the re-emerging power, Russia, have quite a different view on what criticisms they tolerate, both internally and externally. In Russia, journalists have been assassinated, and China is 'the world's leading jailer of journalists' according to CPJ (Committee to Protect Journalists).

Without trust, there can be no negotiations, no security and no creativity for individuals; the world will be forced to conform to a parody of democracy, a façade and ultimately a farce. Whatever the political or economic outcomes of the 21st Century, there will be enormous pressure on humanity worldwide. Trust, on a global scale, will be a prerequisite for a stable and sustainable future. It is only because of the media that debates can take place between groups and with an informed public. It is the role of the media to keep politicians honest, not only to themselves, but also to the rest of us. It is when politicians usurp this role that trust disappears. Everywhere.


Footnote: I wrote this almost immediately after I saw the U2 3D concert in a cinema. I came out of the theatre with the first sentence already in my head. So I had to write it down, and the rest just followed.

Clive James made the point in an interview last year, that democracy is not the norm, it's the exception; in the West, we take democracy for granted.

This issue is complementary to issues I discuss, in a different context, in my post entitled Human Nature (Nov. 07).

Friday, 11 April 2008

The Ghost in the Machine

One of my favourite Sci-Fi movies, amongst a number of favourites, is the Japanese anime, Ghost in the Shell, by Mamoru Oshii. Made in 1995, it’s a cult classic and appears to have influenced a number of sci-fi storytellers, particularly James Cameron (Dark Angel series) and the Wachowski brothers (Matrix trilogy). It also had a more subtle impact on a lesser known storyteller, Paul P. Mealing (Elvene). I liked it because it was not only an action thriller, but it had the occasional philosophical soliloquy by its heroine concerning what it means to be human (she's a cyborg). It had the perfect recipe for sci-fi, according to my own criterion: a large dose of escapism with a pinch of food-for-thought. 

But it also encapsulated two aspects of modern Japanese culture that are contradictory by Western standards. These are the modern Japanese fascination with robots, and their historical religious belief in a transmigratory soul, hence the title, Ghost in the Shell. In Western philosophy, this latter belief is synonymous with dualism, famously formulated by René Descartes, and equally famously critiqued by Gilbert Ryle. Ryle was contemptuous of what he called, ‘the dogma of the ghost in the machine’, arguing that it was a category mistake. He gave the analogy of someone visiting a university and being shown all the buildings: the library, the lecture theatres, the admin building and so on. Then the visitor asks, ‘Yes, that’s all very well, but where is the university?’ According to Ryle, the mind is not an independent entity or organ in the body, but an attribute of the entire organism. I will return to Ryle’s argument later.

In contemporary philosophy, dualism is considered a non sequitur: there is no place for the soul in science, nor ontology apparently. And, in keeping with this philosophical premise, there are a large number of people who believe it is only a matter of time before we create a machine intelligence with far greater capabilities than humans, with no ghost required, if you will excuse the cinematic reference. Now, we already have machines that can do many things far better than we can, but we still hold the upper hand in most common sense situations. The biggest challenge will come from so-called ‘bottom-up’ AI (Artificial Intelligence), which will be self-learning machines, computers, robots, whatever. But, most interesting of all, is a project, currently in progress, called the ‘Blue Brain’, run by Henry Markram in Lausanne, Switzerland. Markram’s stated goal is to eventually create a virtual brain that will be able to simulate everything a human brain can do, including consciousness. He believes this will be achieved in 10 years’ time or less (others say 30). According to him, it’s only a question of grunt: raw processing power. (Reference: feature article in the American science magazine, SEED, 14, 2008)

For many people who work in the field of AI, this is philosophically achievable. I choose my words carefully here, because I believe it is the philosophy that is dictating the goal and not the science. This is an area where the science is still unclear if not unknown. Many people will tell you that consciousness is one of the last frontiers of science. For some, this is one of 3 remaining problems to be solved by science; the other 2 being the origin of the universe and the origin of life. They forget to mention the reconciliation of relativity theory with quantum mechanics, as if it’s destined to be a mere footnote in the encyclopaedia of complete human knowledge.

There are, of course, other philosophical points of view, and two well-known ones are expressed by John Searle and Roger Penrose respectively. John Searle is most famously known for his thought experiment of the ‘Chinese Room’, in which someone sits in an enclosed room receiving questions, in Chinese, through an 'in box', and, by following specific instructions (in English, in Searle's case), provides answers in Chinese through an 'out box'. The point is that the person behaves just like a processor and has no knowledge of Chinese at all. In fact, this is a perfect description of a ‘Turing machine’ (see my post, Is mathematics evidence of a transcendental realm?), only instead of a tape going through a machine you have a person performing the instructions in lieu of a machine.
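The rule-following at the heart of the Chinese Room can be sketched in a few lines of code. This is a toy illustration of my own, not anything from Searle or Turing: the ‘machine’ is nothing but a table of rules, and the runner follows them blindly, with no understanding of what the symbols mean.

```python
# A minimal Turing machine interpreter. Like the person in the Chinese Room,
# the runner mechanically follows a rule table with no grasp of the symbols.

def run_turing_machine(rules, tape, state="start", head=0, blank="_"):
    """rules: {(state, symbol): (new_state, write, move)}, move in {-1, 0, +1}."""
    tape = dict(enumerate(tape))
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# An illustrative rule table: flip every bit on the tape, halt at the blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(flip_bits, "1011"))  # prints 0100
```

Swap in a different rule table and the same blind runner computes something entirely different, which is precisely the point: all the ‘knowledge’ lives in the instructions, none in the executor.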

The Chinese Room actually had a real world counterpart: not many people know that, before we had computers, small armies of people (usually women) would be employed to perform specific but numerous computations for a particular project, with no knowledge of how their specific input fitted into the overall execution of said project. Such a group was employed at Bletchley Park during WWII, where Turing worked on the decoding of Enigma transmissions. These people were called ‘computers’ and Turing was instrumental in streamlining their analysis. However, according to Turing’s biographer, Andrew Hodges, Turing did not develop an electronic computer at Bletchley Park, as some people believe, and he did not invent the Colossus, or Colossi, that were used to break another German code, the Lorenz, ‘...but [Turing] had input into their purpose, and saw at first-hand their triumph.’ (Hodges, 1997).

Penrose has written 3 books that I’m aware of addressing the question of AI (The Emperor’s New Mind, Shadows of the Mind and The Large, the Small and the Human Mind), and Turing’s work is always central to his thesis. In the last book listed, Penrose invites others to expound on alternative views: Abner Shimony, Nancy Cartwright and Stephen Hawking. Of the three books referred to, Shadows of the Mind is the most intellectually challenging, because he is so determined not to be misunderstood. I have to say that Penrose always comes across as an intellect of great ability, but also great humility – he rarely, if ever, shows signs of hubris. For this reason alone, I always consider his arguments with great respect, even if I disagree with his thesis. To quote the I Ching: ‘he possesses as if he possessed nothing.’

Penrose’s predominant thesis, based on Gödel’s and Turing’s proofs (which I discuss in more detail in my post, Is mathematics evidence of a transcendental realm?), is that the human mind, or any mind for that matter, cannot possibly run on algorithms, which are the basis of all Turing machines. Penrose’s conclusion, then, is that the human mind is not a Turing machine. More importantly, in anticipation of a further development of this argument, algorithms are synonymous with software, and the original conceptual Turing machine, which Turing formulated in his ‘halting problem’ proof, is really about software. The Universal Turing machine is software that can duplicate all other Turing machines, given the correct instructions, which is what software is.

To return to Ryle, he has a pertinent point in regard to his analogy, that I referred to earlier, of the university and the mind; it’s to do with a generic phenomenon which is observed throughout many levels of nature, which we call ‘emergence’. The mind is an emergent property, or attribute, that arises from the activity of a large number of neurons (trillions) in the same way that the human body is an emergent entity that arises from a similarly large number of cells. Some people even argue that classical physics is an emergent property that arises from quantum mechanics (see my post on The Laws of Nature). In fact, Penrose contends that these 2 mysteries may be related (he doesn't use the term emergent), and he proposes a view that the mind is the result of a quantum phenomenon in our neurons. I won’t relate his argument here, mainly because I don’t have Penrose's intellectual nous, but he expounds upon it in both of his books: Shadows of the Mind and The Large, the Small and the Human Mind; the second one being far more accessible than the first. 

The reason that Markram, and many others in the AI field, believe they can create an artificial consciousness, is because, if it is an emergent property of neurons, then all they have to do is create artificial neurons and consciousness will follow. This is what Markram is doing, only his neurons are really virtual neurons. Markram has ‘mapped’ the neurons from a thin slice of a rat’s brain into a supercomputer, and when he ‘stimulates’ his virtual neurons with an electrical impulse it creates a pattern of ‘firing’ activity just like we would expect to find in a real brain. On the face of it, Markram seems well on his way to achieving his goal. 
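To give a feel for what ‘stimulating virtual neurons’ means in practice, here is a toy sketch of my own: a single leaky integrate-and-fire neuron, vastly simpler than Blue Brain’s biologically detailed models, but it shows the same basic idea of simulated neurons producing a firing pattern in response to an input. All the numbers (threshold, leak, input current) are illustrative only.

```python
# A toy "virtual neuron": it leaks charge each time step, integrates its
# input, and fires (then resets) whenever its voltage crosses a threshold.

def simulate_neuron(input_current, steps=50, threshold=1.0, leak=0.9):
    """Return the time steps at which the model neuron fires."""
    voltage, spikes = 0.0, []
    for t in range(steps):
        voltage = voltage * leak + input_current  # leak, then integrate input
        if voltage >= threshold:                  # fire and reset
            spikes.append(t)
            voltage = 0.0
    return spikes

print(simulate_neuron(0.3))   # a regular firing pattern emerges
print(simulate_neuron(0.05))  # too little input: the voltage leaks away, no spikes
```

Even this single equation in a loop yields recognisable ‘firing activity’; the open question, of course, is whether scaling this up to billions of detailed virtual neurons yields anything more than a very elaborate firing pattern.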

But there are two significant differences between Markram’s model (if I understand it correctly) and the real thing. All attempts at AI, including Markram’s, require software, yet the human brain, or any other brain for that matter, appears to have no software at all. Some people might argue that language is our software, and, from a strictly metaphorical perspective, that is correct. But we don’t seem to have any ‘operational’ software, and, if we do, the brain must somehow create it itself. So, if we have a ‘software’, it’s self-generated from the neurons themselves. Perhaps this is what Markram expects to find in his virtual neuron brain, but his virtual neuron brain already is software (if I interpret the description given in SEED correctly).

I tend to agree with some of his critics, like Thomas Nagel (quoted in SEED), that Markram will end up with a very accurate model of a brain’s neurons, but he still won’t have a mind. ‘Blue Brain’, from what I can gather, is effectively a software model of the neurons of a small portion of a rat’s brain running on 4 supercomputers comprising a total of 8,000 IBM microchips. And even if he can simulate the firing pattern of his neurons to duplicate the rat’s, I would suspect it would take further software to turn that simulation into something concrete like an action or an image. As Markram says himself, it would just be a matter of massive correlation, and using the supercomputer to reverse the process. So he will, theoretically, and in all probability, be able to create a simulated action or image from the firing of his virtual neurons, but will this constitute consciousness? I expect not, but others, including Markram, expect it will. He admits himself that, if he doesn’t get consciousness after building a full scale virtual model of a human brain, it would beg the question: what is missing? Well, I would suggest that what would be missing is life, which is the second fundamental difference that I alluded to in the preceding paragraph, but didn’t elaborate on.

I contend that even simple creatures, like insects, have consciousness, so you shouldn’t need a virtual human brain to replicate it. If consciousness equals sentience, and I believe it does, then that covers most of the animal kingdom. 

So Markram seems to think that his virtual brain will not only be conscious, but also alive – it’s very difficult to imagine one without the other. And this, paradoxically, brings one back to the ghost in the machine. Despite all the reductionism and scientific ruminations of the last century, the mystery still remains. I’m sure many will argue that there is no mystery: when your neurons stop firing, you die – it’s that simple. Yes, it is, but why are life, consciousness and the firing of neurons so concordant and so co-dependent? And do you really think a virtual neuron model will also exhibit both these attributes? Personally, I think not. And to return to cinematic references: does that mean, as with HAL, in Arthur C. Clarke’s 2001: A Space Odyssey, that when someone pulls the plug on Markram’s 'Blue Brain', it dies?

In a nutshell: nature demonstrates explicitly that consciousness is dependent on life, and there is no evidence that life can be created from software, unless, of course, that software is DNA.