Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Sunday, 27 April 2008

Trust

Trust is the foundation stone for the future success of humanity. A prerequisite for trust is honesty, and honesty must start with honesty to oneself. A lot has been written about the evolutionary success that arises from our ability to lie, but I would argue that dishonesty to the self is the greatest poison one can imbibe, and is the starting point for many of the ills faced by individuals and societies alike.

No one is immune to lying – we’ve all lied for various reasons: some virtuous, some not. But it is when we lie to ourselves that we paradoxically lay the groundwork for a greater deception to the outside world. Look at the self-deception of some of the most notorious world leaders, who surround themselves with acolytes, so they can convince the wider world of the virtue of their actions.

When I was very young, 6 or 7 (50 years ago now), I learned my first lesson about lying that I can still remember. I was in a school playground when someone close by ended up with a bleeding nose – to this day I’ve no idea what actually happened. Naturally, a teacher was called, and she asked, ‘What happened?’ A girl nearby pointed at me and said, ‘He hit him.’ I was taken to the Head Mistress, who was a formidable woman. In those days, children were caned for less, though I had never been caned up to that point in my schooling. At that age, when I arrived home from school, my father sometimes asked me, ‘Did you get the cane today?’ It was always very important to me to be able to say ‘no’, as I hated to think of the inquisition that would have followed if I’d ever said ‘yes’.

Back to the Head Mistress; I remember our encounter well. The school classrooms were elevated with a verandah, and we sat outside looking down at the courtyard, which was effectively deserted – the playground, where the incident had occurred, was out of sight. Her first question may have been: ‘Why did you hit him?’ or it may have been: ‘Tell me what happened.’ It doesn’t really matter what she actually said, because the important thing was that I realised straightaway that the truth would be perceived as a lie. I had to tell her something that she would believe, so I told her a story that went something like this: ‘We were both running and I ran into him.’ Her response was something like: ‘That’s interesting, I wasn’t told you were running. You’re not supposed to run.’ I knew then, possibly by the tone of her voice, that I had got away with it.

What’s most incredible about this entire episode is that it’s so indelibly burned into my brain. I learned a very valuable lesson at a very early age: it’s easier to tell a lie that people want to hear than a truth they don’t. Politicians, all over the world, practice this every day, some more successfully than others. For example, if soldiers commit a massacre, the powers-that-be can often deny it with extraordinary success, for the very simple reason that ordinary people would much prefer to ‘know’ that the massacre never happened than to ‘know’ the truth. (Hugh Mackay, in his excellent book, Right and Wrong: How to Decide for Yourself, refers to this as 'telling people the lies they want to hear'.)

A worldwide survey was done sometime in the last decade on 'trust', within various societies, and it revealed a remarkable correlation. (I don’t know who commissioned it; I read about it in New Scientist.) They found that the degree of trust between individuals in business transactions was directly dependent on the degree of trust they had in their government. So trust starts at the top, which is why I opened this essay with the sentence I chose. Trust starts with world leaders, and the more powerful they are, the more important it is.

A very good barometer of the health of a democracy is its media. By this criterion, America is one of the healthiest democracies in the world. We all take pot shots at America, including me, but most of the criticism, and all the ammunition for the criticisms that I level at America, come from the American press themselves. The other emerging power in the 21st Century, China, and the re-emerging power, Russia, have quite a different view on what criticisms they tolerate, both internally and externally. In Russia, journalists have been assassinated, and China is 'the world's leading jailer of journalists' according to CPJ (Committee to Protect Journalists).

Without trust, there can be no negotiations, no security and no creativity for individuals; the world will be forced to conform to a parody of democracy, a façade and ultimately a farce. Whatever the political or economic outcomes of the 21st Century, there will be enormous pressure on humanity worldwide. Trust, on a global scale, will be a prerequisite for a stable and sustainable future. It is only because of the media that debates can take place between groups and with an informed public. It is the role of the media to keep politicians honest, not only to themselves, but also to the rest of us. It is when politicians usurp this role that trust disappears. Everywhere.


Footnote: I wrote this almost immediately after I saw the U2 3D concert in a cinema. I came out of the theatre with the first sentence already in my head. So I had to write it down, and the rest just followed.

Clive James made the point in an interview last year that democracy is not the norm, it's the exception; in the West, we take democracy for granted.

This issue is complementary to issues I discuss, in a different context, in my post entitled Human Nature (Nov. 07).

Friday, 11 April 2008

The Ghost in the Machine

One of my favourite Sci-Fi movies, amongst a number of favourites, is the Japanese anime, Ghost in the Shell, by Mamoru Oshii. Made in 1995, it’s a cult classic and appears to have influenced a number of sci-fi storytellers, particularly James Cameron (Dark Angel series) and the Wachowski brothers (Matrix trilogy). It also had a more subtle impact on a lesser known storyteller, Paul P. Mealing (Elvene). I liked it because it was not only an action thriller, but it had the occasional philosophical soliloquy by its heroine concerning what it means to be human (she's a cyborg). It had the perfect recipe for sci-fi, according to my own criterion: a large dose of escapism with a pinch of food-for-thought. 

But it also encapsulated two aspects of modern Japanese culture that are contradictory by Western standards. These are the modern Japanese fascination with robots, and their historical religious belief in a transmigratory soul, hence the title, Ghost in the Shell. In Western philosophy, this latter belief is synonymous with dualism, famously formulated by Rene Descartes, and equally famously critiqued by Gilbert Ryle. Ryle was contemptuous of what he called, ‘the dogma of the ghost in the machine’, arguing that it was a category mistake. He gave the analogy of someone visiting a university and being shown all the buildings: the library, the lecture theatres, the admin building and so on. Then the visitor asks, ‘Yes, that’s all very well, but where is the university?’ According to Ryle, the mind is not an independent entity or organ in the body, but an attribute of the entire organism. I will return to Ryle’s argument later. 

In contemporary philosophy, dualism is considered a non sequitur: there is no place for the soul in science, nor ontology apparently. And, in keeping with this philosophical premise, there are a large number of people who believe it is only a matter of time before we create a machine intelligence with far greater capabilities than humans, with no ghost required, if you will excuse the cinematic reference. Now, we already have machines that can do many things far better than we can, but we still hold the upper hand in most common sense situations. The biggest challenge will come from so-called ‘bottom-up’ AI (Artificial Intelligence), which will be self-learning machines, computers, robots, whatever. But most interesting of all is a project, currently in progress, called the ‘Blue Brain’, run by Henry Markram in Lausanne, Switzerland. Markram’s stated goal is to eventually create a virtual brain that will be able to simulate everything a human brain can do, including consciousness. He believes this will be achieved in 10 years’ time or less (others say 30). According to him, it’s only a question of grunt: raw processing power. (Reference: feature article in the American science magazine, SEED, 14, 2008)

For many people who work in the field of AI, this is philosophically achievable. I choose my words carefully here, because I believe it is the philosophy that is dictating the goal and not the science. This is an area where the science is still unclear, if not unknown. Many people will tell you that consciousness is one of the last frontiers of science. For some, this is one of 3 remaining problems to be solved by science; the other 2 being the origin of the universe and the origin of life. They forget to mention the resolution of relativity theory with quantum mechanics, as if it’s destined to be a mere footnote in the encyclopaedia of complete human knowledge.

There are, of course, other philosophical points of view, and two well known ones are expressed by John Searle and Roger Penrose respectively. John Searle is most famously known for his thought experiment of the ‘Chinese Room’, in which someone sits in an enclosed room receiving questions, in Chinese, through an 'in box' and, by following specific instructions (in English, in Searle's case), provides answers in Chinese that they issue through an 'out box'. The point is that the person behaves just like a processor and has no knowledge of Chinese at all. In fact, this is the perfect description of a ‘Turing machine’ (see my post, Is mathematics evidence of a transcendental realm?), only instead of tape going through a machine you have a person performing the instructions in lieu of a machine.
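
As a toy illustration of that point – my own sketch, not Searle’s, with an invented ‘rule book’ in which romanised phrases stand in for Chinese text – a program can return apparently sensible answers by pure symbol-matching, with no understanding anywhere in the loop:

```python
# A toy version of the Chinese Room: answers are produced by blind rule-following,
# with no understanding of Chinese anywhere in the process. The rule book below is
# entirely invented for illustration; romanised strings stand in for Chinese text.

RULE_BOOK = {
    "ni hao ma?": "wo hen hao.",                # "How are you?" -> "I am very well."
    "ni jiao shenme mingzi?": "wo jiao John.",  # "What is your name?" -> "My name is John."
}

def chinese_room(question: str) -> str:
    """Return whatever the rule book dictates; the 'person' inside never
    understands the symbols being matched or produced."""
    return RULE_BOOK.get(question, "wo ting bu dong.")  # default: "I don't understand."

print(chinese_room("ni hao ma?"))  # looks like comprehension, but it is only lookup
```

Searle’s instructions are, of course, far richer than a lookup table, but the moral is the same: following the rules is not the same as understanding the language.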

The Chinese Room actually had a real-world counterpart: not many people know that, before we had computers, small armies of people (usually women) would be employed to perform specific but numerous computations for a particular project, with no knowledge of how their specific input fitted into the overall execution of said project. Such a group was employed at Bletchley Park, where Turing worked, during WWII on the decoding of Enigma transmissions. These people were called ‘computers’ and Turing was instrumental in streamlining their analysis. However, according to Turing’s biographer, Andrew Hodges, Turing did not develop an electronic computer at Bletchley Park, as some people believe, and he did not invent the Colossus, or Colossi, that were used to break another German code, the Lorenz, ‘...but [Turing] had input into their purpose, and saw at first-hand their triumph.’ (Hodges, 1997).

Penrose has written 3 books, that I’m aware of, addressing the question of AI (The Emperor’s New Mind, Shadows of the Mind and The Large, the Small and the Human Mind) and Turing’s work is always central to his thesis. In the last book listed, Penrose invites others to expound on alternative views: Abner Shimony, Nancy Cartwright and Stephen Hawking. Of the three books referred to, Shadows of the Mind is the most intellectually challenging, because he is so determined not to be misunderstood. I have to say that Penrose always comes across as an intellect of great ability, but also great humility – he rarely, if ever, shows signs of hubris. For this reason alone, I always consider his arguments with great respect, even if I disagree with his thesis. To quote the I Ching: ‘he possesses as if he possessed nothing.’ 

Penrose’s predominant thesis, based on Godel’s and Turing’s proofs (which I discuss in more detail in my post, Is mathematics evidence of a transcendental realm?), is that the human mind, or any mind for that matter, cannot possibly run on algorithms, which are the basis of all Turing machines. So Penrose’s conclusion is that the human mind is not a Turing machine. More importantly, in anticipation of a further development of this argument, algorithms are synonymous with software, and the original conceptual Turing machine, which Turing formulated in his ‘halting problem proof’, is really about software. The Universal Turing machine is software that can duplicate all other Turing machines, given the correct instructions, which is what software is.
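
To make that last point concrete – a minimal sketch of my own, not anything taken from Penrose or Turing – a Turing machine is nothing over and above its instruction table. In the example below, the ‘machine’ that adds one to a number written in unary is literally just a small table interpreted by a generic loop:

```python
# A minimal Turing-machine sketch: the 'machine' is nothing but its instruction
# table, i.e. software. This example table appends a '1' to a unary number and halts.

def run(table, tape, state="start", head=0, max_steps=1000):
    """A generic interpreter: give it any instruction table and a tape,
    and it becomes that machine."""
    cells = dict(enumerate(tape))                 # sparse tape; blank cells read as '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Instruction table: (state, symbol read) -> (symbol to write, head move, next state)
INCREMENT = {
    ("start", "1"): ("1", "R", "start"),          # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),           # write one more 1, then halt
}

print(run(INCREMENT, "111"))  # prints '1111': three becomes four in unary
```

A Universal Turing machine is then just a program like run() that takes some other machine’s table as part of its input – which is exactly the sense in which it is software all the way down.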

To return to Ryle, he has a pertinent point in regard to his analogy, which I referred to earlier, of the university and the mind; it’s to do with a generic phenomenon observed throughout many levels of nature, which we call ‘emergence’. The mind is an emergent property, or attribute, that arises from the activity of a large number of neurons (tens of billions, with trillions of connections) in the same way that the human body is an emergent entity that arises from a similarly large number of cells. Some people even argue that classical physics is an emergent property that arises from quantum mechanics (see my post on The Laws of Nature). In fact, Penrose contends that these 2 mysteries may be related (he doesn't use the term emergent), and he proposes a view that the mind is the result of a quantum phenomenon in our neurons. I won’t relate his argument here, mainly because I don’t have Penrose's intellectual nous, but he expounds upon it in two of his books: Shadows of the Mind and The Large, the Small and the Human Mind; the second one being far more accessible than the first.

The reason that Markram, and many others in the AI field, believe they can create an artificial consciousness is that, if consciousness is an emergent property of neurons, then all they have to do is create artificial neurons and consciousness will follow. This is what Markram is doing, only his neurons are really virtual neurons. Markram has ‘mapped’ the neurons from a thin slice of a rat’s brain into a supercomputer, and when he ‘stimulates’ his virtual neurons with an electrical impulse it creates a pattern of ‘firing’ activity just like we would expect to find in a real brain. On the face of it, Markram seems well on his way to achieving his goal.
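
To give a feel for what ‘stimulating a virtual neuron and observing its firing’ means in software, here is my own deliberately simple sketch using the textbook leaky integrate-and-fire model – not Markram’s far more detailed biophysical simulations – with purely illustrative parameters:

```python
# A 'virtual neuron' in a few lines: the textbook leaky integrate-and-fire model.
# The parameters below are illustrative only; Blue Brain's neuron models are far
# more detailed biophysical simulations.

def simulate(current_nA=2.0, dt=0.1, t_max=100.0,
             tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Inject a constant current and return the times (ms) at which the model 'fires'."""
    v = v_rest
    spike_times = []
    for step in range(int(t_max / dt)):
        # membrane potential decays toward rest while being driven by the input current
        dv = (-(v - v_rest) + r_m * current_nA) / tau
        v += dv * dt
        if v >= v_thresh:                  # threshold crossed: record a 'spike'
            spike_times.append(round(step * dt, 1))
            v = v_reset                    # reset after firing
    return spike_times

print(simulate())  # a regular train of spike times for a constant stimulus
```

The point of the sketch is only that ‘firing’ here is numbers crossing a threshold inside a loop; whether reproducing such patterns, even at enormous scale, amounts to consciousness is precisely the question at issue.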

But there are two significant differences between Markram’s model (if I understand it correctly) and the real thing. All attempts at AI, including Markram’s, require software, yet the human brain, or any other brain for that matter, appears to have no software at all. Some people might argue that language is our software, and, from a strictly metaphorical perspective, that is correct. But we don’t seem to have any ‘operational’ software, and, if we do, the brain must somehow create it itself. So, if we have a ‘software’, it must be self-generated by the neurons themselves. Perhaps this is what Markram expects to find in his virtual neuron brain, but his virtual neuron brain already is software (if I interpret the description given in SEED correctly).

I tend to agree with some of his critics, like Thomas Nagel (quoted in SEED), that Markram will end up with a very accurate model of a brain’s neurons, but he still won’t have a mind. ‘Blue Brain’, from what I can gather, is effectively a software model of the neurons of a small portion of a rat’s brain running on 4 supercomputers comprising a total of 8,000 IBM microchips. And even if he can simulate the firing pattern of his neurons to duplicate the rat’s, I would suspect it would take further software to turn that simulation into something concrete like an action or an image. As Markram says himself, it would just be a matter of massive correlation, and using the supercomputer to reverse the process. So he will, theoretically, and in all probability, be able to create a simulated action or image from the firing of his virtual neurons, but will this constitute consciousness? I expect not, but others, including Markram, expect it will. He admits himself that, if he doesn’t get consciousness after building a full-scale virtual model of a human brain, it would beg the question: what is missing? Well, I would suggest that what would be missing is life, which is the second fundamental difference that I alluded to in the preceding paragraph, but didn’t elaborate on.

I contend that even simple creatures, like insects, have consciousness, so you shouldn’t need a virtual human brain to replicate it. If consciousness equals sentience, and I believe it does, then that covers most of the animal kingdom. 

So Markram seems to think that his virtual brain will not only be conscious, but also alive – it’s very difficult to imagine one without the other. And this, paradoxically, brings one back to the ghost in the machine. Despite all the reductionism and scientific ruminations of the last century, the mystery still remains. I’m sure many will argue that there is no mystery: when your neurons stop firing, you die – it’s that simple. Yes, it is, but why are life, consciousness and the firing of neurons so concordant and so co-dependent? And do you really think a virtual neuron model will also exhibit both these attributes? Personally, I think not. And to return to cinematic references: does that mean, as with HAL in Arthur C. Clarke’s 2001: A Space Odyssey, that when someone pulls the plug on Markram’s 'Blue Brain', it dies?

In a nutshell: nature demonstrates explicitly that consciousness is dependent on life, and there is no evidence that life can be created from software, unless, of course, that software is DNA.
 

Thursday, 27 March 2008

The Laws of Nature

This is another posting arising from an intellectually stimulating read: Michael Frayn’s The Human Touch, subtitled, Our Part in The Creation of the Universe. The short essay below is in response to just one chapter, The Laws of Nature. I have to say at the outset that Frayn is far more widely read than I am, and his discussion includes commentary and ruminations by various scientists and philosophers: Popper, Kuhn, Einstein, Planck, Bohr, Born, von Neumann, Feynman, Gell-Mann, Deutsch, Taylor, Prigogine and Cartwright, amongst others. A number of these I have not read at all, but I find it strange that he does not include Penrose (except for one passing reference in his notes) or Davies, who have written extensively on this subject, and have well known philosophical views.

I haven’t finished reading Frayn’s text (though I’ve read his extensive notes on the subject) so I may have more to say later, and the following was originally written in the form of a ‘letter to the author’, which I never sent.

The heart of Frayn’s dissertation on the ‘laws of nature’ seems to be the interaction between the human intellect and the natural world. There are 2 antithetical views, both of which involve mathematics, because, in physics at least, the ‘laws of nature’ can only be expressed in mathematics. We may give descriptions in plain language, creating man-made categories in our attempts, but without mathematics we would not be able to validly call them ‘laws’, whether they be fiction or otherwise.

The first of these antithetical views is that we have invented mathematical methods, which have evolved into, sometimes complex, sometimes simple, mathematical models that we can apply to numerous phenomena we observe, and, in many cases, find a near-perfect fit. The second view is that the laws already exist in nature, and mathematics is the only means by which they can be revealed. I tend to subscribe to the second view. I disagree philosophically with Einstein, who contended, quite reasonably, that 'the series of integers is obviously an invention of the human mind', but agree with him that there is an underlying order in the machinations of the universe. (In regard to Einstein's contention, I discuss this argument in detail in 2 other postings; but, even if we invent the numbers, the relationships between them we do not.)

We humans puzzle over facts like the planets maintaining their orbits for millions of years, or the self-organising properties of galaxies, or of life for that matter, or the predictability of the effects of light shone through slits. We look for patterns, so we are told, and therefore we project patterns onto the things we observe. But science has demonstrated that there are not only patterns in nature, but relationships between events that can be described in mathematics to unreasonable degrees of accuracy. My own view is that the mathematical relationships found in nature are not projected, it’s just that the deeper we look the more unfamiliar the relationships become.

It seems to me the laws, for want of a better word, exist in layers, so that, at different scales different ones dominate. It follows from this that we haven’t discovered them all, and possibly we never will, but it doesn’t mean that the ones we have discovered are therefore false or meaningless. I have had correspondence with philosophers of science who believe that one day we will find the one governing law or set of laws that will make all current laws obsolete, which means the current ones are false and meaningless, but history would suggest that this goal is as mythical as the original Holy Grail.

Everyone posits Einstein’s theories of relativity making Newton’s laws obsolete as the prime example of this process, yet the same set of ‘everyone’ uses Newton’s equations over Einstein’s for most purposes, because they are simpler and just as accurate for their requirements. Einstein made the point (according to Frayn’s reference, Abraham Pais) that Newton’s mechanics were based on ‘fictional principles’, yet gave the same results as his own theories for many phenomena. (Frayn quotes Pais in his belief that Einstein thought all theories were fictions.) I believe the main ‘fictional principle’ inherent in Newton's theory (for gravity at least) arises from the fact that there is no force experienced in gravity if you are in free fall; there is only a force when you are stopped from falling. This is arguably the most significant conceptual difference between Newton's and Einstein's theories, and appears to be one of the key motivations for Einstein seeking a different mathematical interpretation for gravity.

Einstein’s theories are an example of how the laws of nature are not what they appear to be at the scale we experience them, specifically in regard to gravity, space, time and mass. His equations supersede Newton’s, in all respects, because they more accurately describe the universe on cosmological and atomic scales, but they reduce to Newton’s equations when certain parameters become negligible.
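
A standard textbook illustration of that reduction (my example, not one of Frayn’s): Einstein’s expression for the total energy of a moving mass, expanded for speeds small compared with light, collapses into the constant rest energy plus Newton’s familiar kinetic energy.

```latex
E \;=\; \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}}
  \;=\; m c^{2}\left(1 + \tfrac{1}{2}\tfrac{v^{2}}{c^{2}}
        + \tfrac{3}{8}\tfrac{v^{4}}{c^{4}} + \cdots\right)
  \;\approx\; m c^{2} + \tfrac{1}{2} m v^{2}
  \qquad \text{for } v \ll c .
```

The correction terms only become significant as v approaches the speed of light, which is why Newton’s equations remain perfectly serviceable for most purposes.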

On the other hand, quantum mechanics appears to be another set of laws altogether that lie behind the classical laws (including relativity) that only become apparent at atomic and sub-atomic scales. I would suggest, however, that this dissociation between the quantum and classical worlds is a result of a gap in our knowledge, as contended by Roger Penrose, rather than evidence that the ‘laws of nature’ are all fictions. Assuming that this gap can be resolved in the future, new laws expressed in new or different mathematical relationships would be revealed. It’s not axiomatic that these future discoveries will make our current knowledge obsolete or irrelevant, but, hopefully, less mystifying.

I would make the same prediction concerning our knowledge of evolution. In the same way that Darwin proposed a theory of evolution based on natural selection, without any knowledge of genes or DNA, future generations will make discoveries revealing secrets of DNA development which may change our view on evolution. I’m not talking about ‘Intelligent Design’, but discoveries that will prove ID a non sequitur; as ID is currently a symptom of our ignorance, not an aid to future discoveries as claimed by its proponents. (See my Nov.07 post: Is evolution fact? Is creationism myth?)

There are deep, fundamental, inexplicable principles involved when one examines natural phenomena. Not-so-obvious examples are the principle of least time in refraction (referenced in Frayn’s text; intuited by the 17th Century mathematical genius, Pierre de Fermat) and the principle of maximum relativistic time in gravity, expounded brilliantly by Richard Feynman in Six Not-So-Easy Pieces. These principles, I would contend, are not inventions, but discoveries, and they reveal an underlying natural schema that we could never have predicted through speculation alone. In fact, were it not for our powers of intellect and observation, in combination with a predilection for mathematics, we would not even know they exist.
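
To see how such a principle does real work, here is a standard derivation (mine, not taken from Frayn or Feynman): suppose light travels from a point at height a above a boundary to a point at depth b below it, crossing the boundary at horizontal distance x along a baseline of length d, with speeds v1 and v2 in the two media. Minimising the travel time gives

```latex
T(x) \;=\; \frac{\sqrt{a^{2} + x^{2}}}{v_{1}} \;+\; \frac{\sqrt{b^{2} + (d - x)^{2}}}{v_{2}},
\qquad
\frac{dT}{dx} = 0
\;\Longrightarrow\;
\frac{\sin\theta_{1}}{v_{1}} \;=\; \frac{\sin\theta_{2}}{v_{2}},
```

which is Snell’s law of refraction: the bending of light at a boundary falls straight out of the demand that the total travel time be a minimum.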

Footnote: James Gleick in his biography of Feynman, GENIUS, gives the impression that these 2 phenomena could be different manifestations of the same underlying 'principle of least action' that Feynman even employed in his examination of quantum mechanics. Anyone who is familiar with both these phenomena will appreciate the connection - it's like the light or the particles choose their own path - as Gleick expounds quite eruditely, but without the equations.


Addendum 1: Since I published this post, I've read Feynman's lecture series that he gave in New Zealand in 1983 published under the title, QED, The Strange Theory of Light and Matter. In his first lecture, he gives a brilliant exposition (in plain English) of how the fact that light reflected by a mirror 'follows the least time path' can be explained by quantum mechanics. I need to add the caveat that no one understands quantum mechanics, a point that Feynman is at pains to make himself, right at the start of his lectures.

Addendum 2: I wrote a later post on 'least action' which is more erudite.

Friday, 14 March 2008

Imagination

I first came across the term ‘intentionality’ as a philosophical term when I was reading John Searle’s Mind, and I had difficulties with it until I substituted the term imagination. I had forgotten about this until I read another account in The Oxford Companion to the Mind (edited by Richard L. Gregory, 1987), thinking I was going to read about intentionality as a mental purpose, as it would be used in ordinary language. Once again, forgetting all about my experience with John Searle, I was about halfway through the discourse when I found myself substituting the term imagination, and then I realised: I had taken this mental journey before.

This is an example of how I believe we integrate new knowledge into existing knowledge. When we come across a new experience or phenomenon, or new information, we axiomatically look for something we are already familiar with that we can analogise it with. It’s also why metaphor is such a favoured form of description and is so readily adopted and understood without extraneous explanations. So, in the absence of anything better, I substitute imagination for ‘intentionality’ but the more I read the more I conclude that they are the same thing. According to The Oxford Companion to the Mind, intentionality is only evident in mental states and is about 'aboutness’. When I read Searle’s account and the examples he gave of someone being able to conceptualise a real event that had occurred in history or in another place or another time, or an event that had never occurred at all, then that’s imagination. Also I argue that this is not unique to humans. The fact that many species can plan and co-operate, especially when hunting, suggests that they can ‘imagine’ the outcome they are trying to achieve.

I once had a brief correspondence with Peter Watson, author of A Terrible Beauty (an extraordinarily erudite and comprehensive book of the ‘ideas and minds that shaped the 20th Century’), who contended that words like ‘imagine’ and ‘introspection’ have outlived their usefulness, and that they no longer fit in with our comprehension of our mental states, and, possibly, are even misleading. I had serious problems with this dismissal of our inner world, as I saw it. Also he talked about ‘imagination’ as if he really meant ‘creativity', which is an essential but limited aspect of how we imagine (more on that below). When I quizzed him on this, he explained that his real complaint was that he found words like ‘imagination’ vague; according to Watson, 'imagination' was even more vague than ‘mind’. (I must say in passing that I have the utmost respect for Peter Watson, even though we’ve never met, and he responded good-naturedly to all my criticisms.)

But I think the reason that people are uncomfortable with terms like these – imagination, introspection, mind – is that they defy objectivity by their very nature. You cannot talk with any validity about anyone's imagination, introspection or mind, except your own. Our inner world is subjectivity incarnate, yet, because we all have one, we can talk about it in a common language.

In my view, ordinary people know what we mean by ‘imagination’ and ‘introspection’ even if no one can explain how it happens, and they remain essential components of our psychological lives. In my posting, The Meaning of Life, I allude to Watson’s philosophical viewpoint by referring to an extreme position that considers our internal world to be so dependent on the external world, that it makes the inner world we all experience irrelevant (some people do take this view). In fact, Watson did make the point that our inner world is completely dependent on the external world – no one can really claim that anything is created independently of the outer world. And he said that this was his salient point: no one ever came up with a valid theory or idea by introspection alone, without considering external factors. I would agree with him on this, but it doesn’t mean that imagination and introspection have no role to play.

Also he has a point, regarding the dependence of our inner world on the outer world, when one considers that we all think in a language and we all gain our language from the external world (I make this point in my posting on Self). Language is one of the means, arguably the most important, but not the only one, that allows an interaction between the inner and outer world, and it goes both ways – we are not passive participants in the world. And yes, our imagination is fueled by external events, yet, without imagination there would be no art, in any form, and, in particular, no stories; not only for the creator, but also for the recipient.

Being a storyteller myself, this is something I can talk about with some experience. I find it interesting that a writer can compose a story that so engrosses the reader that he or she actually forgets they’re reading. How does one achieve this? It’s simple in principle, but very difficult in execution: one allows the reader to create an imaginary world that he or she inhabits so successfully, they become emotionally involved as if it was real, or as if they were in a dream. It's called suspension of disbelief - essential to the success of any story. And I think dreaming is the link, because writing a story is not unlike having a dream, only you consciously interfere with it, and that’s what ‘creating’ is really all about. I could elaborate on this, but this is not the place.

While it seems I’m getting off the track, I made a point in another posting, The Universe’s Interpreters, that the reason films, video and computer games have not made novels extinct (weakened yes, but not yet endangered) is because we can so readily and effortlessly create pictures in our minds. I contend, though I have no scientific evidence, that if we didn’t think in a language, we would think in images. The basis for this contention is that we dream in images and metaphor, and I believe that is our primal language. (Freudian yes, but without referencing Freud.) So much of imagination involves imagery – a point that Searle somehow misses when he discusses intentionality, yet it is obvious. (It occurred to me that Searle had the same aversion to the term that Watson revealed.) Searle does make the point, however, that intentionality can involve desires and beliefs, which, of themselves, can be manifested in sensory form (he gives the examples of hunger and thirst).

It’s only humans who create art, and it is often proposed that the emergence of art is the first indication of our evolutionary separation from other homo-related species. But imagination, along with the other conscious attributes we have, is not unique to humans; what is unique is our ability to exploit these attributes and project them into the external world.

It’s not for nothing that Searle claims the problem of intentionality is as great as the problem of consciousness – I would contend they are manifestations of the same underlying phenomenon – as though one is passive and the other active. Searle wrote his book, Mind, in part, to offer explanations for these phenomena (although he added the caveat that he had only scratched the surface), whereas I make no such attempt. That’s not to say that in the future we won’t know more, but I also think that our reductionist approach will find its own limitations – I predict we will uncover more knowledge only to reveal more mysteries, as we have done with quantum mechanics.

However, from this premise, I would say that imagination, or ‘intentionality’ (if I interpret it correctly) is a manifestation of mental activity, and one that we are unlikely to find in a machine, but that’s another topic for another day.

Sunday, 9 March 2008

What is Philosophy?

This is one of those topics that is possibly best introduced by discussing what it is not. Traditionally, Western philosophy is divided into categories: ontology, epistemology, ethics, aesthetics and logic. For the layperson, ontology is often described as the ‘nature of being’, epistemology is to do with ‘knowing’: theories of knowledge may be the best description, and could include aspects of language or linguistics. Ethics relates to moral philosophy, aesthetics relates to philosophies of art and beauty, and logic is related to, but not synonymous with, mathematics. One may also include theology as another category.

From my own essays published in this blog to date, one can see that I cover a variety of topics that impinge on a number of disparate fields. After all, one may see a connection between mathematics and science, science and psychology, psychology and morality, but what about mathematics and morality? And some people may conclude from this that philosophy is some sort of umbrella discipline that covers all fields of human knowledge and learning. But this would be misleading: ontology is not religion, epistemology is not science, aesthetics is not art, ethics is not justice or politics, and logic is not mathematics. In other words, all these disparate disciplines are not just branches of philosophy.

So what is the relationship? I would argue that the relationship is dialectical: they feed each other. All these fields, of themselves, involve philosophy, even if it’s at a subconscious level. People who practice in these fields, if challenged to explain their motivations and methodologies, will give a philosophical answer. And the significance of this is that it will both agree with and differ from the answers of others practicing in the same field, even if they have the same level of expertise. To give an example relating to one of my own postings, Is mathematics evidence of a transcendental realm?, I point out how Roger Penrose and Stephen Hawking, who have worked together at the frontiers of cosmology, are philosophically poles apart with regard to their mathematical viewpoints (refer The Large, the Small and the Human Mind by Penrose, Shimony, Cartwright and Hawking).

In that posting, I describe how Kurt Godel ‘proved’ a fundamental premise in mathematical thinking: one cannot derive all mathematics from a set of known axioms. Now some may conclude that this constitutes a ‘philosophical’ proof, but I would contend there are no ‘philosophical proofs’, only proofs that can support a philosophical point of view. You may argue: what is the difference? Well, the difference is that different people, quite commonly, use the same proof to support different philosophical points of view, and Godel’s proof is a case in point. Godel was a Platonist his entire life, while his very good friend, Albert Einstein, was not a Platonist at all. When they lived in Princeton they often took long walks together, which they both apparently enjoyed, able to talk on esoteric subjects at the same level of comprehension (refer A World Without Time by Palle Yourgrau). Obviously, they didn’t always agree, so what was the attraction? Well, based on my own experience, I believe they liked to challenge each other, and be challenged, and that is what practicing philosophy is all about. In a nutshell, philosophy is a point of view supported by rational argument. The corollary to this is that it requires argument to practice philosophy.

To give an entirely different example: many years ago I knew a family of Jehovah’s Witnesses, and we became good friends, keeping in contact for many years. Every now and then, when they were ‘witnessing’ (as a couple) they would come to my place (I lived alone) and we would have a very good argument. I always enjoyed those encounters because they made me dig very deep into my beliefs and I always felt invigorated afterwards, like one does after some vigorous exercise, like running. I always assumed that they felt the same, otherwise why would they do it? But, most importantly, I believe they came to me, not to be convinced, or even to convince me, but to be challenged, like an exercise.

To return to the categories listed in the introduction: in the 20th Century, academically at least, epistemology became the central pillar of modern philosophy, to the extent that, for some philosophers, epistemology and logic are philosophy – everything else is opinion or culture. In this context, many see philosophy as being subordinate to science - a mere footnote to empiricism. This is a philosophical viewpoint in itself, which, many would argue, originated with David Hume. Bertrand Russell, for example, acknowledged that Hume was the most influential philosopher he read. It was Hume who challenged some of our most important assumptions about cause and effect (he argued that we can never know for certain), and who founded the philosophical premise of empiricism which underpins all of science. (John Searle, in Mind, is one of the few I have read who successfully challenges Hume’s philosophy on cause and effect). So one can see the connection between epistemology and science: if science is empiricism and epistemology is knowledge – they go together hand in glove. But there are limits to science, at least in my view, and that’s another topic (see Does the Universe have a Purpose?).

Epistemology logically leads to a discussion on language, because we all think in a language, and all our knowledge acquisition is language based. This leads one to Ludwig Wittgenstein, who was arguably the most influential philosopher of the 20th Century. But before I discuss Wittgenstein, one can’t leave the discussion of epistemology without a reference to mathematics, especially where science is concerned. I’ve already written 2 postings on mathematics: Is mathematics invented or discovered? and Is mathematics evidence of a transcendental realm? So I will be succinct. Arguably, our knowledge of mathematics has provided us with more insight into the machinations of the Universe, at all levels, than any other endeavour. In keeping with the accepted interpretation of epistemology, many would argue that mathematics is just another language, albeit one that is never a first language. This is a philosophical viewpoint that is hard to defend, not least, because numbers (the fundamental elements of all mathematics) never relate to specific entities as descriptors, the way words do. So I would argue that mathematics is totally relevant to epistemology, but in a way that language is not.

Getting back to Wittgenstein, one of his most famous statements was: ‘Philosophy is a battle against the bewitchment of our intelligence by the means of language.' Notice how the statement is deliberately ambiguous, even contradictory: does language bewitch our intelligence or do we combat the bewitchment of our intelligence using language? This statement is more than just a clever wordplay, however, and really does encapsulate Wittgenstein's approach to philosophy. Another philosopher who places language centre stage is Umberto Eco. More famously known as a novelist, he is Professor of semiotics at the University of Bologna. Semiotics, according to my dictionary, is the study of words and signs and their relevance to ideas and the physical world. Eco’s book, Kant and the Platypus, is his attempt to convey to laypeople his philosophy of semiotics, but that is another discussion for another time perhaps.

The key to all this, from my viewpoint, is that language may not be unique to humans, but the manner in which we have exploited it is. We use language not only to describe objects in the real world but to embrace metaphysical ideas and concepts that we structure into arguments for discussion. So the link between language and philosophy goes beyond the mundane and the obvious - it is welded to meaning. Wittgenstein's legacy was that he probably understood this better than anyone else, and he made it his life's work to analyse and explore it in all its ramifications.

A lot could be written on Wittgenstein and the results of his exploration, but, even if I was more familiar with his work, I would not choose this context to do it. For some people however, especially in academia, Wittgenstein represents the culmination of Western philosophical thought from Socrates to the modern day.

This should finish the discussion, but I think there is another misconception that needs to be clarified about what constitutes philosophy and its relationship to other fields. In my introduction I made the point that one can discuss mathematics at a philosophical level as well as morality, as I have done more than once in this blog, but that doesn’t mean that I can support a moral philosophical viewpoint using a mathematical argument or vice versa. This example is obvious, but it’s more relevant when one considers the arguments that arise between science and religion.

There are 3 postings on this blog already that refer specifically to arguments between creationism and evolution, because it’s been a very hot topic in recent decades. It is only because philosophy allows a bridge between these 2 different aspects of human enquiry that this debate exists, yet this very aspect of the debate seems to be lost. I made the point in my posting: Is evolution fact? Is creationism myth? that there is an epistemological divide between science and religion that I’ve never seen considered, let alone discussed. Religion is a personal experience that is unique to the individual who has it, whereas science, being empirically based, requires repetition to make it valid. Science seeks universality and religion is intimately personal, albeit most religious arguments have a historical, cultural context. In the case of creationism versus evolution, the context is both historical and cultural, which is why the debate persists.

But there are other aspects to this debate that need to be aired. I remember reading C.S. Lewis’s account of why he could not accept evolution over a biblical interpretation. He consistently referred to evolution as a ‘story’, and since the biblical interpretation is also a story, it’s just a case of substituting one story for another. One story he believed and the other he didn’t – it was that simple. In philosophical parlance, this would be called a ‘category mistake’, though most category mistakes invoked in philosophical discussions are not so obvious. The point is that people forget that this is a philosophical debate, and that doesn’t allow one to substitute religion for science or science for religion. Substituting a scientific theory with a biblical story doesn’t make it science, which is why creationists dress it up in different clothes, which I call deception, even fraud when it comes to passing it off as science education.

I’ve said elsewhere that science and religion can’t answer each other’s questions, and I’m not sure why they believe they should. Science and theism are not mutually incompatible but evolution and creationism are. A scientist who is a theist knows the limitations of their science and their beliefs. They know they can’t use science to prove that God exists, in whatever manifestation, and likewise they can’t use a belief in God to support a scientific theory.

In my posting, Does the Universe have a Purpose? there is a link to the John Templeton Foundation, where a group of philosophers, scientists and theologians give their responses to this question. Interestingly, but not surprisingly, both theists and atheists use evidence from science to support their particular point of view. It’s not much different to Godel and Einstein disagreeing over the philosophical consequences of Godel’s own theorem.

Creationists exploit the gaps in our current knowledge of evolutionary theory to press their case, with the implication that, because evolution, like most scientific endeavours, is still a theory in progress, the entire theory can be replaced by a pseudo-theory lifted from the Bible, despite the fact that it’s been a hugely successful theory to date. I haven’t heard, or seen, a single creationist suggesting that we should scrap quantum mechanics, even though it defies explanation in plain language, and evolution is arguably no less successful in empirical evidence than quantum theory is.

So this is an area in philosophy where disciplines collide, but they collide in philosophy, not in science or religion. If people recognised this, and thankfully many theologians do, the debate would be more sane and less politically malleable.

Footnote: Some of the best philosophical arguments I've read against creationism have been written by theologians. For an example see link: www.cosmosmagazine.com/features/print/15/bad-faith

Friday, 8 February 2008

Left or Right

This is a letter I wrote to New Scientist in response to an article by Jim Giles. In a nutshell, 'twin studies' have revealed that personality traits like openness, conscientiousness and extroversion/introversion are inherited, and he argues that these traits indirectly affect one's acceptance of new ideas or tendency to resist change.

Reference: New Scientist, 2 February 2008, pp29-31.

The following text is my response, but the last two paragraphs on fundamentalism were added later.

The dichotomy described by the article ‘Are your genes left wing or right wing?’ goes beyond politics, albeit that is where it has its biggest impact. The human population seems divided between those who seek change and those who want to maintain the status quo, and I would argue we need both. While this division seems to be close to 50/50, it is probably more of a spectrum than a polarisation. There are arch-conservatives who want to turn back the clock, and arch-radicals who want change overnight, but most people have more tempered views.

In the 1960s, Carl Rogers commented on the correlation found between people’s certainty about how far a point of light jiggles against a dark background with no point of reference, and their level of racial intolerance as determined by a questionnaire. (The movement of the point of light in this experiment is a well known illusion: the light doesn’t actually move at all.)

I’ve always found it curious, that, by and large, artists are more liberal than other sectors of society. But it’s not so surprising, if one considers that artists are most open to new ideas, are more empathetic to the eccentric and the outsider, and also lack discipline (I’m speaking from personal experience on the last attribute). Even amongst scientists, there are those who are more sceptical, more loyal to traditional ideas, and those who are more likely to entertain fringe concepts, even at the risk of criticism and sometimes ridicule. As I said, I believe we need both.

But the other thing history has demonstrated is that, despite the enormous inertia against change that seems almost natural, change occurs anyway, which would suggest that there is a healthy interaction between these 2 ‘types’ over the long term. In politics, in particular, what was considered radical in the past becomes the norm in the present day; otherwise we would still have slavery and women would not be able to vote. So over the long term, change seems to occur for the better, but in such a way that the conservatives who want to maintain the status quo can accept it as well. It should not be surprising, then, that the most radical changes are generational, whereby the new conservatives have new conservative values that were previously considered liberal.

What I have found, from my own experience, is that, despite the prejudices that seem to arise from this divide, qualities like honesty, loyalty and integrity appear to be neither monopolised nor decidedly lacking from either side.

Sometimes, of course, the change can go the other way, and I’m thinking specifically of fundamentalism, which provides an attractive refuge for anyone who feels they’re a lost soul, especially an alienated lost soul. It provides certainty in a world full of unknowns. Fundamentalism is the ultimate form of certainty: it provides an answer to all situations and all questions. Everything is black and white: there is no grey, no doubt and no need to wonder.

In a sense, fundamentalism is a radical form of conservatism, the end result being a complete intolerance of any other point of view. There is no greater conflict than that experienced between 2 or more fundamentalist groups, as we are currently witnessing on a global scale. Fundamentalism is always considered an ultra-conservative position, and the logical consequence is that, in the case of conflict, only moderates can broker a peace, which is rather ironic for the parties involved. As far as the fundamentalists are concerned, peace can only come with the annihilation of the other, which translates to conflict without end.

Footnote: In reference to the third last paragraph, there is an example, albeit a fictional one, in my novel, ELVENE. For anyone who has read the book, it's obvious that the character, Elvene, is liberal, and her immediate superior, Roger, is conservative. Yet both display qualities of loyalty and integrity, and both will buck the system if they feel morally compromised. I have witnessed this in real life.