Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Sunday, 1 June 2008

Is there a God?

This was the 'Question of the Month' posed in Philosophy Now, Issue 65 (January/February 2008).

To be put into perspective, this post should be read with one of my earlier posts, God, theism, atheism (Aug.07), and possibly also Does the Universe have a Purpose? (Oct.07) I need to add the significant caveat that I don’t expect others to believe what I believe. Religion is a very personal, even intimate, experience. As I said in that earlier posting, I believe atheism is a perfectly valid and honest point of view. I only have a problem with atheists when they insist they are axiomatically intellectually superior to theists, in the same way that some theists believe they are axiomatically morally superior to atheists. Both points of view are equally fallacious to me.


I’ve made the point previously that there is only one objective and honest religious truth: we don’t know. The corollary to this is that religion is an experience that is as subjective as consciousness itself.

Below is my submission to Philosophy Now.


To answer this question one must ask another: what is God? Even if the answer to the original question is in the negative, one must still explain God away as a cultural artefact or a myth or a psychological phenomenon. I believe a good starting point is the 19th century philosopher Ludwig Feuerbach’s statement: ‘God is the outward projection of man’s inner nature’. Yet it’s more complex than that, as it always manifests itself as an interaction. Firstly, I believe that God is an experience, and I readily concede that if a person has had no experience that they would call God, then they would logically be an atheist. I make this assertion from the simple pragmatic fact that the only experience we have of God is in our minds, not out there. Some people have this experience and some don’t. Those who don’t tend to think that those who do are either irrational or delusional. Those who do tend to rationalise their experience within a cultural context. So the answer to this question is very personal and very subjective. Whilst one can rationalise a ‘creator’ God based on the freakish laws of nature that culminated in our conscious existence, the experience of God is independent of any such rationalisation.

For me, God is a response to introspection at the deepest level, that comes from one's sense of connection to humanity and even to other living things – in other words, to nature. If one considers that we live on a grain of sand on a beach amongst a shoreline of beaches separated by oceans from other shorelines of beaches, one gets the sense of our truly inconsequential existence, yet it also produces a great humility. It is a sense of a greater purpose that leads one to consider God, either as an entity, or a source, or perhaps a destination. It is only because we have the mind to stretch beyond our mortal existence, in this way, that we believe in its possibility. This perception of something far greater, beyond us, can create supreme humility or supreme egotism – it depends on the beholder.

Footnote: The best book I've read on this subject is Karen Armstrong's A History of God. As I mention in response to a comment below, The Unconscious God by Viktor Frankl gives another perspective again.

I also point out in my response to the same comment that I appreciate that different people have different ideas of what or who God is. I think that is an important, and often overlooked, point.

Addendum: I came across this quote in the I Ching, which seems appropriate.

"There, in the depths of the soul, one sees the Divine, the One... To know this One means to know oneself in relation to the cosmic forces. For this One is the ascending force of life in nature and man."

Thursday, 15 May 2008

Aristotle, Confucius, Ethics and Happiness

This is an essay I wrote whilst a student, after I read Aristotle’s Ethics, one of the true classics. Anyone can buy it, even in paperback – it’s still in print, thanks to Penguin. It’s quite incredible that the ruminations of thinkers from 500 to 300 BC are still relevant today, yet despite the façade that we present, is humanity any more civilized today than it was then? I think that, as long as hypocrisy dominates integrity in politics, civilization will struggle to achieve its unstated goal.

Even slavery still exists, though in a more insidious form. At least, back in Aristotle’s time, a slave was called a slave, whereas today they are called ‘illegals’, or ‘indentured’ in cases where the practice has been legalised. For the sake of clarity, I call slavery the practice of ‘bonding’ an ‘employee’ with a debt they can’t pay off, so that they effectively work for nothing. It’s much more common than people realise, and it’s not just prostitution or the underworld that is involved. I’m slightly off track, but it’s a detour that makes relevant my belief that, though history makes us more aware, it takes an unclouded eye to see the truth up close.

The essay originally had the title: What is the connection between happiness and moral behaviour? Those who have read my post on Human Nature will recognise that I’ve lifted the reference to Plato’s dialogue on the ‘just and unjust man’ straight from this essay.

This is not a comparison, by the way, between Aristotle and Confucius, which I understand has been done by others, though I don’t know who those others are. Nevertheless, both men saw themselves as teachers, and both had an influence that spans well over 2,000 years. Leaving aside the world's three most famous mystics – Buddha, Jesus and Mohammed – arguably only Pythagoras’s legacy has had a greater influence on the global cultural evolution of the past 2,500 years (read Kitty Ferguson’s The Music of Pythagoras). Below is the original essay (I've removed all the references, but most of the quotes are either from the Penguin edition of Ethics or, for Confucius, from Encyclopaedia Britannica).

Whilst no one would consider happiness and morality mutually exclusive, there has been a tendency, both from our Christian traditions and from Freud’s pleasure principle, to consider them a necessary compromise. However, as early as the 4th century BC, both Aristotle and, to some extent, Plato before him challenged this most pervasive view of humanity. Leaving Plato’s arguments aside for the time being, Aristotle’s treatment in his Nicomachean Ethics is by far the most comprehensive, and leads the way in developing a philosophical nexus between happiness and moral behaviour.

It is a central theme of Aristotle’s Ethics that happiness is the greatest ‘good’, and while this is discussed specifically in Chapter vii, Book I, it reoccurs in his discussions on Virtue, Friendship and Contemplation. Like most seminal works of the intellect, Aristotle’s Ethics is significant not only for what it contains, but for what one believes is missing. It is in filling in the gaps that one grasps the greatest insights and inspiration from his work. I will attempt to elucidate on what I perceive are the strengths of Aristotle’s arguments, as well as discuss the Ethics’ shortcomings in light of what others have contributed to the subject.

Firstly, the word ‘happiness’ is less than ideal as a translation of 'eudaimonia', as many point out, including Jonathan Barnes in his introduction to the Penguin edition. In fact, many use the term ‘the good life’, but it too is less than ideal. To quote Barnes: ‘... the eudaimon is the man who makes a success of his life and actions, who realises his aims and ambitions as a man, who fulfils himself.’ But later in the same passage he confuses us by saying: ‘It will not, of course, do to replace “happiness” by “success” or “fulfilment” as a translation of eudaimonia...’

But leaving Barnes’s comments aside, in the aforementioned Book I it is obvious that, when Aristotle is talking about happiness, he is talking about a lifelong event: it is, in effect, the sum of a person’s life. ‘One swallow does not make a summer... Similarly neither can one day, or a brief space of time, make a man blessed and happy.’ He makes it clear he is not talking about fleeting moments of pleasure that we all experience; he is talking about achieving our highest potential as human beings. ‘...happiness demands not only complete goodness but a complete life.’ In his concluding Book X, he goes further and gives this attribute an almost religious significance.

It seems to me that there are two aspects of Aristotle’s happiness or eudaimonia, and they are intrinsically related. One is to do with our day to day conduct and the pursuit of personal goals, and the other is to do with our interaction with others. It is obvious that these two facets of living cannot be separated, yet Aristotle fails to make this connection explicit.

The central tenet of Aristotle’s treatise on virtue is the much discussed ‘golden mean’. He gives examples, from how to manage money to bravery. A man deficient in courage behaves in a cowardly fashion, but the man too confident in his own abilities is foolhardy. I found Aristotle’s elaboration and exposition on ‘the golden mean’ long-winded to the point of being tiresome, but there is one brief passage which everyone can relate to, and which encapsulates the concept of eudaimonia as it arises in our everyday lives.

‘By virtue I mean moral virtue since it is this that is concerned with feelings and actions.... But to have these feelings at the right times on the right grounds towards the right people for the right motive and in the right way is to feel them to an intermediate, that is to the best, degree; and this is the mark of virtue.’

Aristotle considered this passage so important that he virtually repeats it in his summing up of Book II. The point about this passage, and its reiteration, is that it brings together both aspects of eudaimonia that I alluded to above: as a means of living one’s life and relating to others. Aristotle makes the additional point that this constitutes a moral virtue, but that is better understood when one reviews his thesis on friendship. It is in regard to friendship that I find the two aspects of eudaimonia most closely aligned.

The thrust of Aristotle’s discussion is that true friendship, as opposed to utilitarian friendship, is in itself a moral virtue, and that a friendship of this quality is dependent upon an individual’s moral character. Aristotle was aware that one cannot obtain a good friendship unless one is oneself a good person. In some respects, Aristotle used his particular concept of friendship as a measure of a person’s goodness or moral character. I believe this is the key to Aristotle’s philosophy, because living requires by necessity an interaction with others and the quality of that interaction by and large determines the quality of our lives. Whilst this is as much psychology as philosophy, it is the essence of both living a ‘good life’ and of being a ‘good person’.

If Aristotle’s discussion on friendship is his most accessible and most readily appreciated, his discussion on contemplation is probably the most vague and the most open to diverse interpretations. He concludes the Ethics with a discussion on contemplation, raising it as the highest goal for philosophy and life in general. In this regard it takes on religious significance. Barnes criticises Aristotle’s thesis because Aristotle argues that it is only acquired knowledge that is worth contemplating not research, but I think this misses the point completely. There are two other philosophers who can throw light on this subject: one who influenced Aristotle and one who did not.

Appendix A by Hugh Tredennick in the Penguin edition provides a synopsis of Pythagoras’s philosophy and influence, with particular reference to his religious views. Pythagoras is best remembered as a mathematician who first perceived and quantified the relationship between mathematics and musical tones. But Tredennick points out that he was first a religious teacher who believed in the transmigration of the soul. ‘...his view of philosophy as a way of life, a contemplative activity for the emancipation of the soul’ shows the influence Pythagoras apparently acquired from his travels in the East (according to Encyclopaedia Britannica, 1989; Ferguson, 2008, however, says that this belief came from Orphism, not Eastern influences as many believe). Tredennick also makes the point that this view no doubt influenced Plato and Aristotle. Here we have the idea of contemplation as a means not only of achieving virtue but of achieving immortality in a religious sense. But it is another philosopher, who lived in Pythagoras’s time, who I believe provided a better exposition of contemplation as a form of self-realisation.

One cannot help but perceive similarities between Aristotle’s Ethics and the teachings of Confucius who lived approximately 2 centuries earlier, as both were concerned with the moral character of individuals and the application of ethics in political life. But Confucius’s ideas on contemplation are closer to contemporary ideas in psychology than Aristotle’s and therefore are more accessible. His view of contemplation is looking inward at the deepest inner self. Confucius (in Chinese, K’ung-fu-tzu) was a strong believer in self-knowledge and self-examination as a path to moral rectitude.

This view is probably best expressed in the modern idiom as soul-searching, whereby one attempts a higher degree of self-honesty, which is echoed not only in modern psychotherapy but also in Sartre’s idea of ‘authenticity’. Confucius also understood the significance of our relationship with others in achieving enlightenment of the soul or self. From the Analects: ‘A man of humanity, wishing to establish himself, also establishes others, and wishing to enlarge himself also enlarges others. The ability to take as analogy what is near at hand can be called the method of humanity.’ (6:30)

But unlike Aristotle, Confucius would have argued that eudaimonia in the form of success and fulfilment is possible even when a man faces adversity and misfortune. Confucius knew this from personal experience. (He spent 12 years in self-imposed exile, and was unemployed and homeless, but during this period his circle of students increased and his reputation flourished.) It is also the theme of innumerable narratives, some fiction and some not, that continue to inspire us. But if we take either the Pythagorean or the Confucian view of contemplation, then Aristotle’s argument for making contemplation the best means for an individual to achieve the highest ‘good’ starts to acquire validity. It could be argued of course that Aristotle’s conclusion fails to make this clear, but I at least can see a valid argument even if I have to construct that argument myself.

As I intimated in my introduction, it is what I believe Aristotle left out of his Ethics that contributes most to a nexus of happiness and morality. This is best understood I believe by contemplating Plato’s dialogues on the ‘just’ and ‘unjust’ man. The basic argument is that the unjust man can get away with whatever he wants if he’s in a position of power or persuasion, and being unjust has no negative consequences for him, only for others. On the other hand, under these circumstances a just person can never be happy because he can simply never win and therefore will always be the unfortunate one. Plato’s argument is that the just man is temperate and can curb his appetites and desires with his rational abilities. There is a great deal that can be contended with this point of view, and it doesn’t address the issue of happiness and morality as being concordant, but to focus on this aspect of Plato’s dialogues would be a digression. The essential element missing from Plato’s argument, and unrevealed in Aristotle’s thesis, is the social consequence of being just or unjust.

To put the argument another way, one should consider rewards as a criterion for happiness or beneficence. What are the rewards for being just compared to the rewards for being unjust? Simply put, the rewards for being unjust are material rewards, assuming one can get away with it. The rewards for being just are less tangible, but they are related to the notion of a social contract. Not the social contract of the 17th and 18th century philosophers, but the more intimate social contract inherent in Aristotle’s friendship. The rewards for being just are friendship, loyalty and trust. It can be argued that these rewards also exist for the unjust man, but they are contingent on his material possessions, wealth and power – in other words, they are utilitarian. For the just man these rewards extend beyond immediate close associates, and they are based on the man’s character, nothing else.

There is another negative effect resulting from being unjust, which is more subtle. The unjust person must necessarily create a distorted perception of his or her world. The unjust man or woman suffers from a dishonesty to the self not unlike Sartre’s notion of mauvaise foi. The unjust person believes that his or her rewards are justifiably earned and that the fate of those less fortunate is self-inflicted. Even Hitler believed that what he was doing was for the betterment of our world. The unjust person often believes, contrary to the perceptions of others, that his or her view of the world is completely just. This psychological component of the just and unjust person is not considered by Plato, or Aristotle for that matter, possibly because of the distorted perceptions that existed within their own society. After all, no one at that time, no matter how enlightened, would have taken into consideration the plight of slaves in a discussion of what was just and unjust, or of what constituted a ‘good person’.

Sunday, 27 April 2008

Trust

Trust is the foundation stone for the future success of humanity. A pre-requisite for trust is honesty, and honesty must start with honesty to one’s self. A lot has been written about the evolutionary success that arises from our ability to lie, but I would argue that dishonesty to the self is the greatest poison one can imbibe, and is the starting point for many of the ills faced by individuals and societies alike.

No one is immune to lying – we’ve all lied for various reasons: some virtuous, some not. But it is when we lie to ourselves that we paradoxically lay the groundwork for a greater deception to the outside world. Look at the self-deception of some of the most notorious world leaders, who surround themselves with acolytes, so they can convince the wider world of the virtue of their actions.

When I was very young, 6 or 7 (50 years ago now), I learned my first lesson about lying that I can still remember. I was in a school playground when someone close by ended up with a bleeding nose – to this day I’ve no idea what actually happened. Naturally, a teacher was called, and she asked, ‘What happened?’ A girl nearby pointed at me and said, ‘He hit him.’ I was taken to the Head Mistress, who was a formidable woman. In those days, children were caned for less, though I had never been caned up to that point in my schooling. At that age, when I arrived home from school, my father sometimes asked me, ‘Did you get the cane today?’ It was always very important to me to be able to say ‘no’, as I hated to think of the inquisition that would have followed if I’d ever said ‘yes’.

Back to the Head Mistress; I remember our encounter well. The school classrooms were elevated with a verandah, and we sat outside looking down at the courtyard, which was effectively deserted – the playground, where the incident had occurred, was out of sight. Her first question may have been: ‘Why did you hit him?’ or it may have been: ‘Tell me what happened.’ It doesn’t really matter what she actually said, because the important thing was that I realised straightaway that the truth would be perceived as a lie. I had to tell her something that she would believe, so I told her a story that went something like this: ‘We were both running and I ran into him.’ Her response was something like: ‘That’s interesting, I wasn’t told you were running. You’re not supposed to run.’ I knew then, possibly by the tone of her voice, that I had got away with it.

What’s most incredible about this entire episode is that it’s so indelibly burned into my brain. I learned a very valuable lesson at a very early age: it’s easier to tell a lie that people want to hear than a truth they don’t. Politicians, all over the world, practise this every day, some more successfully than others. For example, if soldiers commit a massacre, the powers-that-be can often deny it with extraordinary success, for the very simple reason that ordinary people would much prefer to ‘know’ that the massacre never happened than to ‘know’ the truth. (Hugh Mackay, in his excellent book, Right & Wrong: How to Decide for Yourself, refers to this as 'telling people the lies they want to hear'.)

Sometime in the last decade, a worldwide survey on 'trust' within various societies was conducted, and it revealed a remarkable correlation. (I don’t know who commissioned it; I read about it in New Scientist.) It found that the degree of trust between individuals in business transactions was directly dependent on the degree of trust they had in their government. So trust starts at the top, which is why I opened this essay with the sentence I chose. Trust starts with world leaders, and the more powerful they are, the more important it is.

A very good barometer of the health of a democracy is its media. By this criterion, America is one of the healthiest democracies in the world. We all take pot shots at America, including me, but most of the criticism, and all the ammunition for the criticisms that I level at America, come from the American press themselves. The other emerging power in the 21st Century, China, and the re-emerging power, Russia, have quite a different view on what criticisms they tolerate, both internally and externally. In Russia, journalists have been assassinated, and China is 'the world's leading jailer of journalists' according to CPJ (Committee to Protect Journalists).

Without trust, there can be no negotiations, no security and no creativity for individuals; the world will be forced to conform to a parody of democracy, a façade and ultimately a farce. Whatever the political or economical outcomes of the 21st Century, there will be enormous pressure on humanity worldwide. Trust, on a global scale, will be requisite for a stable and sustainable future. It is only because of the media that debates can take place between groups and with an informed public. It is the role of the media to keep politicians honest, not only to themselves, but also to the rest of us. It is when politicians usurp this role that trust disappears. Everywhere.


Footnote: I wrote this almost immediately after I saw the U2 3D concert in a cinema. I came out of the theatre with the first sentence already in my head. So I had to write it down, and the rest just followed.

Clive James made the point in an interview last year, that democracy is not the norm, it's the exception; in the West, we take democracy for granted.

This issue is complementary to issues I discuss, in a different context, in my post entitled Human Nature (Nov. 07).

Friday, 11 April 2008

The Ghost in the Machine

One of my favourite Sci-Fi movies, amongst a number of favourites, is the Japanese anime, Ghost in the Shell, by Mamoru Oshii. Made in 1995, it’s a cult classic and appears to have influenced a number of sci-fi storytellers, particularly James Cameron (Dark Angel series) and the Wachowski brothers (Matrix trilogy). It also had a more subtle impact on a lesser known storyteller, Paul P. Mealing (Elvene). I liked it because it was not only an action thriller, but it had the occasional philosophical soliloquy by its heroine concerning what it means to be human (she's a cyborg). It had the perfect recipe for sci-fi, according to my own criterion: a large dose of escapism with a pinch of food-for-thought. 

But it also encapsulated two aspects of modern Japanese culture that are contradictory by Western standards. These are the modern Japanese fascination with robots, and their historical religious belief in a transmigratory soul, hence the title, Ghost in the Shell. In Western philosophy, this latter belief is synonymous with dualism, famously formulated by René Descartes, and equally famously critiqued by Gilbert Ryle. Ryle was contemptuous of what he called, ‘the dogma of the ghost in the machine’, arguing that it was a category mistake. He gave the analogy of someone visiting a university and being shown all the buildings: the library, the lecture theatres, the admin building and so on. Then the visitor asks, ‘Yes, that’s all very well, but where is the university?’ According to Ryle, the mind is not an independent entity or organ in the body, but an attribute of the entire organism. I will return to Ryle’s argument later.

In contemporary philosophy, dualism is considered a non sequitur: there is no place for the soul in science, nor in ontology apparently. And, in keeping with this philosophical premise, there are a large number of people who believe it is only a matter of time before we create a machine intelligence with far greater capabilities than humans, with no ghost required, if you will excuse the cinematic reference. Now, we already have machines that can do many things far better than we can, but we still hold the upper hand in most common-sense situations. The biggest challenge will come from so-called ‘bottom-up’ AI (Artificial Intelligence): self-learning machines, computers, robots, whatever. But most interesting of all is a project, currently in progress, called the ‘Blue Brain’, run by Henry Markram in Lausanne, Switzerland. Markram’s stated goal is to eventually create a virtual brain that will be able to simulate everything a human brain can do, including consciousness. He believes this will be achieved in 10 years’ time or less (others say 30). According to him, it’s only a question of grunt: raw processing power. (Reference: feature article in the American science magazine, SEED, 14, 2008)

For many people, who work in the field of AI, this is philosophically achievable. I choose my words carefully here, because I believe it is the philosophy that is dictating the goal and not the science. This is an area where the science is still unclear if not unknown. Many people will tell you that consciousness is one of the last frontiers of science. For some, this is one of 3 remaining problems to be solved by science; the other 2 being the origin of the universe and the origin of life. They forget to mention the resolution of relativity theory with quantum mechanics, as if it’s destined to be a mere footnote in the encyclopaedia of complete human knowledge. 

There are, of course, other philosophical points of view, and two well known ones are expressed by John Searle and Roger Penrose respectively. John Searle is best known for his thought experiment of the ‘Chinese Room’, in which someone sits in an enclosed room receiving questions, in Chinese, through an 'in box' and, by following specific instructions (in English, in Searle's case), provides answers in Chinese through an 'out box'. The point is that the person behaves just like a processor and has no knowledge of Chinese at all. In fact, this is the perfect description of a ‘Turing machine’ (see my post, Is mathematics evidence of a transcendental realm?), only instead of tape going through a machine you have a person performing the instructions in lieu of a machine.
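As an aside, Searle's set-up is easy to caricature in a few lines of code. The sketch below is my own toy illustration, not anything from Searle's paper: the 'operator' matches symbols against a rule book and produces plausible-looking answers while understanding nothing at all.

```python
# A toy Chinese Room: the operator produces sensible-looking replies by
# pure symbol matching, with no understanding of what the symbols mean.
# The rule book and questions are invented stand-ins for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
}

def operator(question: str) -> str:
    """Follow the instructions mechanically; understand nothing."""
    # The default reply means "Please say that again."
    return RULE_BOOK.get(question, "请再说一遍。")

print(operator("你好吗？"))  # the room appears to 'speak' Chinese
```

The operator here is interchangeable with any processor that can look up a table, which is precisely Searle's point.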

The Chinese Room actually had a real-world counterpart: not many people know that, before we had computers, small armies of people (usually women) were employed to perform specific but numerous computations for a particular project, with no knowledge of how their specific input fitted into its overall execution. Such a group was employed at Bletchley Park during WWII, where Turing worked on the decoding of Enigma transmissions. These people were called ‘computers’, and Turing was instrumental in streamlining their analysis. However, according to Turing’s biographer, Andrew Hodges, Turing did not develop an electronic computer at Bletchley Park, as some people believe, and he did not invent the Colossus, or Colossi, that were used to break another German code, the Lorenz, ‘...but [Turing] had input into their purpose, and saw at first-hand their triumph.’ (Hodges, 1997)

Penrose has written 3 books, that I’m aware of, addressing the question of AI (The Emperor’s New Mind, Shadows of the Mind and The Large, the Small and the Human Mind) and Turing’s work is always central to his thesis. In the last book listed, Penrose invites others to expound on alternative views: Abner Shimony, Nancy Cartwright and Stephen Hawking. Of the three books referred to, Shadows of the Mind is the most intellectually challenging, because he is so determined not to be misunderstood. I have to say that Penrose always comes across as an intellect of great ability, but also great humility – he rarely, if ever, shows signs of hubris. For this reason alone, I always consider his arguments with great respect, even if I disagree with his thesis. To quote the I Ching: ‘he possesses as if he possessed nothing.’ 

Penrose’s predominant thesis, based on Gödel’s and Turing’s proofs (which I discuss in more detail in my post, Is mathematics evidence of a transcendental realm?), is that the human mind, or any mind for that matter, cannot possibly run on algorithms, which are the basis of all Turing machines. So the human mind is not a Turing machine, is Penrose’s conclusion. More importantly, in anticipation of a further development of this argument, algorithms are synonymous with software, and the original conceptual Turing machine, which Turing formulated in his ‘halting problem’ proof, is really about software. The Universal Turing machine is software that can duplicate all other Turing machines, given the correct instructions, which is what software is.
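For readers unfamiliar with the formalism, a Turing machine is nothing more than a table of instructions driving a read/write head over a tape, which is why algorithms and software amount to the same thing. The following minimal sketch is my own devising, not Turing's original construction: a trivial machine whose entire behaviour (here, inverting a binary string) is determined by its rule table.

```python
# A minimal one-tape Turing machine. The whole 'machine' is the rule
# table passed in: swap the table and you have a different machine,
# which is exactly the sense in which algorithms are software.

def run_turing_machine(rules, tape, state="start", halt="halt"):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = dict(enumerate(tape))  # sparse tape; '_' means blank
    head = 0
    while state != halt:
        symbol = tape.get(head, "_")
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# A machine that inverts a binary string, halting at the first blank.
FLIP = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_turing_machine(FLIP, "1011"))  # -> 0100
```

A Universal Turing machine is then just a rule table whose input tape encodes some other machine's rule table: software running software.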

To return to Ryle, he has a pertinent point in regard to his analogy, that I referred to earlier, of the university and the mind; it’s to do with a generic phenomenon which is observed throughout many levels of nature, which we call ‘emergence’. The mind is an emergent property, or attribute, that arises from the activity of a large number of neurons (trillions) in the same way that the human body is an emergent entity that arises from a similarly large number of cells. Some people even argue that classical physics is an emergent property that arises from quantum mechanics (see my post on The Laws of Nature). In fact, Penrose contends that these 2 mysteries may be related (he doesn't use the term emergent), and he proposes a view that the mind is the result of a quantum phenomenon in our neurons. I won’t relate his argument here, mainly because I don’t have Penrose's intellectual nous, but he expounds upon it in both of his books: Shadows of the Mind and The Large, the Small and the Human Mind; the second one being far more accessible than the first. 

The reason that Markram, and many others in the AI field, believe they can create an artificial consciousness, is because, if it is an emergent property of neurons, then all they have to do is create artificial neurons and consciousness will follow. This is what Markram is doing, only his neurons are really virtual neurons. Markram has ‘mapped’ the neurons from a thin slice of a rat’s brain into a supercomputer, and when he ‘stimulates’ his virtual neurons with an electrical impulse it creates a pattern of ‘firing’ activity just like we would expect to find in a real brain. On the face of it, Markram seems well on his way to achieving his goal. 
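To give a concrete sense of what stimulating a 'virtual neuron' and getting a firing pattern might look like, here is a toy sketch using the textbook leaky integrate-and-fire model. It is vastly simpler than Blue Brain's detailed neuron models, and the parameters are purely illustrative:

```python
# A toy 'virtual neuron': inject a current and it produces a spike
# train. This is the standard leaky integrate-and-fire model, far
# cruder than anything in Blue Brain; values are illustrative only.

def simulate_lif(current, steps=100, dt=1.0,
                 tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Return the time steps at which the neuron 'fires'."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # membrane potential leaks toward rest, driven by input current
        v += dt * (-(v - v_rest) / tau + current)
        if v >= v_thresh:        # threshold crossed: emit a spike
            spikes.append(t)
            v = v_rest           # reset after firing
    return spikes

# A constant stimulus yields a regular spike train; no stimulus, none.
print(simulate_lif(current=0.15))
print(simulate_lif(current=0.0))   # -> []
```

The point of the analogy is that the 'firing pattern' falls out of the model once the neuron is stimulated, which is essentially what Markram observes in his simulated slice, albeit at vastly greater scale and fidelity.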

But there are two significant differences between Markram’s model (if I understand it correctly) and the real thing. All attempts at AI, including Markram’s, require software, yet the human brain, or any other brain for that matter, appears to have no software at all. Some people might argue that language is our software, and, from a strictly metaphorical perspective, that is correct. But we don’t seem to have any ‘operational’ software, and, if we do, the brain must somehow create it itself. So, if we have a ‘software’, it’s self-generated by the neurons themselves. Perhaps this is what Markram expects to find in his virtual neuron brain, but his virtual neuron brain already is software (if I interpret the description given in SEED correctly).

I tend to agree with some of his critics, like Thomas Nagel (quoted in SEED), that Markram will end up with a very accurate model of a brain’s neurons, but he still won’t have a mind. ‘Blue Brain’, from what I can gather, is effectively a software model of the neurons of a small portion of a rat’s brain running on 4 supercomputers comprising a total of 8,000 IBM microchips. And even if he can simulate the firing pattern of his neurons to duplicate the rat’s, I suspect it would take further software to turn that simulation into something concrete like an action or an image. As Markram says himself, it would just be a matter of massive correlation, using the supercomputer to reverse the process. So he will, theoretically, and in all probability, be able to create a simulated action or image from the firing of his virtual neurons, but will this constitute consciousness? I expect not, but others, including Markram, expect it will. He admits himself that, if he doesn’t get consciousness after building a full-scale virtual model of a human brain, it would beg the question: what is missing? Well, I would suggest that what would be missing is life, which is the second fundamental difference that I alluded to in the preceding paragraph, but didn’t elaborate on.

I contend that even simple creatures, like insects, have consciousness, so you shouldn’t need a virtual human brain to replicate it. If consciousness equals sentience, and I believe it does, then that covers most of the animal kingdom. 

So Markram seems to think that his virtual brain will not only be conscious, but also alive – it’s very difficult to imagine one without the other. And this, paradoxically, brings one back to the ghost in the machine. Despite all the reductionism and scientific ruminations of the last century, the mystery still remains. I’m sure many will argue that there is no mystery: when your neurons stop firing, you die – it’s that simple. Yes, it is, but why are life, consciousness and the firing of neurons so concordant and so co-dependent? And do you really think a virtual neuron model will also exhibit both these attributes? Personally, I think not. And to return to cinematic references: does that mean, as with HAL in Arthur C. Clarke’s 2001: A Space Odyssey, that when someone pulls the plug on Markram’s ‘Blue Brain’, it dies?

In a nutshell: nature demonstrates explicitly that consciousness is dependent on life, and there is no evidence that life can be created from software, unless, of course, that software is DNA.
 

Thursday, 27 March 2008

The Laws of Nature

This is another posting arising from an intellectually stimulating read: Michael Frayn’s The Human Touch, subtitled, Our Part in The Creation of the Universe. The short essay below is in response to just one chapter, The Laws of Nature. I have to say at the outset that Frayn is far more widely read than I am, and his discussion includes commentary and ruminations by various scientists and philosophers: Popper, Kuhn, Einstein, Planck, Bohr, Born, von Neumann, Feynman, Gell-Mann, Deutsch, Taylor, Prigogine and Cartwright, amongst others. A number of these I have not read at all, but I find it strange that he does not include Penrose (except for one passing reference in his notes) or Davies, who have written extensively on this subject, and have well known philosophical views.

I haven’t finished reading Frayn’s text (though I’ve read his extensive notes on the subject) so I may have more to say later, and the following was originally written in the form of a ‘letter to the author’, which I never sent.

The heart of Frayn’s dissertation on the ‘laws of nature’ seems to be the interaction between the human intellect and the natural world. There are 2 antithetical views, both of which involve mathematics, because, in physics at least, the ‘laws of nature’ can only be expressed in mathematics. We may give descriptions in plain language, creating man-made categories in our attempts, but without mathematics we would not be able to validly call them ‘laws’, whether they be fiction or otherwise.

The first of these antithetical views is that we have invented mathematical methods, which have evolved into, sometimes complex, sometimes simple, mathematical models that we can apply to numerous phenomena we observe, and, in many cases, find a near-perfect fit. The second view is that the laws already exist in nature, and mathematics is the only means by which they can be revealed. I tend to subscribe to the second view. I disagree philosophically with Einstein, who contended, quite reasonably, that 'the series of integers is obviously an invention of the human mind', but agree with him that there is an underlying order in the machinations of the universe. (In regard to Einstein's contention, I discuss this argument in detail in 2 other postings; but, even if we invent the numbers, the relationships between them we do not.)

We humans puzzle over facts like the planets maintaining their orbits for millions of years, or the self-organising properties of galaxies, or of life for that matter, or the predictability of the effects of light shone through slits. We look for patterns, so we are told, and therefore we project patterns onto the things we observe. But science has demonstrated that there are not only patterns in nature, but relationships between events that can be described in mathematics to unreasonable degrees of accuracy. My own view is that the mathematical relationships found in nature are not projected, it’s just that the deeper we look the more unfamiliar the relationships become.

It seems to me the laws, for want of a better word, exist in layers, so that, at different scales different ones dominate. It follows from this that we haven’t discovered them all, and possibly we never will, but it doesn’t mean that the ones we have discovered are therefore false or meaningless. I have had correspondence with philosophers of science who believe that one day we will find the one governing law or set of laws that will make all current laws obsolete, which means the current ones are false and meaningless, but history would suggest that this goal is as mythical as the original Holy Grail.

Everyone cites Einstein’s theories of relativity superseding Newton’s laws as the prime example of this process, yet the same ‘everyone’ uses Newton’s equations rather than Einstein’s for most purposes, because they are simpler and just as accurate for their requirements. Einstein made the point (according to Frayn’s reference, Abraham Pais) that Newton’s mechanics were based on ‘fictional principles’, yet gave the same results as his own theories for many phenomena. (Frayn quotes Pais in his belief that Einstein thought all theories were fictions.) I believe the main ‘fictional principle’ inherent in Newton's theory (for gravity at least) arises from the fact that there is no force experienced in gravity if you are in free fall; there is only a force when you are stopped from falling. This is arguably the most significant conceptual difference between Newton's and Einstein's theories, and appears to be one of the key motivations for Einstein seeking a different mathematical interpretation of gravity.

Einstein’s theories are an example of how the laws of nature are not what they appear to be at the scale we experience them, specifically in regard to gravity, space, time and mass. His equations supersede Newton’s, in all respects, because they more accurately describe the universe on cosmological and atomic scales, but they reduce to Newton’s equations when certain parameters become negligible.
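
This reduction can be made concrete with a small numerical check (my own illustration, not Frayn’s): relativistic kinetic energy, (gamma - 1)mc^2, is indistinguishable from Newton’s (1/2)mv^2 at everyday speeds, and only diverges as speeds approach that of light.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def newton_ke(m, v):
    """Newtonian kinetic energy, (1/2) m v^2."""
    return 0.5 * m * v**2

def einstein_ke(m, v):
    """Relativistic kinetic energy, (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    return (gamma - 1.0) * m * C**2

# A 1000 kg object at 3 km/s (roughly orbital speed): the ratio of the
# two answers is essentially 1, which is why Newton's equation is the
# one everyone actually uses.
print(einstein_ke(1000, 3000.0) / newton_ke(1000, 3000.0))

# At 90% of light speed, Newton's answer is out by a factor of ~3.
print(einstein_ke(1000, 0.9 * C) / newton_ke(1000, 0.9 * C))
```

The ‘certain parameters’ here are the terms in v/c: as v/c becomes negligible, gamma collapses to 1 and Einstein’s formula collapses to Newton’s.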

On the other hand, quantum mechanics appears to be another set of laws altogether that lie behind the classical laws (including relativity) that only become apparent at atomic and sub-atomic scales. I would suggest, however, that this dissociation between the quantum and classical worlds is a result of a gap in our knowledge, as contended by Roger Penrose, rather than evidence that the ‘laws of nature’ are all fictions. Assuming that this gap can be resolved in the future, new laws expressed in new or different mathematical relationships would be revealed. It’s not axiomatic that these future discoveries will make our current knowledge obsolete or irrelevant, but, hopefully, less mystifying.

I would make the same prediction concerning our knowledge of evolution. In the same way that Darwin proposed a theory of evolution based on natural selection, without any knowledge of genes or DNA, future generations will make discoveries revealing secrets of DNA development which may change our view on evolution. I’m not talking about ‘Intelligent Design’, but discoveries that will prove ID a non sequitur; as ID is currently a symptom of our ignorance, not an aid to future discoveries as claimed by its proponents. (See my Nov.07 post: Is evolution fact? Is creationism myth?)

There are deep, fundamental, inexplicable principles involved when one examines natural phenomena. Not-so-obvious examples are the principle of least time in refraction (referenced in Frayn’s text; intuited by the 17th century mathematical genius Pierre de Fermat) and the principle of maximum relativistic time in gravity, expounded brilliantly by Richard Feynman in Six Not-So-Easy Pieces. These principles, I would contend, are not inventions, but discoveries, and they reveal an underlying natural schema that we could never have predicted through speculation alone. In fact, were it not for our powers of intellect and observation, in combination with a predilection for mathematics, we would not even know they exist.
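
The principle of least time can even be demonstrated numerically. In the hypothetical set-up below (my own sketch, not Frayn’s or Feynman’s), light crosses from a fast medium into a slow one, and simply minimising the total travel time over all possible crossing points reproduces Snell’s law of refraction.

```python
import math

# Light travels from (0, a) in medium 1 (speed v1) to (d, -b) in medium 2
# (speed v2), crossing the interface y = 0 at some point x. Fermat's
# principle says the actual path minimises total travel time; Snell's law
# (sin θ1 / v1 = sin θ2 / v2) then falls out of the minimisation.

def travel_time(x, a, b, d, v1, v2):
    return math.hypot(x, a) / v1 + math.hypot(d - x, b) / v2

def least_time_crossing(a, b, d, v1, v2):
    lo, hi = 0.0, d
    for _ in range(200):          # ternary search: travel time is convex in x
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if travel_time(m1, a, b, d, v1, v2) < travel_time(m2, a, b, d, v1, v2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Air to glass: v2 < v1, so the path bends toward the normal.
a, b, d, v1, v2 = 1.0, 1.0, 2.0, 1.0, 2 / 3   # refractive index 1.5
x = least_time_crossing(a, b, d, v1, v2)
sin1 = x / math.hypot(x, a)
sin2 = (d - x) / math.hypot(d - x, b)
print(sin1 / v1, sin2 / v2)   # equal, as Snell's law demands
```

Nothing in the code ‘knows’ Snell’s law; the law emerges from the minimisation, which is exactly what makes these principles feel like discoveries rather than inventions.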

Footnote: James Gleick in his biography of Feynman, GENIUS, gives the impression that these 2 phenomena could be different manifestations of the same underlying 'principle of least action' that Feynman even employed in his examination of quantum mechanics. Anyone who is familiar with both these phenomena will appreciate the connection - it's like the light or the particles choose their own path - as Gleick expounds quite eruditely, but without the equations.


Addendum 1: Since I published this post, I've read the lecture series that Feynman gave in New Zealand in 1983, published under the title, QED, The Strange Theory of Light and Matter. In his first lecture, he gives a brilliant exposition (in plain English) of how light reflected by a mirror 'follows the least time path', and how this can be explained by quantum mechanics. I need to add the caveat that no one understands quantum mechanics, a point that Feynman is at pains to make himself, right at the start of his lectures.

Addendum 2: I wrote a later post on 'least action' which is more erudite.

Friday, 14 March 2008

Imagination

I first came across the term ‘intentionality’ as a philosophical term when I was reading John Searle’s Mind, and I had difficulties with it until I substituted the term imagination. I had forgotten about this until I read another account in The Oxford Companion to the Mind (edited by Richard L. Gregory, 1987), thinking I was going to read about intentionality as a mental purpose, as it would be used in ordinary language. Once again, forgetting all about my experience with John Searle, I was about halfway through the discourse when I found myself substituting the term imagination, and then I realised: I had taken this mental journey before.

This is an example of how I believe we integrate new knowledge into existing knowledge. When we come across a new experience or phenomenon, or new information, we axiomatically look for something we are already familiar with that we can analogise it with. It’s also why metaphor is such a favoured form of description and is so readily adopted and understood without extraneous explanations. So, in the absence of anything better, I substitute imagination for ‘intentionality’ but the more I read the more I conclude that they are the same thing. According to The Oxford Companion to the Mind, intentionality is only evident in mental states and is about 'aboutness’. When I read Searle’s account and the examples he gave of someone being able to conceptualise a real event that had occurred in history or in another place or another time, or an event that had never occurred at all, then that’s imagination. Also I argue that this is not unique to humans. The fact that many species can plan and co-operate, especially when hunting, suggests that they can ‘imagine’ the outcome they are trying to achieve.

I once had a brief correspondence with Peter Watson, author of A Terrible Beauty (an extraordinarily erudite and comprehensive book of the ‘ideas and minds that shaped the 20th Century’), who contended that words like ‘imagine’ and ‘introspection’ have outlived their usefulness, and that they no longer fit in with our comprehension of our mental states, and, possibly, are even misleading. I had serious problems with this dismissal of our inner world, as I saw it. Also he talked about ‘imagination’ as if he really meant ‘creativity', which is an essential but limited aspect of how we imagine (more on that below). When I quizzed him on this, he explained that his real complaint was that he found words like ‘imagination’ vague; according to Watson, 'imagination' was even more vague than ‘mind’. (I must say in passing that I have the utmost respect for Peter Watson, even though we’ve never met, and he responded good-naturedly to all my criticisms.)

But I think the reason that people are uncomfortable with terms like imagination, introspection and mind is that they defy objectivity by their very nature. You cannot talk with any validity about anyone's imagination, introspection or mind, except your own. Our inner world is subjectivity incarnate, yet, because we all have one, we can talk about it in a common language.

In my view, ordinary people know what we mean by ‘imagination’ and ‘introspection’ even if no one can explain how it happens, and they remain essential components of our psychological lives. In my posting, The Meaning of Life, I allude to Watson’s philosophical viewpoint by referring to an extreme position that considers our internal world to be so dependent on the external world, that it makes the inner world we all experience irrelevant (some people do take this view). In fact, Watson did make the point that our inner world is completely dependent on the external world – no one can really claim that anything is created independently of the outer world. And he said that this was his salient point: no one ever came up with a valid theory or idea by introspection alone, without considering external factors. I would agree with him on this, but it doesn’t mean that imagination and introspection have no role to play.

Also he has a point, regarding the dependence of our inner world on the outer world, when one considers that we all think in a language and we all gain our language from the external world (I make this point in my posting on Self). Language is one of the means, arguably the most important, but not the only one, that allows an interaction between the inner and outer world, and it goes both ways – we are not passive participants in the world. And yes, our imagination is fuelled by external events, yet, without imagination there would be no art, in any form, and, in particular, no stories; not only for the creator, but also for the recipient.

Being a storyteller myself, this is something I can talk about with some experience. I find it interesting that a writer can compose a story that so engrosses the reader that he or she actually forgets they’re reading. How does one achieve this? It’s simple in principle, but very difficult in execution: one allows the reader to create an imaginary world that he or she inhabits so successfully, they become emotionally involved as if it was real, or as if they were in a dream. It's called suspension of disbelief - essential to the success of any story. And I think dreaming is the link, because writing a story is not unlike having a dream, only you consciously interfere with it, and that’s what ‘creating’ is really all about. I could elaborate on this, but this is not the place.

While it seems I’m getting off the track, I made a point in another posting, The Universe’s Interpreters, that the reason films, video and computer games have not made novels extinct (weakened yes, but not yet endangered) is because we can so readily and effortlessly create pictures in our minds. I contend, though I have no scientific evidence, that if we didn’t think in a language, we would think in images. The basis for this contention is that we dream in images and metaphor, and I believe that is our primal language. (Freudian yes, but without referencing Freud.) So much of imagination involves imagery – a point that Searle somehow misses when he discusses intentionality, yet it is obvious. (It occurred to me that Searle had the same aversion to the term that Watson revealed.) Searle does make the point, however, that intentionality can involve desires and beliefs, which, of themselves, can be manifested in sensory form (he gives the examples of hunger and thirst).

It’s only humans who create art, and it is often proposed that the emergence of art is the first indication of our evolutionary separation from other homo-related species. But imagination, along with the other conscious attributes we have, is not unique to humans; what is unique is our ability to exploit these attributes and project them into the external world.

It’s not for nothing that Searle claims the problem of intentionality is as great as the problem of consciousness – I would contend they are manifestations of the same underlying phenomenon – as though one is passive and the other active. Searle wrote his book, Mind, in part, to offer explanations for these phenomena (although he added the caveat that he had only scratched the surface), whereas I make no such attempt. That’s not to say that in the future we won’t know more, but I also think that our reductionist approach will find its own limitations – I predict we will uncover more knowledge only to reveal more mysteries, as we have done with quantum mechanics.

However, from this premise, I would say that imagination, or ‘intentionality’ (if I interpret it correctly) is a manifestation of mental activity, and one that we are unlikely to find in a machine, but that’s another topic for another day.