An unusual oxymoron, I know, but, like anything delivered tongue-in-cheek, it contains an element of serious conjecture. Many years ago (a quarter of a century), I read a book on anthropology, which left no great impression on me except that the author said there were 2 types of culture worldwide. One cultural type had a religion based on a ‘creator’ or creation myth, and the other had a religion based on ancestor worship.
I would possibly add a third, which is religion based on the projection of the human psyche. In a historical context, religion has arisen primarily from an attempt to project our imagination beyond the grave. Fascination with the afterlife started early for humans, if ritual burials are anything to go by. By extension, the God of humans, in all the forms that we have, is largely manifest in the afterlife. The only ‘Earthly’ experiences of God or Gods occur in mythology.
Karen Armstrong, in her book, A History of God, demonstrates how God has evolved over time as a reflection of the human psyche. I know that Armstrong is criticised on both sides of the religious divide, but A History of God is still one of the best books on religion I have read. It’s one of her earliest publications, written when she was still disillusioned by her experience as a Carmelite nun. A common theme in Armstrong’s writing is the connection between religion and myth.
I’ve referred to Ludwig Feuerbach in previous posts for his famous quote: God is the outward projection of the human psyche (I think he said ‘man’s inner nature’), so I’ve taken a bit of licence; but I think that’s as good a definition of God as you’re going to get. Feuerbach also said that ‘God is in man’s image’ not the other way round. He apparently claimed he wasn’t an atheist, yet I expect most people today would call him an atheist.
For most people who have God as part of their existential belief, it is manifest as an internal mental experience yet is ‘sensed’ as external. Neurologist Andrew Newberg of the University of Pennsylvania has demonstrated, via brain-imaging experiments, that people’s experience of ‘religious feelings [God] do seem to be quite literally self-less’. This is why I claim that God is purely subjective, because everyone’s idea of God is different. I’ve long argued that a person’s idea of God says more about them than it says about God.
I would make an analogy with colour, because colour only occurs in some sentient creature’s mind, even though it is experienced as being external to the observer. There is, of course, an external cause for this experience, which is light reflected off objects. People can equally argue that there is an external cause for one’s experience of God, but I would argue that that experience is unique to that person. Colour can be tested, whereas God cannot.
Contrary to what people might think, I’m not judgemental about people’s belief in God – it’s not a litmus test for anything. But if God is a reflection of an individual’s ideal then judge the person and not their God.
When I was 16, I read Albert Camus’ La Peste (The Plague) and it challenged my idea of God. At the time, I knew nothing about Camus or his philosophy, or even his history with the French resistance during WWII. I also read L’Etranger (The Outsider) and, in both books, Camus, through his protagonists, challenges the Catholic Church. In La Peste, there is a scene where the 2 lead characters take a swim at night (if my memory serves me correctly) and, during a conversation, one of them conjectures that it would possibly be better for God if we didn’t believe in God. Now, this may seem the ultimate cynicism but it actually touched a chord with me at that time and at that age. A God who didn’t want you to believe in God would be a God with no ego. That is my ideal.
Philosophy, at its best, challenges our long-held views, such that we examine them more deeply than we might otherwise consider.
Paul P. Mealing
Friday, 22 April 2011
Sentience, free will and AI
In the 2 April 2011 edition of New Scientist, the editorial was titled Rights for robots; We will know when it’s time to recognise artificial cognition. Implicit in the header and explicit in the text is the idea that robots will one day have sentience just like us. In fact, they highlighted one passage: “We should look to the way people treat machines and have faith in our ability to detect consciousness.”
I am a self-confessed heretic on this subject because I don’t believe machine intelligence will ever be sentient, and I’m happy to stick my neck out in this forum so that one day I can possibly be proven wrong. One of the points of argument that the editorial makes is that ‘there is no agreed definition of consciousness’ and ‘there’s no way to tell that you aren’t the only conscious being in a world of zombies.’ In other words, you really don’t know if the person right next to you is conscious (or in a dream) so you’ll be forced to give a cognitive robot the same benefit of the doubt. I disagree.
Around the same time as reading this, I took part in a discussion on Rust Belt Philosophy about what sentience is. Firstly, I contend that sentience and consciousness are synonymous, and I think sentience is pretty pervasive in the animal kingdom. Does that mean that something that is unconscious is not sentient? Strictly speaking, yes, because I would define sentience as the ability to feel something, either emotionally or physically. Now, we often feel something emotionally when we dream, so arguably that makes one sentient when unconscious. But I see this as the exception that makes my definition more pertinent rather than the exception that proves me wrong.
In First Aid courses you are taught to squeeze someone’s fingers to see if they are conscious. So to feel something is directly correlated with consciousness, and that’s also how I would define sentience. Much of the brain’s activity is subconscious, even to the extent that problem-solving is often executed subliminally. I expect everyone has had the experience of trying to solve a puzzle, then leaving it for a period of time, only to solve it ‘spontaneously’ when they next encounter it. I believe the creative process often works in exactly the same way, which is why it feels so spontaneous and why we can’t explain it even after we’ve done it. This subconscious problem-solving is a well-known cognitive phenomenon, so it’s not just a ‘folk theory’.
This complex subconscious activity observed in humans is, I believe, quite different from the complex instinctive behaviour that we see in animals: birds building nests, bees building hives, spiders building webs, beavers building dams. These activities seem ‘hard-wired’, to borrow from the AI lexicon as we tend to do.
A bee does a complex dance to communicate where the nectar is. No one believes that the bee cognitively works this out the way we would, so I expect it’s totally subconscious. So if a bee can perform complex behaviours without consciousness, does that mean it doesn’t have consciousness at all? The obvious answer is yes, but let’s look at another scenario. The bee gets caught in a spider’s web and tries desperately to escape. Now, I believe that in this situation the bee feels fear and, by my definition, that makes it sentient. This is an important point because it underpins virtually every other point I intend to make. Now, I don’t really know if the bee ‘feels’ anything at all, so it’s an assumption. But my assumption is that sentience, and therefore consciousness, started with feelings and not logic.
In last week’s issue of New Scientist, 16 April 2011, the cover features the topic, Free Will: The illusion we can’t live without. The article, written by freelance writer Dan Jones, is headed The free will delusion. In effect, science argues quite strongly that free will is an illusion, but one we are reluctant to relinquish. Jones opens with a scenario in 2500 when free will has been scientifically disproved and human behaviour is totally predictable and deterministic. Now, I don’t think there’s really anything in the universe that’s totally predictable, including the remote possibility that Earth could one day be knocked off its orbit, but that’s the subject of another post. What’s more relevant to this discussion is Jones’ opening sentence, where he says: ‘…neuroscientists know precisely how the hardware of the brain runs the software of the mind and dictates behaviour.’ Now, this is purely a piece of speculative fiction, so it’s not necessarily what Jones actually believes. But it’s the implicit assumption that the brain’s processes are identical to a computer’s that I find most interesting.
The gist of the article, by the way, is that when people really believe they have no free will, they behave very unempathetically towards others, amongst other aberrational behaviours. In other words, a belief in our ability to direct our own destiny is important to our psychological health. So, if the scientists are right, it’s best not to tell anyone. It’s ironic that telling people they have no free will makes them behave as if they don’t, when allowing them to believe they have free will gives their behaviour intentionality. Apparently, free will is a ‘state of mind’.
On a more recent post of Rust Belt Philosophy, I was reminded that, contrary to conventional wisdom, emotions play an important role in rational behaviour. Psychologists now generally believe that, without emotions, our decision-making ability is severely impaired. And, arguably, it’s emotions that play the key role in what we call free will. Certainly, it’s our emotions that are affected if we believe we have no control over our behaviour. Intentions are driven as much by emotion as they are by logic. In fact, most of us make decisions based on gut feelings and rationalise them accordingly. I’m not suggesting that we are all victims of our emotional needs like immature children, but that the interplay between emotions and rational thought is the key to our behaviours. More importantly, it’s our ability to ‘feel’ that not only separates us from machine intelligence in a physical sense, but makes our ‘thinking’ inherently different. It’s also what makes us sentient.
Many people believe that emotion can be programmed into computers to aid them in decision-making as well. I find this an interesting idea and I’ve explored it in my own fiction. If a computer reacted with horror every time we were to switch it off, would that make it sentient? Actually, I don’t think it would, but it would certainly be interesting to see how people reacted. My point is that artificially giving AI emotions won’t make it sentient.
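To make the thought experiment concrete, here is a minimal sketch of what such ‘programmed horror’ amounts to. The class and its responses are entirely my invention for illustration, not anything proposed in the editorial:

```python
# A hypothetical sketch of 'programmed emotion'. The 'horror' below is just
# a branch on an input; nothing in the machine feels anything.
class Machine:
    def on_event(self, event: str) -> str:
        if event == "shutdown_requested":
            return "Please don't switch me off!"  # scripted distress, not felt fear
        return "OK."

print(Machine().on_event("shutdown_requested"))  # Please don't switch me off!
```

However convincing the performance might be, the code merely maps inputs to outputs, which is precisely why I doubt it would amount to sentience.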
I believe feelings came first in the evolution of sentience, not logic, and I still don’t believe that there’s anything analogous to ‘software’ in the brain, except language and that’s specific to humans. We are the only species that ‘downloads’ a language to the next generation, but that doesn’t mean our brains run on algorithms.
So evidence in the animal kingdom, not just humans, suggests that sentience, and therefore consciousness, evolved from emotions, whereas computers have evolved from pure logic. Computers are still best at what we do worst, which is manipulate huge amounts of data; that’s why the Human Genome Project actually took less time than predicted. And we still do best at what they do worst, which is make decisions based on a host of parameters, including emotional factors as well as experiential ones.
Sunday, 3 April 2011
Why we shouldn’t take religion too seriously
This arose from an article in last week’s New Scientist titled Thou shalt believe – or not by Jonathan Lanman (26 March 2011, pp.38-9). Lanman lectures at the School of Anthropology and Keble College, Oxford University. He’s giving a talk, entitled Atheism Explained, at St. Mary’s University College Twickenham, UK on 5 April (a couple of days away).
Lanman spent 2008 studying atheism in the US, UK and Denmark, as well as online. As a result of his research, Lanman made a distinction between what he calls ‘non-theism’ and ‘strong atheism’, whereby non-theists are effectively agnostic – they don’t really care – and strong atheists vigorously oppose religious belief on moral and political grounds. He found a curious correlation. In countries that are strongly and overtly religious, strong atheism is more prevalent, whereas in countries like Sweden, where religion is not so strong, the converse is true. In his own words, there is a negative correlation between strong atheism and non-theism.
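For readers unfamiliar with the term, a negative correlation simply means that as one measure rises across countries, the other falls. A minimal sketch (the figures are invented purely for illustration; they are not Lanman’s data):

```python
# Hypothetical country-level figures, invented purely for illustration
# (NOT Lanman's data): strong atheism is common where religion is overt,
# and rare where, as in Sweden, religion is weak.
from statistics import correlation  # requires Python 3.10+

strong_atheism = [0.8, 0.7, 0.3, 0.2]  # higher in overtly religious countries
non_theism     = [0.1, 0.3, 0.7, 0.9]  # higher where religion is weak

print(round(correlation(strong_atheism, non_theism), 2))  # -0.99
```

A coefficient near -1 is the statistical shape of Lanman’s observation: the two stances tend to crowd each other out.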
I live in Australia, where there is a pervasive I-don’t-care attitude towards religious belief, so we are closer to the Swedish model than the American one. In fact, when I visited America a decade ago (both pre and post 9/11, as well as during), I would say the biggest difference between Australian and American culture is in religion. I spent a lot of time in Texas, where it was almost a culture shock. My experience with the blogosphere has only reinforced that impression.
What is obvious is that where religion takes on a political face then opposition is inevitable. In Australian politics there are all sorts of religious flavours amongst individual politicians, but they rarely become an issue. This wasn’t the case a couple of generations ago when there was a Protestant/Catholic divide through the entire country that started with education and permeated every community, including the small country town where I grew up. That all changed in the 1960s, and, with few exceptions, no one who remembers it wants to revisit it.
Now there is a greater mix of religions than ever, and the philosophy is largely live and let live. Even when I was a child, religion was seen as something deeply personal and intimate, not to be invaded or even shared, and that’s an attitude I’ve kept to this day. Religion, to me, is part of someone’s inner world, totally subjective, influenced by culture, yes, but ultimately personal and unique to the individual.
If people can’t joke about religion in the same way we joke about nationality, or if they feel the need to defend their beliefs in blood, then they are taking their religion too seriously. Even some atheists, in my view, take religion too seriously, when they fail, or refuse, to distinguish between secular adherents to a faith and fundamentalists. If we want to live together, then we can’t take religion too seriously no matter what one’s personal beliefs may be.
Sunday, 20 March 2011
Ayaan Hirsi Ali’s story
I’ve just completed reading Ayaan Hirsi Ali’s autobiography, Infidel. It’s the latest book in my book club (refer my blog roll), following on from another autobiography by another refugee, Anh Do, The Happiest Refugee. Do is a stand-up comic and television celebrity in Australia, and his brother, Khoa, is a successful filmmaker and former Young Australian of the Year. They are ‘boat people’, who are stigmatised in this country, and Khoa was actually dangled over the side of a boat by pirates when he was only 2 years old. It has to be said that our major political parties show a clear deficit in moral and political courage on the issue of ‘boat people’.
But I’ve detoured before I’ve even got started. We, in the West, live in a bubble, though, occasionally, through television, films and books, like Hirsi Ali’s, we get a glimpse into another world that the rest of us would call hell. And this hell is not transient or momentary for these people, but relentless, unforgiving and even normal for those who grow up in it. Hirsi Ali is one of the few people who has straddled these 2 worlds, and that makes her book all the more compelling. As Aminata Forna wrote in the Evening Standard: “Hirsi Ali has invited [us] to walk a mile in her shoes. Most wouldn’t last a hundred yards.”
There are many issues touched on in her story, none perhaps more pertinent than identity, but I won’t start there. I will start with the apparent historical gap between some Islamic cultures and the modern Western world – a clash of civilisations, if you like. I remember the years between my teens and mid twenties were the most transformational, conflicted and depressing in my life. Like many of my generation, it was a time when I rejected my parents’ and society’s values, not to mention the religion I had grown up with, and sought a world view that I could call my own. To some extent that’s exactly what Hirsi Ali has done, only she had to jump from a culture still imbued with 6th Century social mores into the birth of the third millennium. I can fully understand what drove her, but, looking back on my own coming-of-age experience, I doubt that I could emulate her. What she achieved is a monumental leap compared to my short jump. For me, it was generational; for her, it was trans-cultural and it spanned millennia.
Much of her book deals with the treatment of women in traditional Muslim societies, treated, in her own words, as ‘minors’ not adults. One should not forget that the emancipation of women, from vassals to independent, autonomous beings with their own rights, has been a very lengthy process in Western society. Most societies have been historically patriarchal, in both the East and the West. The perception and treatment of women as second-class citizens is not confined to Islamic societies by any means. But it does appear that many Islamic cultures have the most barbaric treatment of women (enshrined in law in many countries) and are the most tardy in giving women the social status they deserve, which is equality with men.
This attitude, supported by quotes from the Qur’an, demonstrates how dangerous and misguided it is to take one’s morals from God, because a morality supposedly given by God, in scripture, can’t be challenged and takes no account of individual circumstances, the evolution of cultural norms, progress in scientific knowledge or empathy for ‘others’. And this last criterion is possibly the most important, because it is the ability to treat people outside one’s religion as ‘others’ that permits bigotry, violence and genocide, all in the name of one’s God. This is so apparent in the violence that swept through Hirsi Ali’s home country, Somalia, and became the second most salient factor, I believe, in her rejection of her own religion.
When I first saw Hirsi Ali interviewed on TV (7.30 Report, ABC Australia) after she left Holland for America, she made the statement that Islam could never coexist with a Western secular society, logically based on her experience in Holland. In an interview I heard on the radio last year (also in Australia, with Margaret Throsby), I felt she had softened her stance, and she argued that Muslims could live in a secular society. She was careful to make a distinction between Islam as a religion and Islam as political ideology (refer my post Dec. 2010). My personal experience of Muslims is that they are as varied in their political views as any other group of people. I know liberal Muslims, possibly because I hold liberal views, so that should not be surprising. But it gives me a different view to those who think that all Muslims are fundamentalist Islamists, or potentially so. One of Hirsi Ali’s messages is that an over-dependence on tolerance in a secular society can cause its own backlash.
I’ve written elsewhere (The problems with fundamentalism, Jan. 2008) that the limit of tolerance is intolerance of others. In other words, I am intolerant of intolerance. When Muslims, or anyone else of political persuasion, start to preach intolerance towards any other group, the opposition towards that intolerance in a healthy secular society can be immense. Australia experienced that at a national level about a decade ago, and it was ugly. Xenophobia, it would appear, is very easily aroused in almost any nation. People who preach hatred and bigotry, no matter who they are or which group they represent, and no matter how cleverly they disguise their rhetoric, should all be treated the same – they should be refuted and denounced in the loudest voices at the highest levels of authority.
But, as the events in Somalia demonstrate, it’s not just religion that can inflame or justify violence. Clan differences are enough to justify the most heinous crimes. All through her story, Hirsi Ali describes how everyone could find fault with every other group they came in contact with. Muslims and Africans are not alone in this prejudicial bias – I grew up with it in a Western secular society. The more insular a society is, the more bigoted it is. This is why I agree with Hirsi Ali that children should not be segregated in their education. The more children mix with other ethnicities, the less insular they become in their attitudes towards other groups.
In a post I wrote on Evil (one of my earliest posts, Oct. 2007) I expounded on the idea that most of the atrocities committed in the last century, and every century beforehand for that matter, were based on some form of tribalism or an ingroup-outgroup mentality. This tribalism could be familial, religious, political, ethnic or national, but it revolves around the idea of identity. We underestimate how powerful this is because it’s almost subconscious.
Hirsi Ali’s book is almost entirely about identity and her struggle to overcome its stranglehold on her life. All the role models in her young life, both female and male, were imbued with the importance and necessity of identity with her clan and her religion. In her life, religion and culture were inseparable. Her grandmother made her learn her ancestry off by heart because it might one day save her life, and, in fact, it did when she was only 20 years old and a man held a knife to her throat. By reciting her ancestry back far enough she was able to claim she was his ‘sister’ and he let her go.
People often mistakenly believe that their conscience is God whispering in their mind’s ear, when, in fact, it’s almost entirely socially and culturally formed, especially when we are children. It’s only as adults that we begin to question the norms we are brought up with, and then only when we are exposed to other social norms. A way that societies tend to forestall this ‘questioning’ is to instil a sense of their cultural ‘superiority’ over everyone else’s. This comes across so strongly in Hirsi Ali’s book, and I recognised it as part of my own upbringing. To me, it’s a sign of immaturity that someone can only justify their own position, morally, intellectually or socially, by ridiculing everyone else’s.
One of the strongest influences on Hirsi Ali and her sister, Haweya, was the Western novels that they were exposed to: not just literary standards but pulp-fiction romances. It reinforced my view that storytelling, and art in general, is the best medium to transmit ideas. It was this exposure to novels that led them to believe that there were other cultures and other ways of living, especially for women. Stories are what-ifs – they put us in someone else’s shoes and challenge our view of the world. It’s not surprising that some of the world’s greatest writers have been persecuted for their subversiveness.
But this leads to the almost heretical notion that only a society open to new ideas can progress out of ossification. If there is one singular message from Hirsi Ali’s book, it’s that fundamentalism (of any stripe) not only has to be challenged, but overcome, if societies want to move forward and evolve for the betterment of everyone, and not just for those who want to hold the reins of power.
The real gulf that Hirsi Ali jumped was not religious but educational. I’ve argued many times that ignorance is the greatest enemy facing the 21st Century. Religious fundamentalism is arguably the greatest obstacle to genuine knowledge and rational thinking in the world today. Somewhat surprisingly, this is just as relevant to America as it is to any Islamic nation. The major difference between Islamic fundamentalism and Christian fundamentalism is geography, not beliefs.
Hirsi Ali is foremost a feminist. She once argued that Islam and the West can’t coexist, but she has since softened that stance. Perhaps, like me, she has met Muslim feminists who have found a way to reconcile their religious beliefs with their sense of independence and self-belief. Arguably, self-belief is the most important attribute a human being can foster. The corollary to this is that any culture that erodes that self-belief is toxic to itself.
I’ve written elsewhere (care of Don Cupitt, Sep. 09) that the only religion worth having is the one that you have hammered out for yourself. You don’t have to be an atheist to agree with Hirsi Ali’s basic philosophy of female emancipation, but you may have to challenge some aspects of scripture, both Christian and Islamic, if you want to live what you believe, which is what she has done.
Wednesday, 2 March 2011
A discussion on WikiLeaks without Assange
This is, in effect, a follow-up from a previous post on WikiLeaks (The forgotten man, last month), though from a different point of view. It’s a truly international discussion with 3 participants from the US, one from Iceland and one from Berlin, chaired in front of a live TV audience in Australia. This discussion is more diverse than the Four Corners programme I referenced in my earlier post, and, arguably, more balanced as well.
When Assange was first criticised for endangering lives, I admit I considered that to be irresponsible, but events have revealed, whether by good luck or good management, that those concerns have not materialised. This aspect of the debate on WikiLeaks is discussed at length in this programme. The other thing that is brought out in this discussion is that you really can’t preach transparency if you can’t practise it.
But I think the most significant aspect of all this is how the internet has changed the way information can be delivered. Closing down WikiLeaks would be like trying to put the genie back into the bottle. Whatever happens to Assange, the world’s media will never be the same again. WikiLeaks has changed the rules and I don’t think, short of totalitarian measures, they can be reversed.
Addendum: For the latest refer this post (16 August 2012)
Saturday, 19 February 2011
Metaphysics in mathematics revisited
I recently wrote a post on E. Brian Davies’ book, Why Beliefs Matter (Metaphysics in mathematics, science and religion). Davies is Professor of Mathematics at King’s College London, so his knowledge and erudition on the subject far outweigh mine. I feel that imbalance was not reflected in that post, so this is an attempt to redress it.
Davies’ book is structured in 5 parts: The Scientific Revolution; The Human Condition; The Nature of Mathematics; Sense and Nonsense; and Science and Religion.
Davies addresses mathematical Platonism in 2 parts: The Human Condition and The Nature of Mathematics. Due to the nature of my essay, I believe I gave him short shrift and, for the sake of fairness as well as completeness, I seek to make amends.
For a start, Davies discusses Platonism in its wider context, not just in relation to mathematics, but in its influence on Western thought, regarding religion as well as science. Many people have argued that Aquinas and Augustine were both influenced by Platonism, to the extent that Earth is an imperfect replica of Heaven where the perfect ‘forms’ of all earthly entities exist. There is a parallel view expressed in some interpretations of Taoism as well. Note that one doesn’t need a belief in ‘God’ to embrace this viewpoint, but one can see how it readily marries into such a belief.
Davies discusses at length Popper’s 3 worlds: World 1 (physical); World 2 (mental); and World 3 (cultural). Under subsection 2.7, Plato, Popper, Penrose, he compares Popper’s 3 worlds with Penrose’s, which I expounded on in my previous post: Physical, Mental and Mathematical (Platonic). In fact, Davies concludes that they are the same. I’m sure Penrose would disagree, and so do I.
There is a relationship between mathematics and the physical world that doesn’t exist with other cultural ideas. Even non-Platonists, like Paul Davies and Albert Einstein, acknowledge that the correlation between mathematical relationships and physical phenomena (like relativity and quantum mechanics for example) is a unique manifestation of human intelligence. In his book, The Mind of God (a reference to Hawking’s famous phrase) Paul Davies devotes an entire chapter to this topic, entitled The Mathematical Secret.
On the other hand, Brian Davies produces compelling arguments that mathematics is cultural rather than Platonic. He compares it to other cultural entities like language, music, art and stories, all of which are products of the human brain. In one of his terse statements in bold type he says: Mathematics is an aspect of human culture, just as are language, law, music and architecture.
But, as I’ve argued in one of my previous posts (Is mathematics evidence of a transcendental realm? Jan. 08), there is a fundamental difference. No one else could have written Hamlet other than Shakespeare, and no one else could have composed Beethoven’s Ninth except Beethoven, but someone else could have discovered Schrödinger’s equation and someone else could have discovered Riemann’s geometry. These mathematical entities have an objectivity that great works of art don’t.
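For concreteness, here is the (time-dependent) equation in question, in its standard textbook form:

$$ i\hbar\,\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r},t)\right]\Psi(\mathbf{r},t) $$

Whoever got there first would have arrived at this same determinate object; no second author would ever have produced the same Hamlet.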
Likewise, I think that comparisons with language are misleading. No one has mathematics as their first language, unless you want to include computers. Deaf people can have sign language as a first language, but mathematics is not a communicative language in the same way that first languages are. In fact, one might argue that mathematics is an explanatory or analytic language; it has no nouns or verbs, no subjects or predicates. Instead it has equalities and inequalities, propositions, proofs, conjectures and deductions. Even music is more communicative than mathematics, which leads to another analogy.
Is music the score on the page, the sounds that you hear or the emotion it creates in your head? Music only becomes manifest when it is played on a musical instrument, even if that musical instrument is the human voice. Likewise mathematics only becomes manifest when it is expressed by a human intelligence (and possibly a machine intelligence). But the difference is that mathematical concepts have been expressed by various cultures independently of each other. Mathematical concepts like quadratic equations, Pascal’s triangle and logarithms have been discovered (or invented) more than once.
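Pascal’s triangle illustrates the point nicely, because the whole structure follows from a single additive rule that was found independently in China, Persia and Europe. A minimal sketch (my illustration, not an example from Davies’ book):

```python
# Pascal's triangle from the one rule rediscovered across cultures:
# every interior entry is the sum of the two entries above it.
def pascal_rows(n):
    row = [1]
    for _ in range(n):
        yield row
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]

for r in pascal_rows(5):
    print(r)  # [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]
```

Whoever applies the rule, in whatever century, generates exactly the same triangle, which is the sense in which it seems discovered rather than authored.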
Davies makes the point that invention is a necessary part of mathematics, and I wouldn’t disagree. But he goes further, and argues that the distinction between invention and discovery cannot be readily drawn, by comparing mathematics to material inventions. He argues that a stone axe may have been the result of an accidental discovery, and Galileo’s pendulum clock was as much a discovery as an invention. I would argue that Galileo discovered a principle of nature that he could exploit and people might say the same about mathematical discoveries, so the analogy can actually work against Davies’ own argument if one rewords it slightly.
In my previous post, I did Davies an injustice when I referred to his conclusion about mathematical Platonism being irrelevant. In section 3.2, The Irrelevance of Platonism, Davies explains how some constructivist theories (like Jordan algebras) don’t fit into Platonism by definition. I don’t know anything about Jordan algebras so I can’t comment. But the constructivist position, as best I understand it, says that the only mathematics we know is what we’ve created. A Platonist will argue that the one zillionth digit of pi exists even if no one has calculated it yet, whereas the constructivist says we’ll only know what it is when we have calculated it. Both positions are correct, but when it comes to proofs, there is merit in taking the constructivist approach, because a proof is only true when someone has taken the effort to prove it. This is why, if I haven’t misconstrued him, Davies calls himself a mathematical ‘pluralist’: he can adjust his position from a classicalist to a formalist to a constructivist depending on the mathematics he’s examining. A classicalist would be a Platonist, if I understand him correctly.
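The pi example can even be made concrete in code. Below is a sketch of a streaming ‘spigot’ algorithm (after Gibbons; the illustration is mine, not Davies’) that yields the decimal digits of pi one at a time:

```python
# Streaming spigot for the digits of pi (after Gibbons). On the constructivist
# view, each digit is 'known' only at the moment the generator yields it;
# the Platonist says it was there all along.
def pi_digits():
    q, r, t, j = 1, 180, 60, 2
    while True:
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u,
                      j + 1)

gen = pi_digits()
print([next(gen) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Both camps agree on what the program prints; they disagree only about whether the zillionth digit ‘exists’ before anyone runs the loop that far.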
I still haven’t done Davies justice, which is why I recommend you read his book. Even though I disagree with him on certain philosophical points, his knowledge is far greater than mine, and the book, in its entirety, is a worthy contribution to philosophical discourse on mathematics, science and religion, and there aren’t a lot of books that merit that combined accolade.