Actually, the competition (run over the 8-9 April weekend) was called the SFL Challenge (Sci-Fi London Challenge), which I came across in New Scientist a few weeks earlier. You had to register, and you were given a title and a line of dialogue from which to write a 2000-word short story in 48 hours (actually 50). In reality, you only needed 24 once you got the bit between your teeth, though getting to that point may itself take 24 hours. I was lucky in that the story started for me as soon as I put pen to paper. I say 'lucky' because that doesn't normally happen. It could also mean that the story is complete crap, but obviously I don't think so, otherwise I wouldn't be willing to post it here.
This is the complete opposite of the science fiction I normally write, or used to write (I haven't written anything in the past few years), in that it verges on real science, whereas my novel-length attempts I would call science fantasy as opposed to so-called hard-core sci-fi. You’ll understand what I mean if you read it. It could almost actually happen. In fact, similar ‘incidents’, for want of a better term, have notoriously happened in the past. I can’t say any more without giving the plot away.
Some may think it a touch ambitious for a male writer to tell a story in first person female, or even a conceit, but I have written a female protagonist before, though not in first person. There is little difference, from a writer’s perspective, between writing in third person intimate and in first person. Third person intimate (as it’s called) is distinct from third person omniscient. The point is that in both third person intimate and first person, the story is told from inside a character’s head, so there is little difference really.
FUTURE GUARANTEED
Cue dialogue: The drug could permanently enhance mirror neurons and make people too empathetic.
Word limit: 2000
‘Someone once said that evil should be called lack of empathy. When one looks at history, even recent history, atrocities have always occurred when one group of people demonise another group. In order to commit atrocities like genocide, or even milder forms of human rights abuse like denying sanctuary to refugees, it requires one to completely reject any empathetic feelings.’
Dr Robert immediately had the room’s attention by mentioning the word ‘evil’ in his first utterance, and possibly hit a nerve by introducing a topic that everyone would have an opinion on. I looked around the room to assess the reaction of the seventy-odd people present and noted that, as a woman, I was in a distinct minority. He went on to woo us further by suggesting the highly improbable, if not impossible.
‘So imagine if we could cure evil, so to speak. We could guarantee the future.’ He paused to let the idea sink in. ‘Imagine if we could create a drug that would effectively eliminate evil; that would stop all atrocities in their tracks. A drug that could permanently enhance mirror neurons and make people more empathetic.’
As a science journalist, I knew this was an extraordinary claim. But was it just a blatant, self-promoting publicity grab or did it have substance? I needed to find out. In his next breath, Dr Robert offered me a means to that end.
‘We are looking for volunteers to trial this drug. And we have set up a web site for people to register. It will be a controlled double-blind experiment, so some of the volunteers will be given a placebo. We have already passed this by an ethics committee, which is why I can tell you about it here today.’
The group broke up and we went into an adjoining room for drinks and nibbles. I got myself a glass of white wine and observed Dr Robert from the fringe of the pack.
I guessed that he was in his early forties, reasonably good looking, with a relaxed and confident manner. I got the impression he was used to addressing large groups of people, and possibly corporate boards, using a combination of charm and intellect to persuade others to follow and support him in whatever he wished to pursue. I had to admit he reminded me of my ex-husband, who had used the same combination to sweep me off my feet when I was not quite twenty. More than a decade later, with wisdom and hindsight, I now know that someone who looks and behaves like perfect casting for the romantic lead of the movie playing in your head can, in reality, be self-centred, inconsiderate and insensitive to the needs of others. Richard wasn’t evil, just a bastard, but an empathy-enhancing drug may have performed wonders.
I was abruptly broken out of my reverie when I saw Dr Robert approaching me, carrying a glass of red.
‘I don’t believe we’ve met.’
We both changed hands with our glasses so we could cordially shake. His grip was gentle, which I suspect he reserved for women.
‘Jennifer Law, I’m a journalist with Science of Today.’
‘A very respectable periodical. I understand you have a wider audience than just geeks and science professionals.’
‘I would like to think so. I’ve been told that even some politicians read us.’
‘Well, you must be doing something right.’
‘Or possibly something wrong.’ We both chuckled and I lifted my glass to my lips to hide behind. Then I got serious. ‘To be honest, I’d like to be part of your trial.’
‘So you can report on it. From the inside, so to speak.’
‘Exactly.’
He gave me a look as if he was reassessing me. ‘Well, at least you’re up front.’
‘Yes, I find in the long run, it earns respect.’
He gave me another look, and I believe he liked what he saw. It occurred to me that he possibly liked blondes. From my experience, dark haired men often do.
‘Here’s my card,’ he said, taking it out of an inside pocket.
I looked at it: psychiatrist. It had his mobile number.
‘Thanks Dr Robert. Tomorrow I’ll go to your web site and register.’
‘Call me David.’ And he touched my arm ever so lightly.
We both smiled and he turned his back so he could meld into the crowd.
If I was to be honest, we’d been flirting and despite the alarms going off inside my head, I had to admit I enjoyed it.
The next day I went on-line to register. I noticed that they asked for the usual parameters: age, gender, profession and education level. They also asked, rather unexpectedly, if we could take a week out of our lives to participate. Naturally, I said yes, but I suspect that particular question would have eliminated a lot of potential volunteers before they even registered.
Those of us who were successful were booked into a hotel in the inner city, all expenses paid, and told on the first day to attend an introductory meeting in a ‘function’ room on the top floor. I estimated there were thirty or more of us, varying in age from early twenties to late forties, maybe early fifties, roughly equally divided by gender.
Dr Robert addressed us, saying that we would be divided by ballot into two groups and separated. I assumed that one group would be on the drug and the other on the placebo. Dr Robert told us that even he didn’t know who would be on the drug and who wouldn’t.
A female assistant then proceeded to read out a list of names forming the first group and, as requested, they assembled one by one on the left side of the room (the right side from where Dr Robert and his assistant stood). I was in the second group, so I moved to the right side.
Before dismissing us, Dr Robert told us something about the purpose of the trial. ‘It’s important that you understand that this drug doesn’t enhance empathy per se. It enhances mirror neurons, which actually fire when we observe the activities of others. But it is widely believed that this feature, which is not unique to humans by the way, allows what we call empathy with others. The trial is to specifically observe how or if we can get inside someone else’s head, figuratively speaking.’
I found it intriguing that he didn’t elaborate on how he would do that or how it would be measured. Someone in the other group obviously had the same thought, as they asked that very question.
Dr Robert replied, ‘I can’t answer that as it may affect the results.’ He smiled knowing that his answer would only intrigue us further, but perhaps that was the idea.
We then exited, under the guidance of another two assistants, male this time, through separate doors.
We were given an oily gold liquid in a cup; the sort one usually associates with cough medicine. It had no distinct taste but the texture matched its look. Of course one wondered if its lack of taste indicated that it was the placebo, but I knew that was intuition misleading cognitive deduction. We were told that we would be given the same dose under supervision every day of the trial.
Over the next few days we had no contact with the other group. Under the supervision of our assistant, who called himself Jones, we were involved in discussions about racial issues and societal dynamics. We watched documentaries, mainly concerned with historical events like the civil rights movement in the 1960s and pre-War Europe in the 1930s. I have to admit I was starting to feel acute disappointment, as I could see nothing innovative or novel about this approach. I found myself becoming bored and irritated, with the daily cumulative feeling that I was wasting my time. Also Dr Robert had effectively disappeared and I was beginning to feel that I had been duped. I also began to realise that many in the group felt the same way. If we were taking the drug, as opposed to the placebo, then our mirror neurons were in full synchronicity.
After four days we were told by Jones that we would be doing a role-playing exercise with the other group. We would not be told what our roles were until we met. I noticed that the two groups even occupied separate floors of the hotel and used separate dining rooms. There had been no fraternising at all. Only a fire could have caused us to meet.
The next day we found ourselves in the function room where we had started. This time Dr Robert was nowhere to be seen.
We were going to play a game, and some props were introduced to help us. The props consisted of two partitions, in the form of fences with 2-metre vertical poles about 10 centimetres apart. They were placed about 3 metres from the opposing walls where there were no doors. Half of our group were allocated to stand behind one partition, and half of their group behind the other, which was the one behind us. So both sides had their opposing side standing between them and half of their own group, who were effectively prisoners.
Then those of us who weren’t prisoners were given a list of crimes committed by our opponents against imaginary members of our own group. The crimes included murder, rape, infanticide and torture; the usual accusations associated with war crimes. Each prisoner was given a number, which was associated with a specific crime. Our job, as a group, was to negotiate the release of their prisoners.
Logically, we would exchange prisoners with similar crimes, but everyone, myself included, felt that treating it as a book-keeping exercise didn’t serve justice.
Recollecting events later, I was surprised how seriously we all took it. No one said: It’s only a game. Both the assistants took up the cause for their respective sides, urging us not to give in to our opponents’ demands. I’m not sure how long it went on for, but later that morning the exercise was called off with no prisoners released, and we were allowed to return to our rooms.
Later that day we were called back to the function room for a debriefing. I have to admit I didn’t even want to go back into that room, but I had the feeling that it would be the last time.
This time Dr Robert was present and told us that the trial was over. He said we would all be debriefed over the next 24 hours and allowed to go home.
Dr Robert debriefed me personally. I’m unsure if that was deliberate, but I suspect it was. I have to confess my original attraction, even warmth, towards him felt tainted by the experience he had just put me through.
‘May I call you Jennifer?’
‘Sure,’ I said, feeling raw. ‘Can you tell me if I was on the drug?’
‘There was no drug. Everyone was given a placebo.’
I was so stunned that words would not form in my mind.
‘But, believe it or not, the trial was a success.’
‘How can you say that?’
‘We gave everyone the impression that their mirror neurons would be enhanced, and they were, to the extent that we kept you in a group. Empathy has a dark side in that it causes people to associate more strongly with their own group. It’s as much a cause of evil as an antidote.’ He elaborated, ‘The real purpose of the trial was to show that empathy, through mirror neurons, is a two-edged sword.’
I said no more. I left the room knowing that any romantic feelings I might have felt for Dr Robert had long dissipated. Not because he reminded me of Richard, but because I couldn’t abide his deception, however he may justify it.
Philosophy, at its best, challenges our long-held views, such that we examine them more deeply than we might otherwise consider.
Paul P. Mealing
- Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.
Sunday 19 March 2017
The importance of purpose
A short while ago, New Scientist (Issue: 28 January 2017) had on its cover the headline, The Meaning of Life. The article itself, titled Why am I here? (by Teal Burrell, pp. 30-33), was really about the importance to health of finding purpose in one’s life. I believe this is so essential that I despair when I see hope and opportunity deliberately curtailed, as we do with our treatment of refugees. It’s criminal – I really believe that – because it’s so fundamental to both psychological and physical health. As someone who has often struggled to find purpose, I hold this subject close to my heart.
As the article points out, for many people religion provides a ‘higher purpose’, which is really a separate topic, though not an unrelated one. The author also references Viktor Frankl’s famous book, Man’s Search for Meaning (very early in the piece), which I’ve sometimes argued is the only book I’ve read that should be compulsory reading. The book is based on Frankl’s experience as a Holocaust survivor, which ultimately led to a philosophy and a psychological method (for want of a better term) that he practised as a psychiatrist.
I’ve also read another book of his, The Unconscious God, where he argues that there are 3 basic ways in which we find purpose or meaning in our lives: one, through a relationship; two, through a project; and three, through dealing with adversity. This last seems paradoxical, even oxymoronic, yet it is the premise of virtually every work of narrative fiction that all of us (who watch cinema or TV) imbibe with addictive enthusiasm. I’ve long argued that wisdom doesn’t come from achievements or education but from dealing with adversity in our lives, which is impossible to avoid no matter who you are. It brings to mind Socrates’ (attributed) famous aphorism: the unexamined life is not worth living. If we think about it, we only examine our lives when we fail, so a life without failure is not really much of a life. The corollary to this is that risk is essential to success and to gaining maturity in all things.
Humans are the most socially complex creatures on the planet – take language. I’ve recently read a book, Cosmosapiens: Human Evolution from the Origin of the Universe, by John Hands. It’s as ambitious as its title suggests and it took him 10 years to complete: very erudite and comprehensive, Hands challenges scientific orthodoxies without being anti-science. But his book is not the topic of this post, so I won’t distract you further. One of his many salient points is that humans are unique, not least because of our ability for self-reflection. He contends that we are not the only species with the ability to ‘know’, but we are the only species who ‘know that we know’ (his words) or think about thinking (my words). The point is that cognitively we are distinct from every other species on the planet because we can consider and cogitate on our origins, our mortality and our place in the overall scheme of things, in ways that other species can’t possibly think about.
And language is the key attribute, because, without it, we can’t even think in the way that we all take for granted; yet it's derived from our social environment (we all had to be taught). I understand that children isolated from adults can develop their own language, but, even under these extremely rare circumstances, it requires social interaction to develop. This is a lengthy introduction to the fact that all of us require social interaction (virtually from birth) to have a meaningful life in any way, shape or form. We spend a large part of our lives interacting with others and, to a very large extent, the quality of that interaction determines the quality of our lives.
And this is a convoluted way of reaching the first of Frankl’s ‘ways of finding meaning’: through a relationship. For most of us this implies a conjugal relationship with all that entails. For many of us, in our youth, there is a tendency to put all our eggs in that particular basket. But with age, our perspective changes with lust playing a lesser role, whilst more resilient traits like friendship, reliance and trust become more important, even necessary, in long term relationships, upon which we build something meaningful for ourselves and others. For many people, I think children provide a purpose, not that I’ve ever had any, but it’s something I’ve observed.
I know from personal experience that having a project can provide purpose, and for many people, myself included, it can seem necessary. We live in a society (in the West, anyway) where our work often defines us and gives us an identity. I think this has historical roots: men, in particular, were defined by what they did, often following a family tradition. This idea of a hereditary role (for life) is not as prevalent as it once was, but I suspect it snuffed out the light of aspiration for many. A couple of weeks ago I saw David Stratton: A Cinematic Life, followed by a Q&A with the man himself. David, who is about a decade older than me, came to Australia and made a career as a film critic, becoming one of the most respected, not only in Australia, but in the world. However, the cost was the bitter disappointment expressed by his father for not taking over the family grocery business back in England. Women, on the other hand, were not allowed the luxury of finding their own independent identity until relatively recently in Western societies. It’s the word ‘independent’ that was their particular stumbling block, because, even in my postwar childhood, women were not meant to be independent of a man.
The movie, Up in the Air, starring George Clooney, which I reviewed back in 2010, does a fair job of addressing this issue in the guise of cinematic entertainment. To illustrate my point, I’ll quote from my own post:
The movie opens with a montage of people being sacked (fired) with a voice-over of Clooney explaining his job. This cuts to the core of the movie for me: what do we live for? For many people their job defines them – it is their identity, in their own eyes and the eyes of their society. So cutting off someone’s job is like cutting off their life – it’s humiliating at the very least, suicidally depressing at worst and life-changing at best.
So purpose is something most of us pursue, either through relationships within our family or through our work or both. But many of you will be asking: is there a higher purpose? I can’t answer that, but I’ll provide my own philosophical slant on it.
Socrates (again), who was forced to take his own life (as a consequence of a democratic process, it should be noted) supposedly said, in addition to the well-worn trope quoted above: 'Whether death is a door to another world or an endless sleep, we don’t know'. And I would add: We are not meant to know. I’m agnostic about an afterlife, but, to be honest, I’m not expecting one, and I’ve provided my views elsewhere. But there is a point worth making, which is that people who believe that their next life is more important than the one they’re currently living often have a perverse, not to say destructive, view on mortality. One only has to look at suicide bombers who believe that their death is a ticket to Paradise.
Having said all that, it’s well known that people with religious beliefs can benefit psychologically in that they often live healthy and fulfilling lives (as the New Scientist article, referenced in the introduction, attests). Personally, I think that when one reaches the end of one’s life, they will judge it not by their achievements and successes but by the lives they have touched. Purpose can best be found when we help others, whether it be through work or family or sport or just normal everyday interactions with strangers.
Friday 17 February 2017
My 2 sided philosophy
In a way, this gets incorporated into Roger Penrose's 3 world philosophy that I discussed last year, but the core principle of my world view, that turns up again and again in my musings, can be best understood as a philosophy in 2 parts, if not 2 worlds. I'm not an academic, so don't expect me to formalise this as I suspect one is meant to, but there is a principle involved here that I wish to make more fundamental than I have done in the past.
This has been prompted, not surprisingly, by various things I’ve read recently, in particular in Philosophy Now (Issues 117 & 118) and a letter I wrote to the Editor of said magazine, which re-iterated some of the ideas that I expressed in my post on Penrose’s 3 Worlds, referenced above.
A great deal of my personal philosophy stems from the view that there are effectively 2 worlds for each and every one of us: an inner world and an outer world; and the confluence and interaction of these 2 aspects of reality pretty well determine how we live our lives, how we navigate relationships and how we effectively determine our destiny.
I’ve even used this dichotomous philosophical principle as a premise for how I write fiction. Basically, a story should include an inner journey and an outer journey where the outer journey is the plot and the inner journey is the character. In fact, writing fiction reinforced my philosophical point of view, when I realised it’s totally analogous to real life. The outer journey is fate and the inner journey is free will. The 2 are complementary rather than contradictory, but the complementarity is even more obvious when one thinks of it in terms of consciousness and the physical world. To illustrate my point, I will insert an edited version of the letter (I referenced above) to the Editor of Philosophy Now.
This is in reference to an essay by Nick Inman, titled “Nowhere Men” (published in Issue 117).
One doesn’t need to argue for a ‘soul’ or a ‘spirit’ to appreciate that some aspects of Inman’s argument have validity without religious connotations. In particular, there are 2 aspects of one’s self, whereby one aspect is subjective and uniquely known only to ‘You’, and another aspect is objective and known to everyone you interact with. But I think the most pertinent point he makes is that it is only through intelligent conscious entities, like us, that the Universe has any meaning at all. In answer to the oft asked question: Why is there something rather than nothing? Without consciousness there might as well be nothing. When you cease to be conscious there is nothing for You. Because consciousness is so ubiquitous and taken-for-granted in our everyday lives, we tend not to consider its essential role in providing reality. In other words, we need both an objective world and subjective consciousness for reality to become manifest.
As you can see, this is almost an ontological manifesto, which suggests that the existence of the Universe and the emergence of intelligent beings are entwined in ways which we prefer to ignore or dismiss. The scientific answer to this is that there is a multiverse of possibly infinite universes, the vast majority of which cannot sustain life. I’ve discussed this elsewhere, but the multiverse is an epistemological dead end in that it explains everything and nothing, which, ironically, is its appeal. We don’t know if there is a metaphysical purpose to our existence, and I’m not arguing that there is; I’m simply pointing out that reality requires both an objective world, called the Universe, and a subjective consciousness, epitomised by our existence.
It is for this reason that the so-called strong anthropic principle (as opposed to the weak principle) has long appealed to me. Neither of the anthropic principles, I should point out, is a scientific principle; they are more like metaphysical premises that can’t be proven or falsified, given our current knowledge. I’m currently reading a highly ambitious and lengthy book by John Hands called Cosmosapiens: Human Evolution from the Origin of the Universe. It’s a comprehensive survey and review of the latest scientific theories concerning cosmology, biological evolution and the emergence of humanity. Not surprisingly, he briefly discusses Brandon Carter’s weak and strong anthropic principles plus John Barrow’s and Frank Tipler’s book-length dissertation on the subject. Effectively, the weak anthropic principle states that the Universe allows conscious intelligent agents to arise because we’re in it, which Hands points out is a tautology – a point I’ve made myself on this blog. The strong anthropic principle effectively states that the Universe specifically allows intelligent agents to exist, otherwise it wouldn’t exist itself. It’s not stated that way, but that’s a reasonable interpretation, and, as you can see, it leans heavily towards teleology, which I’ve also discussed elsewhere. On that point, if one believes in teleology then it’s hard not to conclude that the Universe is deterministic, which means there is no free will. Einstein believed this so strongly that he couldn’t accept the inherent indeterminism displayed by quantum mechanics, and therefore believed that the theory was incomplete and hid an underlying deterministic Universe that we’re yet to discover.
Personally, I believe in free will and a non-deterministic Universe, which creates a paradox for the strong anthropic principle. I resolve this paradox by arguing for a pseudo-teleological Universe, whereby the Universe has all the laws of physics and parameters to allow conscious entities to evolve without determining what they will be in advance. I’ve argued this in a post on the fine-tuned Universe, and elsewhere.
I’m not arguing a religious reason for our existence, though, of course, I don’t know if such a reason exists, and I would argue that neither does anyone else, though many people claim they do. I’m arguing what the evidence tells me. We are the consequence of a lengthy and convoluted evolution that we are still struggling to understand and explain, even down to the molecular level. The Universe has laws and parameters that are ‘finely tuned’ for the emergence of complex intelligent life and we are the evidence. Without consciousness the Universe would have no meaning at all, which is why the strong anthropic principle is apposite if not scientific. Our existence is the only thing that gives the Universe meaning and we are the only entities (that we know of) that have the cognitive capacity to probe that meaning, which we do through science, I should point out, not religion.
Now, anyone who read my post on Penrose’s 3 worlds knows it consisted of the Universe, Mind and Mathematics. So where does mathematics fit into my 2 sided philosophy? Mathematics, as most of us know it and use it, is a bridge between the Universe and the Mind, specifically the human mind. And it’s a bridge that has provided more insights and more meaning than any other we’ve discovered. In fact, the limits of our knowledge of mathematics arguably determine the limits of our knowledge of the Universe, certainly since the times of Galileo and Newton, and especially in the last century. A few years ago, following in the footsteps of John Barrow, I wrote a post called Mathematics as religion. Religion, in its many cultural manifestations, often claims to have access to transcendental truth. Well, I contend that mathematics is our only repository of universal transcendental truths, and Gödel’s Incompleteness Theorem effectively tells us that it’s infinite, so it’s a never-ending endeavour. By corollary, it follows that there are, and always will be, mathematical truths that we don’t know.
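For those who want the formal claim behind that last sentence, Gödel’s first incompleteness theorem can be glossed as follows (this is a standard informal rendering, and I’m simplifying the technical conditions): for any consistent, effectively axiomatised formal system $F$ strong enough to express basic arithmetic, there is a sentence $G_F$ in the language of $F$ such that

$$F \nvdash G_F \quad \text{and} \quad F \nvdash \neg G_F$$

In other words, no single axiomatic system can prove all mathematical truths; whichever system we adopt, there are truths beyond its reach, which is the sense in which mathematics is a never-ending endeavour.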
Last week’s New Scientist (4 Feb 2017) cover story was ‘The Essence of Reality’, which was an attempt to understand what truly underpins the Universe beyond space and time. Some argue that the answer is information, essentially quantum information, which of course is mathematical. The point is, notwithstanding whether that question can ever be answered, quantum mechanics, which is a little over a century old, remains our most successful scientific theory to date, and can only be understood and interpreted through the medium of mathematics.
Footnote: Brandon Carter’s definitions of his 2 anthropic principles.
The weak principle: ‘that what we can expect to observe must be restricted by the conditions necessary for our presence as observers.’
The strong principle: ‘that the universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage.’
Saturday 4 February 2017
When the patients take over the asylum
Oscar-winning filmmaker and left-wing provocateur, Michael Moore, has suggested that Trump’s occupation of the White House has been akin to a coup. It should be pointed out that Moore actually predicted Trump’s win when others were dismissive. Personally, I find it difficult to give Trump credit for any nuanced strategic thinking. I think he’s just a completely inexperienced and incompetent politician with a severe case of power-gone-to-his-head syndrome.
What is indisputable (at a time when facts are disputed every day) is that Trump and his closest advisor, Stephen Bannon, have taken the reins of the presidency with unprecedented zeal and, dare I say it, recklessness. Recklessness, because they are issuing executive orders without consulting the parties that have to enact them and with no apparent regard for the consequences at home and abroad. Stephen Bannon, like Trump, has no experience in political office, but unlike Trump wasn’t elected. He’s been criticised for sexism and racism, even white supremacy, and is best known as the executive chairman of Breitbart News, a website for the ‘Alt-Right’. He is currently Trump’s ‘Chief Strategist’, and is widely believed to be the man behind the new executive order banning entry from specific Muslim-majority countries.
As an outsider (from Australia) I find it almost beyond belief that a new leader (Prime Minister or President) can come into office and, within days, start drafting new laws with immediate effect. Trump gives the impression that he has little regard for the ‘rule of law’ in his country, which was a keynote of Obama’s farewell speech; Obama had no idea that this very issue would be put to the test so soon by his successor. In fact, it seems that Trump’s key advisor, Bannon, who was not even elected by the people, is the man making laws, literally on the run.
When the acting Attorney General (Sally Yates), with over 27 years’ experience, defies a Presidential executive order because she believes it’s unconstitutional, then maybe people in high places should take notice. Obviously, I’m no expert on American constitutional law, but I imagine that issuing executive orders that are legally dubious could lead down the road to impeachment. It’s early days, so Trump and Bannon may temper their newfound egotistical powers, but neither gives the impression of having that inclination. If they continue to issue executive orders that challenge the constitution, or even the intent of the constitution, then eventually Congress is going to say enough is enough. After all, isn’t that the purported role of Congress?
As I say, I’m no expert, but one doesn’t have to be an expert to note that in his first 10 days in office, Trump has pushed the envelope in abusing his newfound presidential powers like no one before him. Another example of overt abuse of presidential authority is the gagging of government scientists, even on social media; tantamount to declaring war on science.
Trump is like the school bully who has been made school captain – no, he’s actually been made school principal, if one extends the metaphor accurately. He is a man who boasts about groping women, who ridicules and humiliates his opponents and detractors, who is a serial liar and who foments hate towards Muslims, Mexicans and refugees. How did a man with these qualities get elected President when we knew all this before he was elected? I don’t completely blame the American people; after all he lost the ‘popular’ vote by 2.9 million. But I do wonder how many, who stayed away from the polling booth, now regret it.
There have been 2 side-effects of Trump’s presidency, one expected and one less obvious. It was reported that a mosque was burned down in Texas (the congregation of the Victoria Islamic Center), which highlights the obvious side-effect of Trump’s anti-Muslim rhetoric. But the local Jewish community has offered its synagogue as a place of worship for the Muslims while their mosque is rebuilt. This is the unexpected second side-effect of Trump’s policies.
I think Americans are generally compassionate, generous and accepting. I lived and worked in America before, during and after 9/11, so I witnessed first hand the inherent optimism of the American people in the face of adversity. I think Obama’s professed optimism in future generations of Americans, that he expressed in his farewell address, is well founded. I think Trump will bring out the best and the worst in the American people, but the best will prevail.
Meanwhile, in the face of this new authoritarian leadership of the so-called free world (isn’t that an oxymoron?) we could do a lot worse than follow the advice of former Doctor Who actor, David Tennant.
Addendum: Yes, I've changed the third paragraph.
Sunday 1 January 2017
The smartest man in the room
In my last post I made passing mention of Barry Jones, who is now 84 and has just written a book, Knowledge Courage Leadership. When I was a kid, growing up in the newly discovered and infinite possibilities of ‘television-land’, Barry Jones was a TV quiz champion on Pick-a-Box, sponsored by BP and hosted by Bob Dyer, an ex-pat American. In the days (decades) before the internet and Google, Barry had a truly encyclopaedic mind, and when he entered virtually every Australian’s living room, he was quite literally the smartest man in the room.
Many years later, when he published Dictionary of World Biography (in the late 80s) someone I worked with at the time, who was widely read and a self-imagined scholar, told me that Barry Jones was a 'savant', which he meant in the most derogatory sense. In other words, whilst Barry could summon facts at will, he had no analytical skills and no real intelligence worthy of the name. Looking back, I would put that down to intellectual jealousy, but, even at the time, I thought his observation very wide of the mark.
The point is, having read his latest offering, I think the sobriquet, ‘smartest person in the room’, still stands, especially compared to the current crop of politicians we have attempting to govern our country. At 84, he displays more vision than anyone currently involved in politics in Australia. For a start, he’s pretty scathing about the nature of what he calls ‘retail politics’, where the only criterion for a decision or a policy is if it can be ‘sold’ to the electorate. In the so-called ‘post-truth’ era, most vividly demonstrated by Donald Trump’s recent election campaign, ‘byte-sized’ slogans overrun and out-rate attempts at evidence-based explanations. In fact, he uses the word, ‘evidence’, quite a lot in his own preferred version of political discourse.
He gives a summing up of the political leaders in this country that he has known or met or worked with, giving a subjective yet honest appraisal. In his time in politics, he was told that he didn’t have a ‘killer instinct’, which means he could never engage in character-assassination, which has become increasingly an integral component of the ‘game’ as it is played in Australia. In fact, it’s probably the most important part of the game if you have any aspirations of party leadership.
He then goes on to do the same for a number of world leaders, whom he has personally had some engagement with; some more so than others. At the end of the book, he gives a rather scholarly and informed analysis of the French Revolution, explaining, as he does, why he considers it unique in the history of Western civilisation and why it is still relevant to current global politics. It basically illustrates how precarious our civilised existence is when political power and economic subsistence are no longer in balance. I’m probably doing him an injustice in attempting to sum up his treatise with a one-liner; but that was the message I received. It’s happened in a number of revolutions, when paranoia and violence combine to completely destabilise a nation and drive it into civil war. There are examples in evidence right now, not to mention the ones from last century.
But the most important part of the book for me was a chapter (or section) titled: Evidence v. Opinion / Feeling / Interest; the attack on scientific method. It was an address he gave, apparently, at the Australian National University for the European Molecular Biology Laboratory (EMBL) on 2 July 2014. He starts off with a quote from Don Watson, heavily laden with sarcasm:
The people are sovereign… to hell with the sovereignty of scientific facts, popular opinion will determine if the Earth is warming and what to do about it, just as it determined the answer to polio and the movement of the planets.
As anyone who is a regular reader of this blog knows, this is a subject close to my heart. But Jones gives it a perspective that I hadn’t considered before. He points out that as the number of university graduates has increased in Australia and the information revolution has exploded via the internet, there has been a ‘dumbing down’ in areas concerning legitimate science, evidence-based knowledge and the consequential political decision-making that should be informed by such learning.
To quote: Paradoxically, the Knowledge Revolution has been accompanied by a persistent ‘dumbing down’, with IT reinforcing the personal and immediate, rather than the complex, long-term and remote.
Barry Jones was Science Minister from 1983 to 1990 (the longest serving in Australian politics) and he maintains, in his own words: ‘an intense interest in science/research and its implications for public policy and politics generally.’
He wrote a book in 1982, Sleepers, Wake!, which I must confess I haven’t read, even though I always took an interest in what he said in the media. According to his own appraisal: ‘Three decades on, my central thesis stands up pretty well.’ And his ‘central thesis’ was ‘trying to predict the social, economic and personal impact of technological change, [but] in 1982 I was on my own.’ Note that I alluded to Barry’s predictions in my last post (Political Irony).
What makes Barry Jones exceptional in the world of politics is his grasp of the enormous gap between political expediency and reality. Yes, reality. I will allow Barry’s own words to illustrate my point:
I can claim to have put six or seven issues on the national agenda, but I started talking about them 10, 15, 20 years before audiences and my political colleagues were ready to listen. In politics, timing is (almost) everything and the best time to raise an issue is about ten minutes before its importance becomes blindingly obvious.
We live in an era when science totally governs our lives, yet it is so subliminal, so ubiquitous, so commonplace, that we fail to appreciate that fundamental fact. Most of the public are science-illiterate in the sense that they see absolutely no value in acquiring scientific knowledge. The argument is that you don’t need to know the laws of thermodynamics to drive a car – in fact, you don’t need to know anything technical about the dynamics of a vehicle to operate it.
This is a fair assessment as far as it goes, but when it comes to making decisions about issues like climate change, vaccinations or the teaching of scientifically validated theories like evolution, a large percentage of the population, even in well-educated societies, is plain ignorant.
The problem, as Barry points out in far more articulate and erudite prose than I can muster, is that politicians, who are often as ignorant as their electorate, exploit this shortcoming by offering slogan-bearing opinions in lieu of evidence-based facts, knowing that emotion will always win over rationality if the relevant emotional buttons are pushed.
He laments the fact that complex explanations of complex phenomena are considered simply ‘too hard’, and then, to illustrate his point, provides an entire chapter on the explanation of climate change and its history, going back to the 19th Century and even earlier. He gives the example (amongst others) of Tony Abbott (before he became Prime Minister of Australia, when he was Leader of the Opposition) stating that ‘carbon dioxide was invisible, weightless and could not be measured’. In fact, carbon dioxide is not weightless and is easy to measure. We know from chemistry that ‘On burning, each tonne of coal produces 3.67 tonnes of CO2… (a confirmation of Lavoisier)’. This is a prime example of a science-illiterate politician (a future PM, no less) exploiting a largely science-illiterate voting public.
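Incidentally, the 3.67 figure is just high-school chemistry, assuming (for the back-of-envelope) that coal is pure carbon:

C (atomic mass 12) + O2 (molecular mass 32) → CO2 (molecular mass 44)
44 ÷ 12 ≈ 3.67

The extra mass comes from the oxygen drawn out of the air, which is why the CO2 produced weighs more than the coal burned – and why it is trivially easy to measure.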
Jones makes the salient point that ‘Not to choose is to choose’, citing ‘French statesman and diplomat Charles de Talleyrand (1754-1838)… failure to act in a crisis has the same effect as an intervention: in practice there is no neutrality.’
I know, and I imagine Barry Jones knows as well, that the people who are stubbornly opposed to climate change are not persuaded by facts or evidence and often provide their own facts and evidence to make their point. Anyone who has studied science, even to the rudimentary level that I have, knows that science is complex, not easy to understand or communicate and can rarely be broken down into byte-sized chunks for easy digestion. Nevertheless, as I alluded to earlier and in other posts, I’m often struck by the obvious contradiction between our total reliance on science and our ability to ignore or obfuscate its message when it conflicts with our ideological agendas. Science is our best tool for predicting the future and for planning for future generations on this planet, yet very few politicians, not to mention commentators in the media, give science more than lip service in providing this essential role. One of the problems is that its message is often negative and pessimistic, which is when we should take most heed, yet politicians can’t win elections with negative messages. As a consequence, we only hear the negative message when its effects have become so obvious it can no longer be ignored.
Tuesday 13 December 2016
Political Irony
There’s a strange phenomenon happening worldwide (in the Western world, at least) whereby centrist politics is not working, or should I say: not winning. Politics naturally divides itself into two because the population naturally divides itself into two: right-leaning and left-leaning, though there’s a broad spectrum.
There is evidence that our genetic makeup contributes to which way we lean, possibly even more than environmental factors, which would explain why there seems to be roughly an even divide and why almost all societies seem to be split between the two. It comes down to personality traits as I’ve discussed once before, albeit a long time ago. Basically, conservatives are more conscientious, arguably less impulsive and more resistant to change. I know that’s being a bit stereotypical but studies pretty well support that view. Liberal-minded individuals are more open to change and diverse ideas. The thing is that it would seem functional societies need both types: people to challenge the status quo and people to maintain the status quo.
But recent events in Britain, America, parts of Europe, and here in Australia, indicate that politics is becoming more polarised, virtually worldwide, with people on both sides of the political divide becoming disenchanted with the status quo. The status quo has been to go for the centre in order to grab the highest number of people on both sides, but we’ve seen a clear desertion of the centre when it comes to polling and actual elections.
I’m not an economist or a political commentator, but I am a participant in the process and an observer. I should say at the outset, something that I don’t hide, which is that my political leanings are definitely towards the left, so that will have a subjective influence on my particular interpretation of events.
I don’t believe that there is a single factor, but a confluence of factors, some of which I’ll try and elaborate on. However, I think that we are going through a socio-economic change not unlike the one that must have been experienced during the industrial revolution, only this time it’s a technological revolution caused by automation. Basically, automation is putting people out of jobs in the Western world, and I would suggest that this is only the beginning. I know this, partly because I work in the industry where it’s taking place: industrial engineering. But I can remember Barry Jones, Australia’s first science minister, foretelling this coming ‘revolution’ some 30 or more years ago. Barry Jones was most unusual in that he was probably more scientist than politician; certainly, he was a scholar of the highest calibre, which made him something of an oddity in politics.
I would argue that our economic paradigms are yet to catch up with what’s happening in the workplace, not that I’m claiming to have any solutions. But if things stay as they are then the divide between those with jobs and those without is going to become greater as technological advances in robotics and data management become more ubiquitous. So what about all the jobs going offshore? Yes, cheap labour is being exploited in countries with lax OHS regulations and where the cost of living is cheap. But, despite what Donald Trump told his voters, manufacturing output has increased in America over the last decade, not decreased, while manufacturing employment has gone down. How do I know this? Chas Licciardello, the nerd on Planet America, showed the graphics on one of the shows he co-hosted with John Barron, explaining that this was due to automation and not offshore labour, otherwise the manufacturing graphic would have declined with the employment graphic.
But, as I alluded to earlier in my discourse, there are other factors involved, not least the still lingering effects of the GFC (Global Financial Crisis), which, need I remind anyone, actually started in America with the sub-prime mortgage debacle. So that also had its biggest impact on the least affluent in society, or most economically vulnerable, and they are the ones who are having the biggest say in our collective democracies. We should not be surprised that they feel betrayed by the political system and that they want to turn back the clock to a time when jobs weren’t so scarce and they weren’t at the mercy of the banks.
Someone once said (no idea who it was) that when times get tough, economically, societies have a tendency to turn against their fellows. People look for someone to blame and we have witch-hunts (which actually were the consequence of dire circumstances in medieval times). One only has to look at pre-war Europe when Jews were demonised and blamed for everyone else’s economic plight. John Maynard Keynes warned that the peace settlement at the end of World War 1 (the Treaty of Versailles) would bankrupt Germany and start another war, which, of course, we now all know it did.
And now we are in similar, if not exactly the same, circumstances where an election candidate can gain substantial ‘populist’ votes for promising to stop immigrants from taking our jobs and undermining our society with un-Western cultural mores. Protectionism and isolationism are suddenly attractive when globalism has never been more lucrative. And it is the right wing of politics, and often the far right, in whatever country, that has had the most appeal to those who feel disenfranchised and essentially cheated by the system. Nowhere is this more apparent than in Donald Trump’s recent win in the American presidential election. He has demonstrated just how divided America currently is, and the division is largely between the big cities and the rural areas, just as it is in Australia and also England with the recent Brexit vote. It’s the people in outlying regions that feel most affected by the economic crisis – this is a worldwide phenomenon in the Western world. It’s a wake-up call to all mainstream political parties that they can’t leave these people behind or think they can win elections just by appealing to city voters.
However, as alluded to in the title, there is an irony here – in fact, there are a few ironies. Firstly, all politicians know, including the ones who don’t admit it, that immigration, in the long term, is good for the economy. Countries like Australia, America, Canada and New Zealand are dependent on immigration for their continued economic growth. There is a limit to economic growth by population growth - and whilst that’s another issue which will need to be addressed some time before this century is over - it’s not what the current political climate is about. The other irony, particularly in America, is that Trump will promote deregulation of commerce, which is what created the financial crisis, which is what spawned the disenfranchised and unemployed workers, who voted him into office.
There is a further irony in that many of these populist leaders – certainly in Australia and America – have an almost virulent opposition to science when it doesn’t suit their ideological agenda. This is particularly true when it comes to climate science. Why is this ironic? Because science has created all the affluence, the infrastructure and the extraordinary communication convenience that everyone in the West considers their birthright.
A recent article in New Scientist (3Dec16, pp.29-32) claimed that people on both sides of an ideological divide will use whatever science they believe to bolster their position. This is called confirmation bias, and we are all guilty. But the issue with climate science is that many on the right believe that it’s a conspiracy by scientists to keep themselves in a job. Most people find this ludicrous, but anyone who is a climate-change sceptic (at least in Australia) believes this with absolute conviction. One Australian politician (recently elected into the Senate) claimed: “I know science fiction when I see it”. How could you argue with that? Not with ‘science facts’, obviously.
Somehow, all these issues get tied to the opposition to gay rights and gay marriage, which one can understand in the classic conservative-versus-liberal political arena. What they have in common is a desire to turn back the clock to when things were simpler: men were men and women were women; and marriage was between the sexes, not within the same sex. So Trump’s slogan, “Make America great again”, is also a call to turn back the clock by bringing in protectionism, stopping immigrants from taking jobs and stopping jobs from going offshore – back to when Americans made American cars for Americans to drive and didn’t import them from Japan or Europe because they were more fuel-efficient. In fact, he’d love to go back to when fossil fuels were easy to access and there was no limit on their supply. Addiction to oil is arguably the hardest addiction for Western nations to overcome, and, until we do, we really will be living in the past.
But the gay marriage issue is like a marker in the political sand, because one day, like abolition of slavery and women’s suffrage, it will become the status quo and it will be valued and defended equally by both sides of politics. We are in a transition: politically, culturally, technologically and economically.
Wednesday 7 December 2016
How algebra turned mathematics into a language
A little while ago I wrote a post arguing that mathematics as language was just a metaphor. I’ve since taken the post down, though those who subscribe may still have a copy. In the almost 10 years I’ve been writing this blog it’s only the second time I’ve deleted a post. The other occasion was very early in its life when I posted an essay on existentialism (from memory), only to replace it with something more relevant.
The reason I took the post down was because I thought I was being a bit petty in criticising some guy on YouTube who was probably actually doing some good in the world, even if I disagreed with him on a philosophical level. Instead, I wrote a comment on his video, challenging the premise of his talk that the reason mathematics is ‘difficult’ for many people is because it’s not taught as a language. I would still challenge the validity of that premise, but I would now change my own approach by acknowledging that there is a sense in which mathematics is a language, but not in a lingua franca sense.
In my last post – the review of Arrival – language and communication are major themes, and I make mention of a piece of expositional dialogue that I thought very insightful and stuck in my brain as a revelatory thought. To remind everyone: it was the realisation that language determines the limits of what we can think because we all think in a language. In other words, if a language doesn’t define the specific concepts we are trying to comprehend then we struggle to conjure up those concepts, and mathematics provides a good example.
The reason that mathematics is best not construed as a language is because mathematics, as it’s generally practiced, has its own language and that language is algebra. As I’ve said before: mathematics is not so much about numbers as the relationship between numbers, and the efficacy of algebra is that it allows one to see the relationships without the numbers.
And this is the thing: some people find it easier to think in algebra than others. I will illustrate with examples.
If A = k/B then B = k/A
If k is a constant (it can’t change) and A and B are variables, then there is an inverse relationship between A and B. In other words, if A gets larger then B must get smaller, and vice versa. This can be written as A ∝ 1/B or B ∝ 1/A, where ∝ means ‘is proportional to’. Note that if the number on the bottom gets smaller then the whole term must get larger and, of course, the converse is also true: if the number on the bottom gets larger then the whole term must get smaller.
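A quick numerical illustration of my own: let k = 12. If A = 3 then B = 12/3 = 4; double A to 6 and B halves to 12/6 = 2. Whatever values you choose, the product A × B stays fixed at 12.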
People who are familiar with these concepts think this automatically. They also know that if you move a term from one side of an equation to the other, then you either invert it or take its negative. So if you have a language that captures these concepts, then you can think in these concepts with no great effort. It also means that you are not easily intimidated by equations.
To give another common example: the distributive rule, which is arguably the most commonly used rule in algebra.
A = B(C + D) is the same as A = BC + BD
And if A = -B(C - D) then A = BD – BC
(Note that multiplying by a minus changes the sign: from + to − and − to +.)
We could have done this differently, because −(C − D) = D − C and B(D − C) = BD − BC (so the same answer).
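A quick check with numbers of my own choosing: let B = 2, C = 3, D = 5. Then B(C + D) = 2 × 8 = 16 and BC + BD = 6 + 10 = 16. Likewise −B(C − D) = −2 × (−2) = 4 and BD − BC = 10 − 6 = 4.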
This is all very simple stuff and it can be extended to include square roots (including square roots of −1), logarithms, trig functions and so on. Even calculus is just algebra, with numbers disappearing into zero via the inverse of infinity (called infinitesimals).
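To give the flavour of it: the slope of x² between x and x + h is ((x + h)² − x²)/h = (2xh + h²)/h = 2x + h, and as h shrinks towards zero (becomes infinitesimal) the slope becomes exactly 2x. That last shrinking step aside, it’s pure algebra.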
One of the problems in learning mathematics is that we are trying to learn new concepts and simultaneously a new ‘language’ of symbols. But if the language of algebra allows one to think in new concepts, then a hurdle becomes a springboard to new knowledge.
Sunday 27 November 2016
Arrival; a masterclass in storytelling
Four movie reviews in one year; maybe I should change the title of my blog – no, just kidding. Someone (either Jake Wilson or Paul Byrnes from The Age) gave it the ultimate accolade: ‘At last, a science fiction movie with a brain.’ They also gave it 3.5 stars but ended their review with: ‘[the leads: Amy Adams, Forest Whitaker and Jeremy Renner] have the chops to keep us watching even when the narrative starts to wobble.’ So they probably wouldn’t agree with me calling it a masterclass.
It’s certainly not perfect – I’m not sure I’ve seen the perfect movie yet – but it’s clever on more than one level. I’m always drawn to good writing in a movie, which is something most people are not even aware of. It was based on a book, whose author’s name escaped me as a couple in front of me got up to leave just as it came up on the screen. But I have Google, so I can tell you that the screenplay was written by Eric Heisserer, and Ted Chiang wrote the novella, “Story of Your Life”, upon which it is based. French-Canadian director Denis Villeneuve has also made Prisoners and Sicario, neither of which I’ve seen, but Sicario is highly acclaimed.
It would be remiss of me not to mention the music and soundscape, which really add another dimension to this movie. I noticed that the beginning and end scores were by Max Richter, whom I admire in the contemporary classical music scene, though the overall score is credited to Jóhann Jóhannsson. Some of the music reminded me of Tibetan music with its almost subterranean tones. Australia also gets a bit of 'coverage', if that's the right word, though not always in a flattering manner. Forest Whitaker's character reminds us how we all but committed genocide against the Aboriginal people.
I haven’t read the book, but I’m willing to give credit to both writers for producing a ‘science fiction story with a brain’. Science fiction has a number of subgenres: the human diaspora into interstellar space; time travel; alien worlds; parallel universes; artificial intelligence; dystopian fiction, utopian fiction and the list goes on, with various combinations. The title alone tells us that this is an Alien encounter on Earth, but the movie keeps us guessing as to whether it’s an invasion or just a curious interloper or something else altogether.
I’ve written elsewhere that narrative tension is one of the essential writing skills and this story has it on many levels. To give one example without giving the plot away, there is a sequence of narrative events where we think we know what’s going to happen, with the suspense ramping up while we wait for what we expect to happen to happen, then something completely unexpected happens, which is totally within the bounds of possibility, therefore believable. In some respects this sums up the whole movie because all through it we are led to believe one thing only to learn we are witnessing something else. It’s called a reversal, which I’m not always a fan of, but this one is more than just a clever twist for the sake of being clever. Maybe that’s what the reviewer meant by ‘…when the narrative starts to wobble’. I don’t know. I have to confess I wasn’t completely sold, yet it was essential to the story and it works within the context of the story, so it’s part of the masterclass.
One of the things that struck me right from the beginning is that we see the movie almost in first person – though, not totally, as at least one cutaway scene requires the absence of the protagonist. I would not be surprised if Ted Chiang wrote his short story in the first person. I don’t know what nationality Ted Chiang is, but I assume he is of Chinese extraction, and the Chinese are major players in this movie.
Communication is at the core of this film, both plot and subplot, and Amy Adams’ character (Louise Banks) makes the pertinent point in a bit of expositional dialogue that was both relevant to the story and relevant to what makes us human: that language, to a large extent, determines how we think because, by the very nature of our brains, we are limited in what we can think by the language that we think in. That’s not what she said but that was the lesson I took from it.
I’ve made the point before, though possibly not on this blog, that science fiction invariably has something to say about the era in which it was written and this movie is no exception. Basically, we see how paranoia can be a dangerous contagion, as if we need reminding. We are also reminded how wars and conflicts bring out the best and worst in humanity with the worst often being the predominant player.
Sunday 13 November 2016
When evolution is not evolution
No, I’m not talking about creationism (a subject I’ve discussed many times on this blog) but a rather esoteric argument produced by Donald D Hoffman and Chetan Prakash in an academic paper titled Objects of Consciousness. Their discussion on evolution is almost a side issue, and came up in their responses to the many objections they’ve fielded. I read the paper when I was sent a link by someone who knows I’m interested in this stuff.
Donald Hoffman is a cognitive scientist with a Ph.D. in Computational Psychology and is now a full professor at University of California, Irvine. Chetan Prakash is a Professor Emeritus at California State University, San Bernardino and has a Master of Science in Physics and a Master of Science in Applied Mathematics.
I should point out at the outset that their thesis is so out there that I seriously wondered if it was a hoax. But given their academic credentials and the many academic citations and references in their paper, I assume that the authors really believe in what they’re arguing. And what they’re arguing, in a nutshell, is that everyone’s (and I mean every person’s) perception of the world is false, because, aside from conscious agents, everything else, including spacetime, is impermanent.
Their paper is 20 pages long (including 5-6 pages of objections and replies), most of it densely worded and interspersed with diagrams and equations. To distil someone’s treatise into a single paragraph is always a tad unfair, so I’ll rely heavily on direct quotations and references to impart their arguments. Besides, you can always read the entire paper for yourself. Basically, they argue that ‘interacting conscious agents’ are the only reality and that nothing else exists ‘unperceived’. They formulate a mathematical model of consciousness, from which they derive the wave function that is the bedrock of quantum mechanics (which I’ll refer to as QM for brevity). In other words, they argue that the Copenhagen interpretation of QM requires consciousness to bring objects into reality (consciousness itself excepted), and those objects are all impermanent.
It’s a well-known philosophical conundrum that you can’t prove that you’re not a ‘brain-in-a-vat’, and theirs is a similar point of view in that it can’t be proved that they’re wrong, even though, as they point out themselves, we mostly all believe their view is wrong. I don’t know of anyone (other than the authors) who thinks that the world ceases to exist when they’re not looking. This is known as solipsism and there is a very good argument against solipsism even though it can’t be proved wrong. In fact, solipsism is absolutely true when you’re in a dream, so it’s not always wrong. The point is that when we’re in a dream, despite all its inconsistencies, we actually don’t know we’re in a dream, so how can you be sure you’re not in a dream when you’re consciously awake? The argument against solipsism is that it can only be held by one person: it’s impossible to believe that everyone else is a solipsist too.
In the objections, item 6, they ‘reject solipsism’, yet ‘also reject permanence, viz., the doctrine that 3D space and physical objects exist when they are not perceived [but not conscious agents]. To claim that conscious agents exist unperceived differs from the claim that unconscious objects and space-time exist unperceived.’ In other words, consciousness is the only reality, a point they make in response to Objection 19: ‘reality consists of interacting conscious agents.’ But if one takes this seriously, then even the bodies that we take for granted don’t exist ‘unperceived’ whilst our consciousness does. It’s utter nonsense, except in a dream. What they are describing is exactly the reality one perceives in a dream, so their theory is effectively that the reality we all believe we inhabit is, in effect, a dream. Which is logically a variation on solipsism. The only difference is that we all inhabit the same dream together. So we’re all brains in a vat, only connected. The authors, I’m sure, would reject this interpretation, yet it fits exactly with what they’re arguing. Only in a dream do objects, including our own bodies, cease to exist unperceived.
Evolution comes up a lot in their paper because one of the centrepieces of their thesis is that evolution by natural selection produces perceptions that favour ‘fitness’ over ‘truth’. They claim to run ‘genetic algorithms’ that show that evolution by natural selection benefits perception tuned for ‘fitness’ over ‘accuracy’. The point is that we must take this assertion at face value, because we don’t know what algorithms they’re using or how they even define fitness, perception and truth. In fact, Objection 12 asks this very question. Part of the authors’ response goes: ‘For the sake of brevity, we omitted our definition of truth and perception… But they are defined precisely in Monte Carlo simulations of evolutionary games and genetic algorithms…’
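Since the paper doesn’t reproduce their algorithms, here, purely to illustrate the general idea, is a toy simulation of my own (a minimal sketch in Python; the payoff function, the two-category percepts and the agents are all my invented assumptions, not the authors’ code). The gist: if payoff is not a monotonic function of the true quantity, an agent whose perceptual categories track payoff (‘fitness’) out-competes an agent whose categories track the true quantity (‘truth’).

import random

# Toy model (my own construction, not from the paper): resources have a true
# quantity between 0 and 10, but payoff is 'humped', peaking at 5, so more of
# the quantity is not always better. Each agent perceives only two categories.

def payoff(x):
    return 10 - 2 * abs(x - 5)  # best at x = 5, worst at the extremes

def truth_percept(x):
    return 'high' if x >= 5 else 'low'  # categories track the true quantity

def fitness_percept(x):
    return 'good' if payoff(x) >= 5 else 'bad'  # categories track payoff

def run(trials=100000, seed=1):
    rng = random.Random(seed)
    truth_total = fitness_total = 0.0
    for _ in range(trials):
        a, b = rng.uniform(0, 10), rng.uniform(0, 10)
        # each agent chooses between the same two resources,
        # using only its own percept of the first one
        truth_total += payoff(a if truth_percept(a) == 'high' else b)
        fitness_total += payoff(a if fitness_percept(a) == 'good' else b)
    print('truth-tuned agent, average payoff:   %.2f' % (truth_total / trials))
    print('fitness-tuned agent, average payoff: %.2f' % (fitness_total / trials))

run()  # prints roughly 5.00 versus 6.25

The ‘truth’ agent’s categories are perfectly accurate about the quantity but carry no information about payoff, so it does no better than chance; the payoff-tracking agent wins. That, as I understand it, is the shape of the authors’ claim, though their actual simulations may be nothing like this.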
In particular, the authors use vision to make their case. It’s well known that the brain creates a facsimile of what we see in ways that we are still trying to understand, and which, to date, we’ve been unable to replicate to the same degree of accuracy in artificial intelligence (AI). But theoretical algorithms and Monte Carlo simulations aside, we have the means to compare what we subjectively see with an objective representation.
It so happens that we have invented devices that create images (both stationary and dynamic) through chemical-electronic-mechanical means independently of the human brain, and they show remarkable, but unsurprising, fidelity to what our brain perceives subjectively. Now, you might say that the same brain perceives this simulated vision, so one would expect it to provide the same image. I think this is a long bow to draw, because the image effectively gets ‘processed’ twice: once through the device and once through the brain, yet the result is unequivocally the same as without the interim process. In fact, the interim process can show what we miss, like the famous example of a gorilla moving through a room while you are concentrating on a thrown ball. But, in the context of their thesis, the camera is not a conscious entity yet it captures an image that is supposedly nonexistent when unperceived. And cameras can be set up to capture images without the interaction of so-called ‘conscious agents’.
Now the authors are correct when they point out that colour, for example, is a completely psychological phenomenon – it only exists in some creature’s mind, and it varies from species to species – this is well known and well understood. We also know that it’s caused by reflected light which can be scientifically explained by Richard Feynman’s (I know it’s not his alone) QED (Quantum Electrodynamics) and that the subjective experience of colour is a direct consequence of the frequency of electromagnetic radiation. But the fact that colour is subjective doesn’t make the objects, from which the effect is consequential, subjective as well.
Regarding the other mathematical contribution to their thesis, the authors have created a mathematical model of consciousness, from which they derive the wave function for QM. I’m not a logician, so I can’t say one way or another how valid this is. However, it should be pointed out that Erwin Schrödinger, who originally proposed the wave function in his famous eponymous equation, didn’t derive it from anything. So the authors claim they’ve done something that the original creator of the wave function couldn’t do himself. As Richard Feynman once said: ‘Schrödinger’s equation can’t be derived from anything we know.’ However, the authors claim it can be derived from consciousness. I’m sceptical.
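For reference, the (time-dependent) equation in question is iħ ∂ψ/∂t = Ĥψ, where ψ is the wave function and Ĥ is the Hamiltonian (energy) operator. Schrödinger arrived at it by analogy with classical wave mechanics, not by derivation from first principles.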
You may wonder what all this has to do with the title of this post. Well, in response to objection 19, the authors propose to come up with a ‘new theory of evolution’ based on their theory of conscious agents. To quote: ‘When the new evolutionary theory is projected onto the spacetime perceptual interface of H. Sapiens we must get back the standard evolutionary theory.’ This means that the DNA, and the molecules that make the DNA, that allowed consciousness to evolve are actually dependent on said consciousness, so the ‘new theory of evolution’ must logically contradict the ‘standard theory of evolution’.
As part of their thesis, the authors make an analogy between a computer desktop and spacetime, only, the way they describe it, it appears to be more than an analogy to them.
Space and time are the desktop of our personal interface, and three-dimensional objects are icons on the desktop. Our interface gives the impression that it reveals true cause and effect… But this appearance of cause and effect is simply a useful fiction, just as it is for the icons on the computer desktop.
(The interface, to which they refer, is a ‘species-specific interface’, which means it’s a human consciousness interface. They don’t say if this interface applies to other sentient creatures, or just us.)
The issue of cause and effect being a ‘useful fiction’ was taken up by someone (authors of objections are not given) in objection 17, to which the authors of the theory responded thus:
Our views on causality are consistent with interpretations of quantum theory that abandon microphysical causality… The burden of proof is surely on one who would abandon microphysical causation but still cling to macrophysical causation.
I could respond to this challenge, but it’s not relevant to my argument. The point is that the authors obviously don’t ‘cling to macrophysical causation’, which I would contend creates a problem when discussing evolutionary theory. The point is that according to every discussion on biological evolution I’ve read, extant species are consequentially dependent on earlier species, which means there is a causal chain going back to the first eukaryota. If this causal chain is a ‘useful fiction’ then it is hard to see how any theory of evolution that excludes it could be called evolutionary. With or without this useful fiction, the authors ‘new theory’ turns evolution on its head, with conscious agents taking precedence over physical objects, including species, all of which are impermanent. In spite of this ontological difficulty, the authors believe that when they ‘project’ their ‘new theory’ onto the ‘species-specific interface’ of impermanent spacetime (which doesn’t exist unperceived), the old ‘standard theory of evolution’ will be found.
I’ve left a comment on the bottom of the web page (link given in intro above) which challenges this specific aspect of their theory (using different words). If I get a response I’ll update this post accordingly.
Donald Hoffman is a cognitive scientist with a Ph.D. in Computational Psychology and is now a full professor at the University of California, Irvine. Chetan Prakash is a Professor Emeritus at California State University, San Bernardino, and has a Master of Science in Physics and a Master of Science in Applied Mathematics.
I should point out at the outset that their thesis is so out there that I seriously wondered if it was a hoax. But given their academic credentials and the many academic citations and references in their paper, I assume that the authors really believe in what they’re arguing. And what they’re arguing, in a nutshell, is that everyone’s (and I mean every person’s) perception of the world is false, because, aside from conscious agents, everything else, including spacetime, is impermanent.
Their paper is 20 pages long (including 5-6 pages of objections and replies), most of which is densely worded, interspersed with some diagrams and equations. To distil someone’s treatise into a single paragraph is always a tad unfair, so I’ll rely heavily on direct quotations and references to impart their arguments. Besides, you can always read the entire paper for yourself. Basically, they argue that ‘interacting conscious agents’ are the only reality and that nothing else exists ‘unperceived’. They formulate a mathematical model of consciousness, from which they derive a wave function that is the bedrock of quantum mechanics (which I’ll refer to as QM for brevity). In other words, they argue that, in line with the Copenhagen interpretation of QM, consciousness brings objects into reality, and that everything except consciousness is impermanent.
It’s a well known philosophical conundrum that you can’t prove you’re not a ‘brain-in-a-vat’, and theirs is a similar point of view, in that it can’t be proved wrong, even though, as they point out themselves, most of us believe it is. I don’t know of anyone (other than the authors) who thinks the world ceases to exist when they’re not looking. This is known as solipsism, and there is a very good argument against it, even though it can’t be disproved. In fact, solipsism is absolutely true when you’re in a dream, so it’s not always wrong. The point is that when we’re in a dream, despite all its inconsistencies, we don’t actually know we’re in a dream, so how can you be sure you’re not in a dream when you’re consciously awake? The argument against solipsism is that it can only be held by one person: it’s impossible to believe that everyone else is a solipsist too.
In the objections (item 6), they ‘reject solipsism’, yet ‘also reject permanence, viz., the doctrine that 3D space and physical objects exist when they are not perceived [but not conscious agents]. To claim that conscious agents exist unperceived differs from the claim that unconscious objects and space-time exist unperceived.’ In other words, consciousness is the only reality, a point they make in response to Objection 19: ‘reality consists of interacting conscious agents.’ But if one takes this seriously, then even the bodies we take for granted don’t exist ‘unperceived’, whilst our consciousness does. It’s utter nonsense, except in a dream. What they are describing is exactly the reality one perceives in a dream, so their theory amounts to the claim that the reality we all believe we inhabit is a dream, which is logically a variation on solipsism. The only difference is that we all inhabit the same dream together. So we’re all brains in a vat, only connected. The authors, I’m sure, would reject this interpretation, yet it fits exactly with what they’re arguing. Only in a dream do objects, including our own bodies, cease to exist unperceived.
Evolution comes up a lot in their paper because one of the centrepieces of their thesis is that evolution by natural selection produces perceptions that favour ‘fitness’ over ‘truth’. They claim to run ‘genetic algorithms’ that show natural selection tunes perception for ‘fitness’ over ‘accuracy’. The point is that we must take this assertion at face value, because we don’t know what algorithms they’re using or how they even define fitness, perception and truth. In fact, Objection 12 asks this very question. Part of the authors’ response goes: ‘For the sake of brevity, we omitted our definition of truth and perception… But they are defined precisely in Monte Carlo simulations of evolutionary games and genetic algorithms…’
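To make the ‘fitness beats truth’ claim concrete, here is a minimal toy simulation of my own construction (the authors’ actual algorithms aren’t published in the paper, so everything below, including the payoff function, is an assumption on my part). The one ingredient borrowed from their argument is a payoff that is non-monotonic in the true quantity of a resource, so an agent that perceives payoff outcompetes an agent that perceives quantity:

    import random

    # Toy sketch only: the payoff of a resource peaks at a middling quantity
    # (too little or too much is equally bad), so ranking resources by payoff
    # differs from ranking them by their true quantity.
    def payoff(quantity):
        return max(0.0, 1.0 - 4.0 * abs(quantity - 0.5))

    def run_trials(n=100_000):
        truth_total = fitness_total = 0.0
        for _ in range(n):
            a, b = random.random(), random.random()  # true quantities of two resources
            truth_total += payoff(max(a, b))            # 'truth' agent picks the larger quantity
            fitness_total += max(payoff(a), payoff(b))  # 'fitness' agent picks the higher payoff
        return truth_total / n, fitness_total / n

    print(run_trials())  # the fitness-tuned agent accumulates the higher average payoff

This doesn’t vindicate their thesis, of course; it merely shows the kind of result a simulation with a non-monotonic payoff could be expected to produce.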
In particular, the authors use vision to make their case. It’s well known that the brain creates a facsimile of what we see in ways we are still trying to understand, and which, to date, we have failed to replicate to the same degree of accuracy in artificial intelligence (AI). But theoretical algorithms and Monte Carlo simulations aside, we have the means to compare what we subjectively see with an objective representation.
It so happens that we have invented devices that create images (both stationary and dynamic) through chemical, electronic and mechanical means, independently of the human brain, and they show remarkable, but unsurprising, fidelity to what our brain perceives subjectively. Now, you might say that the same brain perceives this simulated vision, so one would expect it to provide the same image. I think this is a long bow to draw, because the image effectively gets ‘processed’ twice: once through the device and once through the brain, yet the result is unequivocally the same as without the interim process. In fact, the interim process can show what we miss, like the famous example of a gorilla walking through the scene while viewers concentrate on counting ball passes. But, in the context of their thesis, the camera is not a conscious entity, yet it captures an image that is supposedly nonexistent when unperceived. And cameras can be set up to capture images without the interaction of so-called ‘conscious agents’.
Now the authors are correct when they point out that colour, for example, is a completely psychological phenomenon – it only exists in some creature’s mind, and it varies from species to species – this is well known and well understood. We also know that it’s caused by reflected light, which can be scientifically explained by QED (quantum electrodynamics, which Richard Feynman shared credit for), and that the subjective experience of colour is a direct consequence of the frequency of the electromagnetic radiation. But the fact that colour is subjective doesn’t make the objects from which the effect arises subjective as well.
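To be clear about which side of that divide is objective, here is a rough sketch (my own illustration; the wavelength bands are approximate, and the hue names are precisely the subjective overlay being discussed):

    # Approximate visible-spectrum bands in nanometres: the wavelengths are
    # objective physics; the hue names only exist in some creature's mind.
    BANDS = [(380, 450, 'violet'), (450, 495, 'blue'), (495, 570, 'green'),
             (570, 590, 'yellow'), (590, 620, 'orange'), (620, 750, 'red')]

    def hue(wavelength_nm):
        for low, high, name in BANDS:
            if low <= wavelength_nm < high:
                return name
        return 'outside the visible spectrum'

    print(hue(532))  # 'green' (a typical laser-pointer wavelength)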
Regarding the other mathematical contribution to their thesis, the authors have created a mathematical model of consciousness, from which they derive the wave function of QM. I’m not a logician, so I can’t say one way or the other how valid this is. However, it should be pointed out that Erwin Schrödinger, who originally proposed the wave function in his famous eponymous equation, didn’t derive it from anything. So the authors claim they’ve done something that the original creator of the wave function couldn’t do himself. As Richard Feynman once said: ‘Schrodinger’s equation can’t be derived from anything we know.’ However, the authors claim it can be derived from consciousness. I’m sceptical.
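For reference, here is the equation in question, in its general time-dependent form, which Schrödinger postulated rather than derived:

    % The time-dependent Schrodinger equation: the wave function Psi evolves
    % under the Hamiltonian operator; postulated, not derived.
    \[
      i\hbar \, \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) \;=\; \hat{H} \, \Psi(\mathbf{r}, t)
    \]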
You may wonder what all this has to do with the title of this post. Well, in response to Objection 19, the authors propose to come up with a ‘new theory of evolution’ based on their theory of conscious agents. To quote: ‘When the new evolutionary theory is projected onto the spacetime perceptual interface of H. Sapiens we must get back the standard evolutionary theory.’ This means that the DNA, and the molecules that make up DNA, which allowed consciousness to evolve, are themselves dependent on said consciousness, so the ‘new theory of evolution’ must logically contradict the ‘standard theory of evolution’.
As part of their thesis, the authors make an analogy between a computer desktop and spacetime, only, the way they describe it, it appears to be more than an analogy to them.
Space and time are the desktop of our personal interface, and three-dimensional objects are icons on the desktop. Our interface gives the impression that it reveals true cause and effect… But this appearance of cause and effect is simply a useful fiction, just as it is for the icons on the computer desktop.
(The interface, to which they refer, is a ‘species-specific interface’, which means it’s a human consciousness interface. They don’t say if this interface applies to other sentient creatures, or just us.)
The issue of cause and effect being a ‘useful fiction’ was taken up in Objection 17 (the authors of the objections are not named), to which the authors of the theory responded thus:
Our views on causality are consistent with interpretations of quantum theory that abandon microphysical causality… The burden of proof is surely on one who would abandon microphysical causation but still cling to macrophysical causation.
I could respond to this challenge, but it’s not relevant to my argument. The point is that the authors obviously don’t ‘cling to macrophysical causation’, which I would contend creates a problem when discussing evolutionary theory. According to every discussion on biological evolution I’ve read, extant species are consequentially dependent on earlier species, which means there is a causal chain going back to the first eukaryota. If this causal chain is a ‘useful fiction’, then it is hard to see how any theory that excludes it could be called evolutionary. With or without this useful fiction, the authors’ ‘new theory’ turns evolution on its head, with conscious agents taking precedence over physical objects, including species, all of which are impermanent. In spite of this ontological difficulty, the authors believe that when they ‘project’ their ‘new theory’ onto the ‘species-specific interface’ of impermanent spacetime (which doesn’t exist unperceived), the old ‘standard theory of evolution’ will be found.
I’ve left a comment on the bottom of the web page (link given in intro above) which challenges this specific aspect of their theory (using different words). If I get a response I’ll update this post accordingly.
Sunday 6 November 2016
Dr Strange: a surprisingly philosophical movie
I have to admit I wouldn’t have gone to see this based on the trailer, as it just appeared to be a special-effects spectacular, which is what you expect from superhero movies. And it seemed very formulaic - an apprentice, a mentor, a villain who wants to destroy the world - you know the script. What changed my mind was a review by Stephen Romei in the Australian Weekend Review (29-30 Oct. 2016), who gave it 3.5 stars; re-reading his review, I notice it gives a lot of the plot away. I’ll try not to do that here, but I’m not promising.
Dr Stephen Strange is played by Benedict Cumberbatch, who is much better cast here than in The Imitation Game, which I thought was a travesty. As an aside, The Imitation Game was an insult to the real Alan Turing, but I don’t believe that was Cumberbatch’s fault. I blame the director, writers and producers, who, exploiting the audience’s ignorance, gave them the caricature of genius they believed the audience wanted to see.
Cumberbatch’s Dr Strange is a self-obsessed, egotistical, unapologetically self-promoting brain surgeon. He’s never known failure and that’s an important psychological point in my view. The first subliminal philosophical reference in this movie is the well-worn trope: the unexamined life is not worth living. This is pretty much the theme or premise of every story ever told. The point is that no one examines their life until they experience failure, and, of course, Strange faces failure of a catastrophic kind. Otherwise, there’d be no movie.
He then goes on a mystical journey, which many of us may have done at an intellectual level, but can only be done viscerally in the world of fiction. I should point out that I went through a prolonged ‘Eastern philosophy’ phase, which more or less followed on from the ‘Christian’ phase of my childhood. I’m now going through a mathematical phase, as anyone reading this blog could not have failed to notice.
Anyway, Strange’s journey is distinctly Eastern, which is the antithesis of his medical-science background. But he is introduced to an ‘astral’ or ‘spirit’ dimension, and there is a reference to the multiverse, which is a current scientific trope, if I may re-use that term in a different context. I don’t mind that ‘comic book’ movies allude to religious ideas or even that they mix them with science, because one can do that in fiction. I’ve done it myself. The multiverse is an allusion to everything we don’t know (even within science) and is the current bulwark against metaphysics. Employing it in a fantasy movie to enhance the fantasy element is just clever storytelling. It embodies the idea, still very current in the East, that science cannot tell us everything.
There are two mythological references in the movie, one of them biblical. At one point the villain, Kaecilius (played by Mads Mikkelsen), attempts to seduce Strange to the ‘dark side’, which is very reminiscent of Satan’s attempt to seduce Jesus in the desert. I’ve always liked that particular biblical story, because it represents the corruption of power and status over the need to serve a disenfranchised public. In other words, it is an appeal to ego over the need to subordinate one’s ego for a greater good.
One of the themes of the story is mortality and immortality; something I’ve explored in my own fiction, possibly more explicitly. We live in a time where, as Woody Allen once explained in literary terms, we ‘suspend disbelief’ and carry on as if we are going to live forever. In Western culture, we tend to avoid any reference to mortality, yet it is an intrinsic part of life. We all eventually get there but refuse to face it until forced to. This is actually addressed in the movie, quite unexpectedly, as we don’t expect lessons in philosophy from a superhero movie.
Last but not least, there is a subtle but clever allusion to Camus’ famous retelling of the Greek Sisyphus myth (look it up), not something your average cinema audience member would be expected to know. It is embedded in one of those plot devices that I love: where the hero uses an unexpected ‘twist’, both literally and figuratively, and where brain defeats overwhelming force.
Wednesday 14 September 2016
Penrose's 3 Worlds Philosophy
This is the not-so-well-known 3 worlds philosophy of Roger Penrose, who is a physicist, cosmologist, mathematician and author. I’ve depicted them pretty well as Penrose himself would, though his graphics (in his books) are far superior to mine (and they don’t run off the page). I know it doesn’t quite fit, but if I made it fit it wouldn’t be readable.
Penrose is best known for his books, The Emperor’s New Mind and The Road to Reality; the former being far more accessible than the latter. In fact, I’d recommend The Emperor’s New Mind to anyone who wants a readable book that introduces them to the esoteric world of physics without too many equations and with lots of exposition about things like relativity, quantum mechanics, thermodynamics and cosmology. The Road to Reality is for the really serious physics student, and I have to admit that it defeated me.
The controversial or contentious part of Penrose’s diagram is the ‘Platonic World’ (Mathematics) and its relationship to the other two. The ‘Physical World’ (Universe) and the ‘Mental World’ (Consciousness) are not the least bit contentious - you would think - as everyone reading this is obviously conscious and we all believe that we inhabit a physical universe (unless you are a solipsist). Solipsism, by the way, sounds nonsensical but is absolutely true when you are in a dream.
I’ve mentioned this triumvirate before in previous posts (without the diagram), but what prompted me to revisit it was the realisation that many people don’t appreciate the subtle yet significant difference between mathematical equations (like Pythagoras’s Theorem or Euler’s equation, for example) and physics equations (like Einstein’s E = mc² or Schrödinger’s equation). I’ll return to this specific point later, but first I should explain what the arrows signify in the graphic.
I deliberately placed the Physical World at the top of the diagram, because that is the intuitive starting point. The arrows signify that a very small part of the Universe has created the whole of consciousness (Penrose allows that it might not be all of consciousness, but I would contend that it is). Then a very small part of Consciousness has produced the whole of mathematics (that we know about) and here I would concede that we haven’t produced it all because there is still more to learn.
By analogy, according to the diagram, a small part of the Platonic (mathematical) world ‘created’ the physical universe. Whilst this is implied, I don’t believe it’s true and I’m not sure Penrose believes it’s true either. Numbers and equations, of themselves, don’t create anything. However, the Universe, to all appearances and scientific investigations, is a consequence of ‘natural’ laws, which are all mathematical in principle if not actual fact. In other words, the Universe obeys mathematical rules or laws to an extraordinarily accurate degree that appear to underpin its entire evolution and even its birth. There is a good argument that these laws pre-exist the Universe (including critical constants of nature) and therefore that mathematics pre-existed the Universe, hence its place in the diagram.
So there are at least two ways of looking at the diagram: one where the Universe comes first and Mathematics comes last, or alternatively, where Mathematics comes first and Consciousness comes last; the latter being more contentious.
I should point out that, for many philosophers and scientists, this entire symbolic representation is misleading. For them, there are not even 2 worlds, let alone 3. They would argue that consciousness should not be considered separately from the physical world; it is simply a manifestation of the physical world, and eventually we will create it artificially. I am not so sure about that last point, but, certainly, most scientists seem to be of the view that artificial intelligence (AI) is inevitable, and that if it’s indistinguishable from human intelligence then it will be conscious. In fact, I’ve read arguments (in New Scientist) that because we can’t tell if someone else has consciousness like we do (notice that I sabotaged the argument by using ‘we’), we won’t know if AI has consciousness and therefore will have to assume it does.
But aside from that whole other argument, consciousness plays a very significant role, independently of the Universe itself, in providing reality. Now bear with me, because I contend that consciousness provides an answer to that oft-asked, fundamental existential question: why is there something rather than nothing? Without consciousness there might as well be nothing. Think about it: before you were born there was nothing, and after you die there will be nothing. Without consciousness, there is no reality (at least, for you).
Also, without consciousness, the concepts of past, present and future have no relevance. In fact, it’s possible that consciousness is the only thing in the Universe that exists in a continuous present, which means that without memory (short term or long term) we wouldn’t even know we were conscious. I’ve made this point in another post (What is now?) where I discuss the possibility that quantum mechanics is in the future and so-called Classical physics is always in the past. I elaborate on a quote by Nobel laureate, William Lawrence Bragg, who effectively says just that.
Not to get too far off the track, I think consciousness deserves its ‘special place’ in the scheme of things, even though I concede that many would disagree.
So what about mathematics: does it also deserve a special place in the scheme of things? Most would say no, but again, I would say yes. Let me return briefly to the point I alluded to earlier: that mathematical equations have a different status to physics equations. Physics equations, like E = mc², only have meaning in reference to the physical world, whereas mathematical equations, like Euler’s equation, e^(ix) = cos x + i sin x, or his more famous identity, e^(iπ) + 1 = 0, have a meaning that’s independent of the Universe. In other words, Euler’s identity is an expression of a mathematical relationship that would still be true even if the Universe didn’t exist.
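As a quick sanity check, both statements can be verified numerically with Python’s standard cmath module (a sketch; the point being that the relationship holds with no reference to the physical world):

    import cmath, math

    # Euler's identity: e^(i*pi) + 1 should equal 0
    print(cmath.exp(1j * math.pi) + 1)  # ~1.2e-16j, i.e. zero to floating-point error

    # The general formula e^(ix) = cos x + i sin x, for an arbitrary x
    x = 0.7
    print(cmath.exp(1j * x))                  # (0.7648...+0.6442...j)
    print(complex(math.cos(x), math.sin(x)))  # the same value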
Again, not everyone agrees, including Stephen Wolfram, who created Mathematica, so he is certainly much cleverer than me. Wolfram argues, in an interview (see below), that mathematics is a cultural artefact, and I’ve come across that argument before. Wolfram has also suggested, if my memory serves me correctly, that the Universe could be all algorithms, which would make mathematics unnecessary, but I can’t see how you could have one without the other. Gregory Chaitin quotes Wolfram (in Thinking about Gödel and Turing) as saying that the Universe could be pseudo-random, meaning that it only appears random, which would be consistent with the view that the Universe is all algorithms. Personally, I think he’s wrong on both counts: the Universe doesn’t run on algorithms and it is genuinely random, which I’ve argued elsewhere.
The problem I have with mathematics being a cultural artefact is that the more you investigate it, the more it takes on a life of its own, metaphorically speaking. Besides, we know from Gödel’s Incompleteness Theorem that mathematics will always contain truths that we cannot prove, no matter how much we have proved already, which implies that mathematics is a never-ending endeavour. And that implies that there must exist mathematical ‘truths’ that we are yet to discover and some that we will never know.
Gödel’s Theorem seems to apply in practice as well as in theory, when one considers that famous conjectures (like Fermat’s Last Theorem and Riemann’s Hypothesis) take centuries to solve, because the required mathematics wasn’t known at the time they were proposed. For example, Riemann first presented his conjecture in 1859 (the same year Darwin published The Origin of Species), and it has since found connections with Hermitian matrices, which are used in quantum mechanics. Riemann’s Hypothesis is the most famous unsolved mathematical problem at the time of writing.
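For the curious, the Hypothesis is easy to probe numerically, though obviously impossible to prove that way (a sketch assuming the third-party mpmath library):

    # The first few nontrivial zeros of the Riemann zeta function all have
    # real part 1/2, exactly as the Hypothesis asserts; checking a handful
    # of zeros proves nothing, of course.
    from mpmath import zeta, zetazero

    for n in range(1, 4):
        rho = zetazero(n)           # nth nontrivial zero, e.g. 0.5 + 14.1347...i
        print(rho, abs(zeta(rho)))  # |zeta(rho)| is ~0 to working precision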
The connection between mathematics and humanity is that it is an epistemological bridge between our intellect and the physical world at all scales. The connection between mathematics and the Universe is more direct. There are dimensionless numbers, like the fine-structure constant, the mass ratio between protons and neutrons and the ratio of matter to anti-matter, all of which affect the Universe’s fundamental capacity to produce sentient life. I wrote about this not so long ago. There is the inverse square law, which is a mathematical consequence of the Universe existing in 3 spatial dimensions, and which allows for extraordinarily stable orbits over astronomical time frames (see the short derivation below). Then there is quantum mechanics, which appears to underpin all of physical reality and can only be revealed in the language of mathematics.
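The inverse square part is one line of geometry: a conserved flux from a point source spreads over a sphere whose surface area grows as the square of the radius.

    % Flux conservation in three spatial dimensions gives the inverse square law:
    \[
      F(r) \;=\; \frac{L}{4\pi r^{2}},
      \qquad\text{and in general, in } n \text{ spatial dimensions, } F(r) \propto r^{-(n-1)}.
    \]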
Footnote 1: Stephen Wolfram's argument that mathematics is a cultural artefact and that there is no Platonic realm. Curiously, he uses the same examples I do to come up with a counter-argument to mine. I mostly agree with what he says; we just start and arrive at different philosophical positions.
Footnote 2: This is Roger Penrose being interviewed by the same person on the same topic, and giving the antithetical argument to Wolfram's. You can see that he and I are pretty well in agreement on this subject.
Footnote 3: This is Penrose's own take on his 3 worlds.
Sunday 28 August 2016
The relationship between science and philosophy
I’ve written on this before, but recent reading has made me revisit it, because I think the relationship is a lot closer and more interrelated than people realise, especially among scientists. I’m referring to the fact that more than one ‘famous’ scientist has been dismissive of philosophy and its contribution to our knowledge. I’m thinking of Richard Dawkins, Stephen Hawking, Peter Atkins and, of course, Richard Feynman, whom I particularly admire.
In the Western epistemic canon, if I can use that term, philosophy and science have a common origin, as we all know, with the Ancient Greeks. There was a time when they were inseparable, and certainly up to Newton’s time, science was considered, if not actually called, ‘natural philosophy’. In some circles, it still is. This is to distinguish it from metaphysics, and I think that division is still relevant, though some may argue that metaphysics has no relevance in the modern world.
In the Platonic tradition, ‘Metaphysics… holds that what exists lies beyond experience’ (my on-board computer dictionary’s definition), which, oddly enough, would include mathematics. But in the Kantian and Humean tradition, ‘…objects of experience constitute the only reality’ (from the same source). I would suggest that this difference still exists in practice if not in theory. In other words, science is based on empirical evidence, though mathematics increasingly plays a role. Mathematics, by the way, does not constitute empirical evidence, but it constitutes a source of ‘truth’ that can’t be ignored in any assessment of a scientific theory.
I find I’m already heading down a path I didn’t intend to follow, but maybe I can join it to the one I intended to follow further down the track. So let me backtrack and start again.
Most scientific theories start off in the realm of philosophy, though they may be informed by limited physical evidence. Think, for example, of Darwin’s theory of evolution by natural selection. Both he and Alfred Wallace (independently) came to the same conclusion when they travelled to little-known parts of the world and saw creatures that were not only exotic but strange and unexpected. Most significantly, they realised how geography and relative isolation drove species’ diversity. This led them both to develop an unpopular and unproven philosophy called evolution. Evidence came much later in the form of fossils, genetics and, eventually, DNA, which is the clincher. Evidence can turn philosophy into science and theories into facts.
As anyone who has any exposure to American culture knows, the philosophical side of this debate still rages. And, to some extent, this is the very reason that some scientists would argue that philosophy is irrelevant or, at the very least, subordinate to science. This point alone is worth elaborating on. There is a dialectic between science and philosophy, and which discipline dominates, for want of a better term, simply depends on our level of knowledge or, more importantly perhaps, our level of ignorance. By dialectic I mean a to-ing and fro-ing, so that one informs the other in a continual and constructive dialogue, out of which a theory evolves.
Going back to the example of the theory of evolution: after 150 years, it is both more fraught with difficulties and more cemented in evidence than either Darwin or Wallace could have imagined. In other words, and this is true in every branch of science, the more we learn about something, the more mysteries we uncover. For example, DNA reveals in extraordinary relief how every species is related and how all life on Earth had a common origin, yet the origin and evolution of DNA itself, whilst not doubted, poses mysteries of its own. And while mysteries will always exist, anti-science proponents will find a foothold to sow scepticism and disbelief.
But my point is that the philosophy of evolutionary biology is strengthened by science to the extent that it is considered a fact by everyone except those who argue that the Bible has more credibility than science. Again, I’m getting off-track, but it illustrates why scientists have a tendency to demote philosophy, when it is used to promote ignorance over what is already known and accepted in mainstream science.
On a completely different tack, it’s well known that Einstein held a deep scepticism about the validity and long-term scientific legacy of quantum mechanics. What is less well known is the philosophical belief in determinism that led him to be so intractable in his dissent. Einstein’s special theory of relativity led to some counter-intuitive ideas about time. Specifically, simultaneity is subjective, not objective, if events are spatially separated (refer to my post on Now). Einstein came to the philosophical conclusion that the Universe is deterministic, where space and time are no longer separate but intrinsically combined in spacetime. Mathematically, this is resolved by treating time as a fourth dimension, and, in Einstein’s universe, the future is just as fixed as the past, in the same way that a spatial dimension is fixed. This is a philosophical viewpoint that arose from his special theory of relativity and informed his worldview to the point that it contradicted the inherent philosophy of quantum mechanics, which tells us that, at a fundamental level, everything is random.
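For reference, this is how special relativity welds the two together: the invariant interval treats time as a fourth coordinate alongside the three spatial ones, distinguished only by its sign.

    % The invariant spacetime interval of special relativity:
    \[
      ds^{2} \;=\; -c^{2}\,dt^{2} + dx^{2} + dy^{2} + dz^{2}
    \]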
And this brings me full circle, because it was reading about the current, increasingly popular, many-worlds interpretation of quantum mechanics that led me to contemplate the metaphorically and unavoidably incestuous relationship between philosophy and science. In particular, adherents to this ‘theory’ have to contend with the belief that every action they take in this universe affects their counterparts in parallel universes. I’ve expressed my dissent from the many-worlds interpretation of quantum mechanics elsewhere, so I won’t discuss it here. However, I would like to address this specific consequence of this specific philosophy. Your stream of consciousness is really the only thing that gives you a reality. So, even if there is an infinite and continual branching of your current universe into parallel universes, your stream of consciousness only follows one, and axiomatically that’s the only reality you know.
And now, to rejoin the path that led me astray, let’s talk about mathematics. Mathematics has followed its own historical path in Western thought, alongside science and philosophy, with its own origins in Plato’s Academy. In fact, Plato adopted the curriculum, or quadrivium, of arithmetic, geometry, astronomy and music from Pythagoras’s best student, Archytas (after specifically seeking him out). Mathematics is obviously the common denominator in all of these.
Mathematics also has philosophical ‘schools’, which I’ve written about elsewhere, so I won’t dwell on them here. Personally, I think mathematics contains truths that transcend humanity and the universe itself, but it’s the pervasive and seemingly ineluctable intrusion of mathematics into science that has given it its special epistemological status. String Theory, or M Theory, is the latest and most popular contender for a so-called Theory of Everything (TOE), yet it’s more philosophy than scientific theory. It’s only mathematics that gives it epistemic status, and it’s arguably the best example of the dialectic I was talking about. I’ve written in another post (based on Noson Yanofsky’s excellent book) that we will never know everything there is to know in either science or mathematics. This means that our endeavours in attempting to understand the Universe (or multiverse) will be never-ending, and thus the dialectic between science and philosophy will also be never-ending.