I’ve written about writing a few times now, including Writing’s 3 Essential Skills (Jul. 2013) and How to Create an Imaginary, Believable World (Aug. 2010), the latter being a particularly popular post. I also taught a creative writing course in 2009 and have given a couple of talks on the subject, but I’ve never deliberately set out to give advice on how to make a story read like a movie in your head.
This post arose from a conversation in which I realised I had effectively taught myself how to do this. It’s not something I deliberately set out to do, but I believe I achieved it inadvertently, and comments from some readers appear to confirm it. At YABooksCentral, a teenage reviewer made the point specifically, and many others, including a filmmaker, have said that my book (Elvene) would make a good movie. Many have said they ‘could see everything’ in their mind’s eye.
Very early in my writing career (though it’s never been my day job) I took some screenwriting courses and even wrote a screenplay. This subconsciously influenced my prose writing in ways I never foresaw, which I will now explain. The formatting of a screenplay doesn’t lend itself to fluidity, with a separate heading for every scene and dialogue in blocks interspersed with occasional brief descriptive passages. Yet a well-written screenplay lets you see the movie in your mind’s eye, and you should write it as you’d imagine it appearing on a screen. However, contrary to what you might think, this is not the way to write a novel. Do not write a novel as if watching a movie. Have I confused you? Well, bear this in mind and hopefully it will all make sense before the end.
Significantly, a screenplay needs to be written in ‘real time’, which means descriptions are minimalistic and exposition non-existent (although screenwriters routinely smuggle exposition into their dialogue). Also, all the characterisation is in the dialogue and the action – you don’t need physical descriptions of a character, including their attire, unless they’re significant; just gender, approximate age and ethnicity (if it’s important). It was this minimalistic approach that I subconsciously imported into my prose fiction.
There is one major difference between writing a screenplay and writing a novel, and the two consequently require different states of mind. In writing a screenplay you can only write what is seen and heard on the screen, whereas a novel can be written entirely (though not necessarily) from inside a character’s head. I hope this clarifies the point I made earlier. Now, as someone once pointed out to me (fellow blogger, Eli Horowitz), movies can take you into a character’s head through voiceover, flashbacks and dream sequences. But, even so, the screenplay would only record what is seen and heard on the screen, and these are exceptions, not the norm. In a novel, getting inside a character’s head is the norm.
To finally address the question implicit in my heading, there are really only two ‘tricks’, for want of a better term: write the story in real time and always from some character’s point of view. Even description can be given through a character’s eyes, so that the reader subconsciously becomes an actor. By inhabiting a character’s mind, the reader becomes fully immersed in the story.
Now I need to say something about scenes, because, contrary to popular belief, scenes are the smallest component of a story, not words or sentences or paragraphs. It’s best to think of the words on the page like the notes on a musical score. When you listen to a piece of music, the written score is irrelevant, and, even if you read the score, you wouldn’t hear the music anyway (unless, perhaps, you’re a musician or a composer). Similarly, the story takes place in the reader’s mind, where the words on the page conjure up images and emotions without conscious effort.
In a screenplay a scene has a specific definition: it is delimited by a change in location or time. I use the same definition when writing prose. There are subtle methods for expanding and contracting time psychologically in a movie, and these can also be applied to prose fiction. I’ve made the point before that the language of story is the language of dreams, and in dreams, as in stories, sudden changes in location and time are not aberrant. In fact, I would argue that if we didn’t dream, stories wouldn’t work, because our minds would continuously and subconsciously struggle with the logic.
Tuesday, 5 January 2016
Free will revisited
I’ve written quite a lot on this in the past, so one may wonder what I could add.
I’ve just read Mark Balaguer’s book, Free Will, which I won when Philosophy Now published my answer to their Question of the Month in their last issue (No 111, December 2015). It’s the fourth time I’ve won a book from them (out of 5 submissions).
It’s a well-written book, not overly long or over-technical in a philosophical sense, so very readable whilst being well argued. Balaguer makes it clear from the outset where he stands on this issue by continually referring to those who argue against free will as ‘the enemies of free will’. Whilst this makes him sound combative, the tone of his arguments is measured and not antagonistic. In his conclusion, he makes the important distinction that in ‘blocking’ arguments against free will, he’s not proving that free will exists.
He makes a distinction between what he calls Hume-style free will and non-predetermined free will, a term I believe he coined himself. Hume-style free will is otherwise known as ‘compatibilism’, which means it’s compatible with determinism. In other words, even if everything in the world is deterministic from the Big Bang onwards, it doesn’t rule out your having free will. I know it sounds like a contradiction, but I think it’s to do with the fact that a completely deterministic universe doesn’t conflict with the subjective sense we all have of having free will. As I’ve expressed in numerous posts on this blog, I think there is ample evidence that the completely deterministic universe is a furphy, so compatibilism is not relevant as far as I’m concerned.
Balaguer also coins another term, ‘torn decision’, which he effectively uses as a litmus test for free will. In a glossary in the back he gives a definition which I’ve truncated:
A torn decision is a conscious decision in which you have multiple options and you’re torn as to which option is best.
He gives the example of choosing between chocolate or strawberry flavoured ice cream and not making a decision until you’re forced to, so you make it while you’re still ‘torn’. This is the example he keeps coming back to throughout the book.
In recent times, experiments in neuroscience have provided what some people believe are ‘slam-dunk’ arguments against free will, because scientists have been able to predict with 60% accuracy what decision a subject will make, seconds before they make it, simply by measuring neuron activity in certain parts of the brain. Balaguer provides the most cogent arguments I’ve come across challenging these contentions. In particular, he addresses the Haynes studies, which showed neuron activity up to 10 seconds prior to the conscious decision. Balaguer points out that the neuron activity in these studies occurs in the PC and BA10 areas of the brain, which are associated with the ‘generation of plans’ and the ‘storage of plans’ respectively. He makes the point (in greater elaboration than I do here) that we should not be surprised if we subconsciously use the ‘planning’ areas of the brain whilst trying to make ‘torn decisions’. The other experiments, known as the Libet studies (dating from the 1960s), showed neuron activity half a second prior to conscious decision-making; this activity was termed the ‘readiness potential’. Balaguer argues that there is ‘no evidence’ that the readiness potential causes the decision. Even so, it could be argued that, as in the Haynes studies, it is subconscious activity happening prior to the conscious decision.
It is well known (as Balaguer explicates) that much of our thinking is subconscious. We all have the experience of solving a problem subconsciously, so that the answer comes to us spontaneously when we least expect it. And anyone who has pursued some artistic endeavour (like writing fiction) knows that a lot of it is subconscious, so that the story and its characters appear on the page with seemingly divine-like spontaneity.
Backtracking to so-called Hume-style free will, it does have relevance if one considers that our ‘wants’ - what we wish to do - are determined by our desires and needs. We assume that most of the animal kingdom behaves on this principle. Few people (including Balaguer) discuss other sentient creatures when they discuss free will, yet I’ve long believed that consciousness and free will go hand in hand. In other words, I really can’t see the point of consciousness without free will. If everything is determined subconsciously, without the need to think, then why have we evolved to think?
But humans take thinking to a new level compared to every other species on the planet, so that we introspect and cogitate and reason and internally debate our way to many a decision.
Back in February 2009, I reviewed Douglas Hofstadter’s Pulitzer Prize-winning book, Gödel, Escher, Bach, where, among other topics, I discussed consciousness, as that’s one of the themes of his book. Hofstadter coins the term ‘strange loop’. This is what I wrote back then:
By strange loop, Hofstadter means that we can effectively look at all the levels of our thinking except the ground level, which is our neurons. In between we have symbols, which is language, which we can discuss and analyse in a dispassionate way, just like I’m doing now. I can talk about my own thoughts and ideas as if they weren’t mine at all. Consciousness, in Hofstadter’s model (for want of a better word) is the top level, and neurons are the hardware level. In between we have the software (symbols) which is effectively language.
I was quick to point out that ‘software’ in this context is a metaphor – I don’t believe that language is really software, even though we ‘download’ it from generation to generation and it is indispensable to human reasoning, which we call thinking.
The point I’d make is that this is a two-way process: the neurons are essential to thoughts, yet our thoughts, I expect, can affect neurons in turn. I believe there is evidence that we can and do rewire our brains simply by exercising our mental faculties, even in later years, and surely exercising them consciously is the very definition of will.
Tuesday, 15 December 2015
The battle for the future of Islam
There are many works of fiction featuring battles between ‘Good’ and ‘Evil’, yet it would not be distorting the truth to say that we are witnessing one now, though I think it is largely misconstrued by those of us who are on the sidelines. We see it as a conflict between Islam and the West, when it’s actually within Islam itself. This came home to me when I recently saw the biographical movie, He Named Me Malala (pronounced Ma-la-li, by the way).
Malala is well known as the 14-year-old Pakistani schoolgirl shot in the head on a school bus by the Taliban for her outspoken views on education for girls in Pakistan. Now 18 years old (when the film was made), she has since won the Nobel Peace Prize and spoken at the United Nations, as well as having audiences with world leaders like Barack Obama. In a recent interview with Emma Watson (on Emma’s Facebook page) she appeared much wiser than her years. In the movie, amongst her family, she behaves like an ordinary teenager with ‘crushes’ on famous sports stars. In effect, her personal battle with the Taliban represents in microcosm a much wider battle between past and future that is occurring on the world stage within Islam: a battle for the hearts and minds of Muslims all over the world.
IS or ISIS or Daesh has arisen out of conflicts between Shiites and Sunnis in both Iraq and Syria, but the declaration of a Caliphate has given it a much more serious, even sinister, connotation, because its followers believe they are fulfilling a prophecy which will only be resolved with the biblical end of the world. I’m not an Islamic scholar, so I’m quoting from Audrey Borowski, currently doing a PhD at the University of London, who holds an MSt (Masters degree) in Islamic Studies from Oxford University. She asserts: ‘…one of the Prophet Muhammad’s earliest hadith (sayings) locates the fateful showdown between Christians and Muslims that heralds the apocalypse in the city of Dabiq in Syria.’
“The Hour will not be established until the Romans (Christians) land at Dabiq. Then an army from Medina of the best people on the earth at that time… will fight them.”
She wrote an article of some length in Philosophy Now (Issue 111, Dec. 2015/Jan. 2016) titled Al Qaeda and ISIS: From Revolution to Apocalypse.
The point is that if someone believes they are in a fight for the end of the world, then destroying entire populations and cities is not off the table. They could resort to any tactic, like contaminating the water supplies of entire cities or destroying food crops on a large scale. As I suggested in the introduction, this apocalyptic ideology represents the classic contest between good and evil we find in fiction. From where I (and most people reading this blog) stand, anyone intent on destroying civilization as we know it would be considered the ultimate evil.
What is most difficult for us to comprehend is that the perpetrators, the people ‘on the other side’ would see the roles reversed. Earlier this year (April 2015), I wrote a post titled Morality is totally in the eye of the beholder, where I explained how two different cultures in the same country (India) could have completely opposing views concerning a crime against a young woman, who was raped and murdered on a bus returning from seeing a movie with her boyfriend. One view was that the girl was the victim of a crime and the other view was that the girl was responsible for her own fate.
Many people have trouble believing that otherwise ordinary people who commit atrocities would not see themselves as evil. We have an enormous capacity to justify to ourselves the most heinous acts, and nowhere is this more evident than when one believes one is performing the ‘Will of God’. This is certainly the case with IS and their followers.
Unfortunately, this has led to a backlash in the West against all Muslims. In particular, we see both in social media and mainstream media, and even amongst mainstream politicians, a sentiment that Islam is fundamentally flawed and needs to be reformed. It seems to me that they are unaware that there is already a battle happening within Islam, where militant bodies like IS and Boko Haram and the Taliban represent the worst and a young schoolgirl from Pakistan represents the best.
Ayaan Hirsi Ali (whom I wrote about in March 2011) said, when she was in Australia many years ago, that Islam was not compatible with a secular society, which is certainly true if Islamists wish to establish a religious-based government. There is a group, Hizb ut-Tahrir, which is banned in most Western countries, but not in the UK or Australia, whose stated aim is to form a caliphate and whose political agenda, including the introduction of Sharia law, would clearly conflict with Australian law. But the truth is that there are many Muslims living active and productive lives in Australia whilst still practising their religion. A secular society is not an atheistic society, yet by definition it is not dependent on any religion. In other words, there is room for variety in religious practice, and that is what we see. Extremists of any religious persuasion are generally not well received in a pluralist, multicultural society, yet it is that fear which is driving the debate in many secular societies.
Over a year ago (Aug 2014) I wrote a post titled Don’t judge all Muslims the same, based on another article I read in Philosophy Now (Issue 104, May/Jun 2014) by Terri Murray (Master of Theology, Heythrop College, London), who made a very salient point differentiating cultural values and ideals from individual ones. In particular, she asserted that an individual’s rights override the so-called rights of a culture or a community. Therefore, misogynistic practices like female genital mutilation, honour killings and child marriage, all of which are illegal in Australia, are abuses of individual rights that may be condoned, even considered normal practice, in some cultures.
Getting back to my original subject matter, like the case of the Indian girl (a medical graduate) who was murdered for going on a date, this really is a battle between past and future. IS and the Taliban and their variant Islamic ideologies represent a desire to regain a past that has no relevance in the 21st century – it’s mediaeval, not only in concept but also in practice. One of the consequences of the Internet is that it has become a vehicle for both sides. So young women in far-off countries are learning that there is another world where education can lead to a better life. And this is the key: the education of women, as Malala has brought to the world’s attention, is the only true way forward. It’s curious that women are what these regimes seem to fear most, including IS, whose fighters’ greatest fear is reportedly to be killed by a female Kurdish warrior, because then they won’t get to Paradise.
Tuesday, 1 December 2015
Why narcissists are a danger to themselves and others
I expect everyone has met a narcissist, though, as with all personality disorders, there are degrees of severity, from the generally harmless egotistical know-it-all to the megalomaniac who takes control of an entire nation. In between those extremes is the person who somehow self-destructs while claiming it’s everyone else’s fault. They’re the ones who are captain of the ship and totally in control, even when it runs aground, at which point they suddenly claim it’s no longer their fault. I’m talking metaphorically, but this happened quite literally and spectacularly a couple of years back, as most of you will remember.
The major problem with narcissists is not their self-aggrandisement and over-inflated opinion of their own worth, but their distorted view of reality.
Narcissists have a tendency to self-destruct, not on purpose, but because their view of reality, based on their overblown sense of self-justification, becomes so distorted that they lose perspective and then control. Everyone around them can usually see the truth, but is generally powerless to intervene.
They are particularly disastrous in politics, but are likely to rise to power when things are going badly, because they are charismatic and their self-belief becomes contagious. Someone said (I don’t know who) that when things are going badly society turns on itself – they were referring to the European witch hunts, which coincided with economic and environmental tribulations. The recent GFC created ripe conditions for charismatic leaders to feed a population’s paranoia and promise miracle solutions with no basis in rationality. Look at what happened in Europe following the Great Depression of the 20th century: World War 2. And who started it? Probably the most famous narcissist in recent history. The key element such leaders have in common with the aforementioned witch-hunters is that they can find someone to blame and, frighteningly, they are believed.
Narcissists make excellent villains, as I’ve demonstrated in my own fiction. But we must be careful of whom we demonise, lest we become as spiteful and destructive as those we wish not to emulate. Seriously, we should not take them seriously; then all their self-importance and self-aggrandisement becomes comical. Unfortunately, they tend to divide society between those who see themselves as victims and those who see the purported culprits as the victims. In other words, they divide nations when they should be uniting them.
But there are exceptions. Having read Steve Jobs’ biography (by Walter Isaacson) I would say he had narcissistic tendencies, yet he was eminently successful. Many people have commented on his ‘reality-distortion field’, which I’ve already argued is a narcissistic trait, and he could be very egotistical at times, according to anecdotal evidence. Yet he could form deep relationships despite being very contrary in his dealings with his colleagues – building them up one moment and tearing them down the next. But Jobs was driven to strive for perfection, both aesthetically and functionally, and he sought out people who had the same aspiration. He was, of course, extraordinarily charismatic, intelligent and somewhat eccentric. He was a Buddhist, which may have tempered his narcissistic tendencies; but I’m just speculating – I never met him or worked with him – I just used and admired his products like many others. Anyway, I would cite Jobs as an example of a narcissist who broke the mould – he didn’t self-destruct, quite the opposite, in fact.
Addendum: When I wrote this I had recently read Isaacson's biography of Steve Jobs, but I've since seen a documentary and he came perilously close to self-destruction. He was called before a Senate Committee under charges of fraud. He was giving his employees backdated shares (I think that was the charge, from memory). Anyway, according to the documentary, he only avoided prison because it would have destroyed the share price of Apple, which was the biggest company on the share market at the time. I don't know how true this is, but it rings true.
Tuesday, 24 November 2015
The Centenary of Einstein’s General Theory of Relativity
This month (November 2015) marks 100 years since Albert Einstein published his milestone paper on the General Theory of Relativity, which not only eclipsed Newton’s equally revolutionary Theory of Universal Gravitation, but is still the cornerstone of every cosmological theory that has been developed and disseminated since.
It needs to be pointed out that Einstein’s ‘annus mirabilis’ (miraculous year), as it’s been called, occurred 10 years earlier, in 1905, when he published three groundbreaking papers that elevated him from a patent clerk in Bern to a candidate for the Nobel Prize (eventually realised, of course). The three papers were his Special Theory of Relativity, his explanation of the photoelectric effect using the concept of light quanta (later dubbed photons), and a statistical analysis of Brownian motion, which effectively proved that molecules made of atoms really exist and were not just a convenient theoretical concept.
Given the anniversary, it seemed appropriate that I should write something on the topic, despite my limited knowledge and despite the plethora of books that have been published to recognise the feat. The best I’ve read is The Road to Relativity: The History and Meaning of Einstein’s “The Foundation of General Relativity” (the original title of his paper) by Hanoch Gutfreund and Jürgen Renn. They have managed to include an annotated copy of Einstein’s original handwritten manuscript with a page-by-page exposition. But more than that, they take us on Einstein’s mental journey and, in particular, show how he found the mathematical language to portray the intuitive ideas in his head while working within the constraints he believed were necessary for the theory to work.
The constraints were not inconsiderable and include: the equivalence of inertial and gravitational mass; the conservation of energy and momentum under transformation between frames of reference both in rotational and linear motion; and the ability to reduce his theory mathematically to Newton’s theory when relativistic effects were negligible.
Einstein’s epiphany, which led him down the particular path he took, was the realisation that one experiences no force when in free fall, contrary to Newton’s theory and contrary to our belief that gravity is a force. Free fall subjectively feels no different from being in orbit around a planet. The aptly named ‘vomit comet’ is an aeroplane that goes into free fall in order to create the momentary sense of weightlessness that one would experience in space.
Einstein learnt from his study of Maxwell’s equations for electromagnetic radiation that mathematics could sometimes provide a counter-intuitive insight, like the constant speed of light.
In fact, Einstein had to learn mathematics that was new to him, and he engaged the help of his close friend, Marcel Grossmann, who led him through the technical travails of tensor calculus and Riemannian geometry. It would seem, from what I can understand of his mental journey, that it was the mathematics, as much as any other insight, that led Einstein to realise that spacetime is curved and not Euclidean as we all generally believe. To quote Gutfreund and Renn:
[Einstein] realised that the four-dimensional spacetime of general relativity no longer fitted the framework of Euclidean geometry… The geometrization of general relativity and the understanding of gravity as being due to the curvature of spacetime is a result of the further development and not a presupposition of Einstein’s formulation of the theory.
By Euclidean, one means space is flat and light travels in perfectly straight lines. One of the confirmations of Einstein’s theory was his prediction that light passing close to the Sun would be literally bent, so that a background star would appear to shift position when the Sun passed close to its line of sight as seen from Earth. This could only be observed during an eclipse and was duly confirmed by Arthur Eddington in 1919 on the island of Principe, off the west coast of Africa.
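For what it’s worth, the standard general relativistic result for light grazing the Sun’s limb (the textbook value, not a figure I’ve taken from Gutfreund and Renn) is:

deflection angle = 4GM/(c²R) ≈ 1.75 arcseconds,

where M and R are the Sun’s mass and radius. This is twice what a naive Newtonian calculation gives, and it was this doubled deflection that Eddington’s measurements supported.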
Einstein’s formulation led him to postulate that it’s the geometry of space that gives us gravity, and that the geometry, which is curved, is caused by massive objects. In other words, it’s mass that curves space and it’s the curvature of space that causes mass to move, as John Wheeler famously and succinctly put it.
It may sound back-to-front, but, for me, Einstein’s Special Theory of Relativity only makes sense in the context of his General Theory, even though they were formulated in the reverse order. To understand what I’m talking about, I need to explain geodesics.
When you fly long distance on a plane, the path projected onto a flat map looks curved. You may have noticed this when they show the path on a screen in the cabin while you’re in flight. The point is that when you fly long distance you are travelling over a curved surface, because, obviously, the Earth is a sphere, and the shortest distance between two points (cities) lies on what’s called a great circle. A great circle is the largest possible circle on the sphere that passes through both points. Now, I know that sounds paradoxical, but the largest circle provides the shortest distance one can travel over the surface (we are not talking about tunnels), and there is only one such circle, so there is one shortest path. This shortest path is called the geodesic connecting those two points.
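To make the geodesic idea concrete, here’s a minimal sketch in Python (the city coordinates are rough values I’ve assumed for illustration, so treat the output as approximate) that computes the great-circle distance between two points on the Earth’s surface using the haversine formula:

import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    # Convert latitudes and longitudes from degrees to radians
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine formula gives the central angle subtended by the two points
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    central_angle = 2 * math.asin(math.sqrt(a))
    # The geodesic length is the Earth's radius times that central angle
    return radius_km * central_angle

# Rough coordinates for Melbourne and London (assumed for illustration)
print(round(great_circle_km(-37.8, 145.0, 51.5, -0.1)))  # roughly 16,900 km

Any other route over the surface between the same two cities comes out longer than this number, which is the sense in which the great circle is the geodesic.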
A geodesic in gravitation is the equivalent shortest path through spacetime between two points, and that is what one follows when one is in free fall. At the risk of information overload, I’m going to introduce another concept which is essential for understanding the physics of a geodesic in gravity.
One of the most fundamental principles discovered in physics is the principle of least action (formulated mathematically as a Lagrangian, which is the difference between kinetic and potential energy). The most commonly experienced example would be the refraction of light through glass or water, because light travels at different velocities in air, water and glass (slower through glass or water than air). The extremely gifted 17th century amateur mathematician Pierre de Fermat (actually a lawyer) conjectured that light travels the path of least time (which, when the medium changes, is not the path of shortest distance), and the refractive index (Snell’s law) can be deduced mathematically from this principle. In the 20th century, Richard Feynman developed his path integral method of quantum mechanics from the least action principle and, in effect, confirmed Fermat’s principle.
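To sketch the standard textbook derivation (my summary, not Fermat’s original wording): suppose light leaves a point in air at height a above a water surface and arrives at a point at depth b below it, crossing the surface at a horizontal distance x from the start, with d the total horizontal separation. The travel time is

T(x) = √(a² + x²)/v1 + √(b² + (d − x)²)/v2,

where v1 and v2 are the speeds of light in air and water. Setting dT/dx = 0 to find the least-time path gives

sin θ1/v1 = sin θ2/v2, or equivalently n1 sin θ1 = n2 sin θ2 with n = c/v,

which is Snell’s law of refraction.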
Now, when one applies the principle of least action to a projectile in a gravitational field (like a thrown ball), one finds that it too takes the shortest path (the geodesic), but paradoxically this is the path of longest relativistic time (not unlike the paradox of the largest circle described earlier).
Richard Feynman gives a worked example in his excellent book, Six Not-So-Easy Pieces. In relativity, time is observer-dependent, so that a moving clock appears to run slow compared to a stationary one; but, because motion is relative, each observer sees the other’s clock as the one running slow. However, as Feynman points out:
The time measured by a moving clock is called its “proper time”. In free fall, the trajectory makes the proper time of an object a maximum.
In other words, the geodesic is the trajectory or path of longest relativistic time. Any deviation from the geodesic will result in the clock’s proper time being shorter, which means time literally slows down for it. So special relativity is not symmetrical in a gravitational field, and there is a gravitational field everywhere in space. As Gutfreund and Renn point out, Einstein himself acknowledged that he had effectively replaced the fictional aether with gravity.
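To put Feynman’s point into symbols (the standard weak-field approximation, which is all that’s needed here): a clock moving with speed v at a place where the gravitational potential is Φ accumulates proper time at roughly

dτ ≈ dt (1 + Φ/c² − v²/2c²),

so sitting deep in a gravitational well (Φ more negative) or moving fast both make the clock run slow relative to the coordinate time t. The free-fall trajectory is the one that makes the total accumulated τ a maximum, trading height against speed in exactly the way the least action principle prescribes.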
This is most apparent when one considers a black hole. Every massive body has an escape velocity, which is the velocity a projectile must achieve to become free of the body’s gravitational field. Obviously, the escape velocity for Earth is larger than the escape velocity for the Moon and considerably less than the escape velocity of the Sun. Less obvious, although logical from what we know, is that the escape velocity is independent of the projectile’s mass and therefore also applies to light (photons). We know that all bodies fall at exactly the same rate in a gravitational field; in other words, a geodesic applies equally to all bodies irrespective of their mass. In the case of a black hole, the escape velocity exceeds the speed of light, and, in fact, equals the speed of light at its event horizon. At the event horizon time stops for an external observer, because the light is red-shifted to infinity. One of the consequences of Einstein’s theory is that clocks run slower in a stronger gravitational field, and, at the event horizon, gravity is so strong the clock stops.
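As a rough numerical check (a minimal sketch in Python using standard textbook values for G, c and the masses, so the outputs are approximate): the escape velocity is v = √(2GM/R), and setting v equal to c gives the Schwarzschild radius r = 2GM/c², the radius of the event horizon for a non-rotating black hole.

import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def escape_velocity(mass_kg, radius_m):
    # v_esc = sqrt(2GM/R); note it is independent of the projectile's mass
    return math.sqrt(2 * G * mass_kg / radius_m)

def schwarzschild_radius(mass_kg):
    # The radius at which the escape velocity equals the speed of light
    return 2 * G * mass_kg / c ** 2

print(escape_velocity(5.97e24, 6.371e6))   # Earth: about 11,200 m/s
print(escape_velocity(1.99e30, 6.96e8))    # Sun: about 618,000 m/s
print(schwarzschild_radius(1.99e30))       # about 2,950 m for one solar mass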
To appreciate why clocks slow down and rods become shorter (in the direction of motion) with respect to an observer, one must understand the consequences of the speed of light being constant. If light is a wave, then it obeys the most fundamental equation for any wave:
v = f λ, where v is the velocity, f is the frequency and λ is the wavelength.
In the case of light the equation becomes c = f λ, where c is the speed of light.
One can see that if c stays constant then f and λ must change to accommodate it. Frequency measures time and wavelength measures distance. One can see how the frequency becomes stretched or compressed by motion if c remains constant, depending on whether an observer is travelling away from a source of radiation or towards it. This is called the Doppler effect, and on a cosmic scale it tells us that the Universe is expanding, because virtually all galaxies in all directions are travelling away from us. If a geodesic is the path of maximum proper time, we have a reference for determining relativistic effects, and we can use the Doppler effect to determine whether a light source is moving relative to an observer, even though the speed of light is always c.
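To give the standard relativistic formula for a source receding directly along the line of sight (with β = v/c):

f(observed) = f(source) √((1 − β)/(1 + β)),

so the observed frequency drops and the wavelength stretches correspondingly (a redshift), while the product f λ remains c for both parties. For an approaching source the factor inverts and the light is blue-shifted.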
I won’t go into it here, but the famous twin paradox can be explained by taking into account both relativistic and Doppler effects for both parties – the one travelling and the one left at home.
It needs to be pointed out that Einstein’s ‘annus mirabilis’ (miraculous year), as it’s been called, occurred 10 years earlier in 1905, when he published 3 groundbreaking papers that elevated him from a patent clerk in Bern to a candidate for the Nobel Prize (eventually realised of course). The 3 papers were his Special Theory of Relativity, his explanation of the photo-electric effect using the newly coined concept, photon of light, and a statistical analysis of Brownian motion, which effectively proved that molecules made of atoms really exist and were not just a convenient theoretical concept.
Given the anniversary, it seemed appropriate that I should write something on the topic, despite my limited knowledge and despite the plethora of books that have been published to recognise the feat. The best I’ve read is The Road to Relativity; The History and Meaning of Einstein’s “The Foundation of General Relativity” (the original title of his paper) by Hanoch Gutfreund and Jurgen Renn. They have managed to include an annotated copy of Einstein’s original handwritten manuscript with a page by page exposition. But more than that, they take us on Einstein’s mental journey and, in particular, how he found the mathematical language to portray the intuitive ideas in his head and yet work within the constraints he believed were necessary for it to work.
The constraints were not inconsiderable and include: the equivalence of inertial and gravitational mass; the conservation of energy and momentum under transformation between frames of reference both in rotational and linear motion; and the ability to reduce his theory mathematically to Newton’s theory when relativistic effects were negligible.
Einstein’s epiphany, that led him down the particular path he took, was the realisation that one experienced no force when one was in free fall, contrary to Newton’s theory and contrary to our belief that gravity is a force. Free fall subjectively feels no different to being in orbit around a planet. The aptly named ‘vomit comet’ is an aeroplane that goes into free fall in order to create the momentary sense of weightlessness that one would experience in space.
Einstein learnt from his study of Maxwell’s equations for electromagnetic radiation, that mathematics could sometimes provide a counter-intuitive insight, like the constant speed of light.
In fact, Einstein had to learn new mathematics (for him) and engaged the help of his close friend, Marcel Grossman, who led him through the technical travails of tensor calculus using Riemann geometry. It would seem, from what I can understand of his mental journey, that it was the mathematics, as much as any other insight, that led Einstein to realise that space-time is curved and not Euclidean as we all generally believe. To quote Gutfreund and Renn:
[Einstein] realised that the four-dimensional spacetime of general relativity no longer fitted the framework of Euclidean geometry… The geometrization of general relativity and the understanding of gravity as being due to the curvature of spacetime is a result of the further development and not a presupposition of Einstein’s formulation of the theory.
By Euclidean, one means space is flat and light travels in perfectly straight lines. One of the confirmations of Einstein’s theory was that he predicted that light passing close to the Sun would be literally bent and so a star in the background would appear to shift as the Sun approached the same line of sight for an observer on Earth as for the star. This could only be seen during an eclipse and was duly observed by Arthur Eddington in 1919 on the island of Principe near Africa.
Einstein’s formulations led him to postulate that it’s the geometry of space that gives us gravity and the geometry, which is curved, is caused by massive objects. In other words, it’s mass that curves space and it’s the curvature of space that causes mass to move, as John Wheeler famously and succinctly expounded.
It may sound back-to-front, but, for me, Einstein’s Special Theory of Relativity only makes sense in the context of his General Theory, even though they were formulated in the reverse order. To understand what I’m talking about, I need to explain geodesics.
When you fly long distance on a plane, the path projected onto a flat map looks curved. You may have noticed this when they show the path on a screen in the cabin while you’re in flight. The point is that when you fly long distance you are travelling over a curved surface, because, obviously, the Earth is a sphere, and the shortest distance between 2 points (cities) lies on what’s called a great circle. A great circle is the one circle that goes through both points that is the largest circle possible. Now, I know that sounds paradoxical, but the largest circle provides the shortest distance over the surface (we are not talking about tunnels) that one can travel and there is only one, therefore there is one shortest path. This shortest path is called the geodesic that connects those 2 points.
A geodesic in gravitation is the shortest distance in spacetime between 2 points and that is what one follows when one is in free fall. At the risk of information overload, I’m going to introduce another concept which is essential for understanding the physics of a geodesic in gravity.
One of the most fundamental principles discovered in physics is the principle of least action (formulated mathematically as a Lagrangian which is the difference between kinetic and potential energy). The most commonly experienced example would be refraction of light through glass or water, because light travels at different velocities in air, water and glass (slower through glass or water than air). The extremely gifted 17th Century amateur mathematician, Pierre de Fermat (actually a lawyer) conjectured that the light travels the shortest path, meaning it takes the least time, and the refractive index (Snell’s law) can be deduced mathematically from this principle. In the 20th Century, Richard Feynman developed his path integral method of quantum mechanics from the least action principle, and, in effect, confirmed Fermat’s principle.
Now, when one applies the principle of least action to a projectile in a gravitational field (like a thrown ball) one finds that it too takes the shortest path, but paradoxically this is the path of longest relativistic time (not unlike the paradox of the largest circle described earlier).
Richard Feynman gives a worked example in his excellent book, Six Not-So-Easy Pieces. In relativity, time can be subjective, so that a moving clock always appears to be running slow compared to a stationary clock, but, because motion is relative, the perception is reversed for the other clock. However, as Feynman points out:
The time measured by a moving clock is called its “proper time”. In free fall, the trajectory makes the proper time of an object a maximum.
In other words, the geodesic is the trajectory or path of longest relativistic time. Any variant from the geodesic will result in the clock’s proper time being shorter, which means time literally slows down. So special relativity is not symmetrical in a gravitational field and there is a gravitational field everywhere in space. As Gutfreund and Renn point out, Einstein himself acknowledged that he had effectively replaced the fictional aether with gravity.
This is most apparent when one considers a black hole. Every massive body has an escape velocity which is the velocity a projectile must achieve to become free of a body’s gravitational field. Obviously, the escape velocity for Earth is larger than the escape velocity for the moon and considerably less than the escape velocity of the Sun. Not so obvious, although logical from what we know, the escape velocity is independent of the projectile’s mass and therefore also applies to light (photons). We know that all body’s fall at exactly the same rate in a gravitational field. In other words, a geodesic applies equally to all bodies irrespective of their mass. In the case of a black hole, the escape velocity exceeds the speed of light, and, in fact, becomes the speed of light at its event horizon. At the event horizon time stops for an external observer because the light is red-shifted to infinity. One of the consequences of Einstein’s theory is that clocks travel slower in a stronger gravitational field, and, at the event horizon, gravity is so strong the clock stops.
To appreciate why clocks slow down and rods become shorter (in the direction of motion), with respect to an observer, one must understand the consequences of the speed of light being constant. If light is a wave then the equation for a wave is very fundamental:
v = fλ, where v is the velocity, f is the frequency and λ is the wavelength.
In the case of light the equation becomes c = fλ, where c is the speed of light.
One can see that if c stays constant then f and λ must change to accommodate it. Frequency measures time and wavelength measures distance. One can see how the wavelength becomes stretched or compressed by motion (and the frequency correspondingly lowered or raised) if c remains constant, depending on whether an observer is travelling away from a source of radiation or towards it. This is called the Doppler effect, and on a cosmic scale it tells us that the Universe is expanding, because virtually all galaxies in all directions are travelling away from us. If a geodesic is the path of maximum proper time, we have a reference for determining relativistic effects, and we can use the Doppler effect to determine if a light source is moving relative to an observer, even though the speed of light is always c.
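As a simple illustration, the following sketch applies the standard relativistic Doppler formula for a receding source, f_obs = f_src √((1−β)/(1+β)). The rest wavelength (the hydrogen H-alpha line often used by astronomers) and the recession speeds are illustrative choices; note that c = fλ still holds for the observer, only f and λ change.

```python
# Relativistic Doppler shift for a source receding at speed v (beta = v/c):
#     f_obs = f_src * sqrt((1 - beta) / (1 + beta))
# The product f * lambda stays equal to c in both frames; only f and lambda change.

import math

c = 2.998e8                 # speed of light, m/s
lambda_src = 656.3e-9       # H-alpha emission line, rest wavelength in m
f_src = c / lambda_src

for beta in (0.0, 0.01, 0.1, 0.5):              # recession speed as a fraction of c
    doppler = math.sqrt((1 - beta) / (1 + beta))
    f_obs = f_src * doppler
    lambda_obs = c / f_obs                      # c = f * lambda still holds for the observer
    z = lambda_obs / lambda_src - 1             # astronomers' redshift parameter
    print(f"beta = {beta:4.2f}: lambda_obs = {lambda_obs*1e9:7.2f} nm, redshift z = {z:.4f}")
```

The faster the source recedes, the further the line shifts towards the red end of the spectrum, which is precisely how the expansion of the Universe reveals itself in galactic spectra.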
I won’t go into it here, but the famous twin paradox can be explained by taking into account both relativistic and Doppler effects for both parties – the one travelling and the one left at home.
This is an exposition I wrote on the twin paradox.
Saturday, 14 November 2015
The Unreasonable Effectiveness of Mathematics
I originally called this post: Two miracles that are fundamental to the Universe and our place in it. The miracles I’m referring to will not be found in any scripture and God is not a necessary participant, with the emphasis on necessary. I am one of those rare dabblers in philosophy who argues that science is neutral on the subject of God. A definition of miracle is required, so for the purpose of this discussion, I call a miracle something that can’t be explained, yet has profound and far-reaching consequences. ‘Something’, in this context, could be described as a concordance of unexpected relationships in completely different realms.
This is one of those posts that will upset people on both sides of the religious divide, I’m sure, but it’s been rattling around in my head ever since I re-read Eugene P. Wigner’s seminal essay, The Unreasonable Effectiveness of Mathematics in the Natural Sciences. I came across it (again) in a collection of essays under the collective title, Math Angst, contained in a volume called The World Treasury of Physics, Astronomy and Mathematics edited by Timothy Ferris (1991). This is a collection of essays and excerpts by some of the greatest minds in physics, mathematics and cosmology in the 20th Century.
Back to Wigner: in discussing the significance of complex numbers in quantum mechanics, specifically Hilbert space, he remarks:
‘…complex numbers are far from natural or simple and they cannot be suggested by physical observations. Furthermore, the use of complex numbers in this case is not a calculated trick of applied mathematics but comes close to being a necessity in the formulation of the laws of quantum mechanics.’
It is well known among physicists that, in the language of mathematics, quantum mechanics not only makes perfect sense but is one of the most successful physical theories ever. But in ordinary language it is hard to make sense of it in any way that ordinary people can comprehend.
It is in this context that Wigner makes the following statement, in the paragraph immediately following the quote above:
‘It is difficult to avoid the impression that a miracle confronts us here… or the two miracles of the existence of laws of nature and of the human mind’s capacity to divine them.’
Hence the 2 miracles I refer to in my introduction. The key that links the 2 miracles is mathematics. A number of physicists, among them Paul Davies, Roger Penrose and John Barrow (just the ones I’ve read), have commented on the inordinate correspondence we find between mathematics and the regularities found in natural phenomena that have been dubbed ‘laws of nature’.
The first miracle is that mathematics seems to underpin everything we know and learn about the Universe, including ourselves. As Barrow has pointed out, mathematics allows us to predict the makeup of the fundamental elements in the first 3 minutes of the Universe. It provides us with the field equations of Einstein’s general theory of relativity, Maxwell’s equations for electromagnetic radiation, Schrödinger’s wave function in quantum mechanics and the four-letter software code for all biological life we call DNA.
The second miracle is that the human mind is uniquely evolved to access mathematics to an extraordinarily deep and meaningful degree that has nothing to do with our everyday prosaic survival but everything to do with our ability to comprehend the Universe in all the facets I listed above.
The 2 miracles combined give us the greatest mystery of the Universe, which I’ve stated many times on this blog: It created the means to understand itself, through us.
So where does God fit into this? Interestingly, I would argue that when it comes to mathematics, God has no choice. Einstein once asked the rhetorical question, in correspondence with his friend, Paul Ehrenfest (if I recall it correctly): did God have any choice in determining the laws of the Universe? This question is probably unanswerable, but when it comes to mathematics, I would answer in the negative. If one looks at prime numbers (there are other examples, but primes are fundamental) it’s self-evident that they are self-selected by their very definition – God didn’t choose them.
The interesting thing about primes is that they are the ‘atoms’ of mathematics, because every other ‘natural’ number (greater than 1) can be built uniquely as a product of primes, all the way to infinity. The other interesting thing is that Riemann’s hypothesis indicates that primes have a deep and unexpected relationship with some of the most esoteric areas of mathematics. So, if one were a religious person, one might suggest that this is surely the handiwork of God, yet God can’t even affect the fundamentals upon which all this rests.
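As a trivial illustration of the ‘atoms’ metaphor – the fundamental theorem of arithmetic – here is a short trial-division sketch; the sample numbers are arbitrary.

```python
# Primes as the 'atoms' of the natural numbers: every integer n > 1 factors
# uniquely into primes (the fundamental theorem of arithmetic). A simple
# trial-division sketch, fine for small n.

def prime_factors(n: int) -> list[int]:
    """Return the prime factors of n (with repetition), in ascending order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:            # whatever remains is itself prime
        factors.append(n)
    return factors

for n in (12, 97, 360, 2023):
    print(n, "=", " x ".join(map(str, prime_factors(n))))
# 12 = 2 x 2 x 3, 97 = 97 (prime), 360 = 2 x 2 x 2 x 3 x 3 x 5, 2023 = 7 x 17 x 17
```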
Addendum: I changed the title to reflect the title of Wigner's essay, for web-search purposes.