Yes, this is a bit tongue-in-cheek, but like most things tongue-in-cheek it just might contain an element of truth. I’m not a cosmologist or even a physicist, so this is just me being playful yet serious in as much as anyone can be philosophically serious about the origins of Everything, otherwise known as the Universe.
Now I must make a qualification, lest people think I’m leading them down the garden path. When people think of ‘God’s equation’, they most likely think of some succinct equation or set of equations (like Maxwell’s equations) from which everything we know about the Universe can be derived mathematically. For many people this is a desired outcome, founded on the belief that one day we will have a TOE (Theory Of Everything) – itself a misnomer – which will incorporate all the known laws of the Universe in one succinct theory. Specifically, said theory will unite the Electromagnetic force, the so-called Weak force, the so-called Strong force and Gravity as all being derived from a common ‘field’. Personally, I think that’s a chimera, but I’d be happy to be proven wrong. Many physicists believe some version of String Theory or M Theory will eventually give us that goal. I should point out that the Weak force has already been united with the Electromagnetic force.
So what do I mean by the sobriquet, God’s equation? Last week I watched a lecture by Allan Adams as part of MIT Open Courseware (8.04, Spring 2013) titled Lecture 6: Time Evolution and the Schrodinger Equation, in which Adams made a number of pertinent points that led me to consider that perhaps Schrodinger’s Equation (SE) deserved such a title. Firstly, I need to point out that Adams himself makes no such claim, and I don’t expect many others would concur.
Many of you may already know that I wrote a post on Schrodinger’s Equation nearly 5 years ago and it has become, by far, the most popular post I’ve written. Of course, Schrodinger’s Equation is not the last word in quantum mechanics – more like a starting point. By incorporating relativity we have Dirac’s equation, which predicted anti-matter – in fact, anti-matter is a direct consequence of combining relativity with SE. Schrodinger himself (and, independently, Klein and Gordon) also had a go at a relativistic equation but rejected it because it gave answers with negative energy. But Richard Feynman (and, independently, Ernst Stuckelberg) pointed out that these negative-energy solutions were mathematically equivalent to ordinary particles travelling backwards in time. Travelling backwards in time is not an impossibility in the quantum world, and Feynman incorporated it into his famous QED (Quantum Electro-Dynamics), which won him a joint Nobel Prize with Julian Schwinger and Sin-Itiro Tomonaga in 1965. QED, by the way, incorporates SE (just read Feynman’s book on the subject).
This allows me to segue back into Adams’ lecture, which, as the title suggests, discusses the role of time in SE and quantum mechanics generally. You see ‘time’ is a bit of an enigma in QM.
Adams’ lecture, in his own words, is to provide a ‘grounding’ so he doesn’t go into details (mathematically) and this suited me. Nevertheless, he throws terms around like eigenstates, operators and wave functions, so familiarity with these terms would be essential to following him. Of those terms, the only one I will use is wave function, because it is the key to SE and arguably the key to all of QM.
Right at the start of the lecture (his Point 1), Adams makes the salient point that the Wave function, Ψ, contains ‘everything you need to know about the system’. Only a little further into his lecture (his Point 6) he asserts that SE is ‘not derived, it’s posited’. Yet it’s completely ‘deterministic’ and experimentally accurate. Now (as discussed by some of the students in the comments) to say it’s ‘deterministic’ is a touch misleading given that it only gives us probabilities which are empirically accurate (more on that later). But it’s a remarkable find that Schrodinger formulated a mathematical expression based on a hunch that all quantum objects, be they light or matter, should obey a wave function.
But it’s at the 50-55min stage (of his 1hr 22min lecture) that Adams delivers his most salient point when he explains so-called ‘stationary states’. Basically, they’re called stationary states because time remains invariant (doesn’t change) for SE which is what gives us ‘superposition’. As Adams points out, the only thing that changes in time in SE is the phase of the wave function, which allows us to derive the probability of finding the particle in ‘classical’ space and time. Classical space and time is the real physical world that we are all familiar with. Now this is what QM is all about, so I will elaborate.
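To put Adams’ point in standard textbook notation (this is the usual form, not a quote from his blackboard): a stationary state is an energy eigenstate whose only time dependence is a rotating phase, and that phase cancels out when you compute the probability.

```latex
\psi(x,t) = \varphi(x)\, e^{-iEt/\hbar}
\quad\Longrightarrow\quad
|\psi(x,t)|^2 = \varphi^*(x)\,\varphi(x)\;\underbrace{e^{+iEt/\hbar}\, e^{-iEt/\hbar}}_{=\,1} \;=\; |\varphi(x)|^2
```

So the probability density is literally ‘stationary’: it doesn’t change in time, even though the phase does.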
Adams effectively confirmed for me something I had already deduced: superposition (the weird QM property that something can exist simultaneously in various positions prior to being ‘observed’) is a direct consequence of time being invariant or existing ‘outside’ of QM (which is how it’s usually explained). Now Adams makes the specific point that these ‘stationary states’ only exist in QM and never exist in the ‘Real’ world that we all experience. We never experience superposition in ‘classical physics’ (which is physics shorthand for the ‘real world’). This highlights for me that QM and the physical world are complementary, not just versions of each other. And this is incorporated in SE, because, as Adams shows on his blackboard, superposition can be derived from SE, and when we make a measurement or observation, superposition and SE both disappear. In other words, the quantum state and the classical state do not co-exist: either you have a wave function in Hilbert space or you have a physical interaction called a ‘wave collapse’ or, as Adams prefers to call it, ‘decoherence’. (Hilbert space is a theoretical space of possibly infinite dimensions where the wave function theoretically exists in its superpositional manifestation.)
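The blackboard derivation rests on the linearity of SE (again, I’m giving the standard textbook form rather than Adams’ exact notation): because the equation is linear, any weighted sum of solutions is itself a solution, and that weighted sum is what we call a superposition.

```latex
i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\psi
\qquad\text{is linear, so if } \psi_1 \text{ and } \psi_2 \text{ are solutions, then so is}\qquad
\psi = c_1\psi_1 + c_2\psi_2,\quad c_1, c_2 \in \mathbb{C}.
```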
Adams calls the so-called Copenhagen interpretation of QM the “Cop Out” interpretation which he wrote on the board and underlined. He prefers ‘decoherence’ which is how he describes the interaction of the QM wave function with the physical world. My own view is that the QM wave function represents all the future possibilities, only one of which will be realised. Therefore the wave function is a description of the future yet to exist, except as probabilities; hence the God equation.
As I’ve expounded in previous posts, the most popular interpretation at present seems to be the so-called ‘many worlds’ interpretation, where all superpositional states exist in parallel universes. The most vigorous advocate of this view is David Deutsch, who wrote about it in a not-so-recent issue of New Scientist (3 Oct 2015, pp.30-31). I also reviewed his book, The Fabric of Reality, in September 2012. In New Scientist, Deutsch advocated a non-probabilistic version of QM, because he knows that reconciling the many worlds interpretation with probabilities is troublesome, especially if there are an infinite number of them. However, without probabilities, SE becomes totally ineffective in making predictions about the real world. It was Max Born who postulated the ingenious innovation of squaring the modulus of the wave function (actually multiplying it with its complex conjugate, as I explain here), which provides the probabilities that make SE relevant to the physical world.
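Born’s recipe is easy to demonstrate numerically. Here is a minimal sketch in Python (the two amplitudes are hypothetical, chosen purely for illustration): the probability of each outcome is the wave function multiplied by its complex conjugate, and the probabilities sum to 1.

```python
import numpy as np

# A toy wave function: a superposition over two basis states,
# with (hypothetical) complex amplitudes, normalised so |a|^2 + |b|^2 = 1.
psi = np.array([1 + 1j, 1 - 1j]) / 2

# Born's rule: probability = psi multiplied by its complex conjugate.
# The product psi * conj(psi) is |psi|^2, which is real (we drop the zero
# imaginary part), unlike the complex amplitudes themselves.
probs = (psi * np.conj(psi)).real

print(probs)        # [0.5 0.5]
print(probs.sum())  # 1.0 - probabilities over all outcomes sum to one
```

Note that the complex phases disappear in the product: two different wave functions that differ only by an overall phase give identical probabilities, which is the numerical counterpart of the ‘stationary state’ point above.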
As I’ve explained elsewhere, the world is fundamentally indeterministic due to asymmetries in time caused by both QM and chaos theory. Events become irreversible after QM decoherence, and also in chaos theory because the initial conditions are indeterminable. Now Deutsch argues that chaos theory can be explained by his many worlds view of QM, and mathematician Ian Stewart suggests that maybe QM can be explained by chaos theory, as I expound here. Both these men are intellectual giants compared to me, yet I think they’re both wrong. As I’ve explained above, I think that the quantum world and the classical world are complementary. The logical extension of Deutsch’s view, by his own admission, requires the elimination of probabilities, making SE ineffectual. And Stewart’s circuitous argument to explain QM probabilities with chaos theory eliminates superposition, for which we have indirect empirical evidence (using entanglement, which is well researched). Actually, I think superposition is a consequence of the wave function effectively being everywhere at once – it ‘permeates all of space’ (to quote Richard Elwes in MATHS 1001).
If I’m right in stating that QM and classical physics are complementary (and Adams seems to make the same point, albeit not so explicitly) then a TOE may be impossible. In other words, I don't think classical physics is a special case of QM, which is the current orthodoxy among physicists.
Addendum 1: Since writing this, I've come to the conclusion that QM and, therefore, the wave function describe the future – an idea endorsed by none other than Freeman Dyson, who was instrumental in formulating QED with Richard Feynman.
Addendum 2: I've amended the conclusion in my second-last paragraph, discussing Deutsch's and Stewart's respective 'theories', and mentioning entanglement in passing. Schrodinger once said (in a missive to Einstein, from memory) that entanglement is what QM is all about. Entanglement effectively challenges Einstein's conclusion that simultaneity is a non sequitur according to his special theory of relativity (and he's right, provided there's no causal relationship between events). I contend that neither Deutsch nor Stewart can resolve entanglement with their respective 'alternative' theories, and neither of them addresses it in what I've read.
Philosophy, at its best, challenges our long held views, such that we examine them more deeply than we might otherwise consider.
Paul P. Mealing
- Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.
Tuesday, 19 January 2016
Tuesday, 12 January 2016
How to write a story so it reads like a movie in your head
I’ve written about writing a few times now, including Writing’s 3 Essential Skills (Jul. 2013) and How to Create an Imaginary, Believable World (Aug. 2010), the last one being a particularly popular post. Also, I taught a creative writing course in 2009 and have given a couple of talks on the subject, but never intentionally to provide advice on how to make a story read like a movie in your head.
This post has arisen from a conversation I had when I realised I had effectively taught myself how to do this. It’s not something that I deliberately set out to do but I believe I achieved it inadvertently and comments from some readers appear to confirm this. At YABooksCentral, a teenage reviewer has made the point specifically, and many others have said that my book (Elvene) would make a good movie, including a filmmaker. Many have said that they ‘could see everything’ in their mind’s eye.
Very early in my writing career (though it’s never been my day job) I took some screenwriting courses and even wrote a screenplay. I found that this subconsciously influenced my prose writing in ways that I never foresaw and that I will now explain. The formatting of a screenplay doesn’t lend itself to fluidity, with separate headings for every scene and dialogue in blocks interspersed with occasional brief descriptive passages. Yet a well written screenplay lets you see the movie in your mind’s eye and you should write it as you’d imagine it appearing on a screen. However, contrary to what you might think, this is not the way to write a novel. Do not write a novel as if watching a movie. Have I confused you? Well, bear this in mind and hopefully it will all make sense before the end.
Significantly, a screenplay needs to be written in ‘real time’, which means descriptions are minimalistic and exposition non-existent (although screenwriters routinely smuggle exposition into their dialogue). Also, all the characterisation is in the dialogue and the action – you don’t need physical descriptions of a character, including their attire, unless it’s significant; just gender, generic age and ethnicity (if it’s important). It was this minimalistic approach that I subconsciously imported into my prose fiction.
There is one major difference between writing a screenplay and writing a novel, and the two methods require different states of mind. In writing a screenplay you can only write what is seen and heard on the screen, whereas a novel can be written entirely (though not necessarily) from inside a character’s head. I hope this clarifies the point I made earlier. Now, as someone once pointed out to me (fellow blogger, Eli Horowitz), movies can take you into a character’s head through voiceover, flashbacks and dream sequences. But, even so, the screenplay would only record what is seen and heard on the screen, and these are exceptions, not the norm. Whereas, in a novel, getting inside a character’s head is the norm.
To finally address the question implicit in my heading, there are really only two ‘tricks’, for want of a better term: write the story in real time, and always from some character’s point of view. Even description can be given through a character’s eyes, and the reader subconsciously becomes an actor. By inhabiting a character’s mind, the reader becomes fully immersed in the story.
Now I need to say something about scenes, because, contrary to popular belief, scenes are the smallest component of a story, not words or sentences or paragraphs. It’s best to think of the words on the page like the notes on a musical score. When you listen to a piece of music, the written score is irrelevant, and, even if you read the score, you wouldn’t hear the music anyway (unless, perhaps, if you’re a musician or a composer). Similarly, the story takes place in the reader’s mind where the words on the page conjure up images and emotions without conscious effort.
In a screenplay a scene has a specific definition, defined by a change in location or time. I use the same definition when writing prose. There are subtle methods for expanding and contracting time psychologically in a movie, and these can also be applied to prose fiction. I’ve made the point before that the language of story is the language of dreams, and in dreams, as in stories, sudden changes in location and time are not aberrant. In fact, I would argue that if we didn’t dream, stories wouldn’t work because our minds would continuously and subconsciously struggle with the logic.
Tuesday, 5 January 2016
Free will revisited
I’ve written quite a lot on this in the past, so one may wonder what I could add.
I’ve just read Mark Balaguer’s book, Free Will, which I won when Philosophy Now published my answer to their Question of the Month in their last issue (No 111, December 2015). It’s the fourth time I’ve won a book from them (out of 5 submissions).
It’s a well written book, not overly long or over-technical in a philosophical sense, so very readable whilst being well argued. Balaguer makes it clear from the outset where he stands on this issue, by continually referring to those who argue against free will as ‘the enemies of free will’. Whilst this makes him sound combative, the tone of his arguments is measured and not antagonistic. In his conclusion, he makes the important distinction that in ‘blocking’ arguments against free will, he’s not proving that free will exists.
He makes the distinction between what he calls Hume-style free will and Non-predetermined free will (NDP), a term I believe he’s coined himself. Hume-style free will is otherwise known as ‘compatibilism’, which means it’s compatible with determinism. In other words, even if everything in the world is deterministic from the Big Bang onwards, it doesn’t rule out your having free will. I know it sounds like a contradiction, but I think it’s to do with the fact that a completely deterministic universe doesn’t conflict with the subjective sense we all have of having free will. As I’ve expressed in numerous posts on this blog, I think there is ample evidence that the completely deterministic universe is a furphy, so compatibilism is not relevant as far as I’m concerned.
Balaguer also coins another term, ‘torn decision’, which he effectively uses as a litmus test for free will. In a glossary in the back he gives a definition which I’ve truncated:
A torn decision is a conscious decision in which you have multiple options and you’re torn as to which option is best.
He gives the example of choosing between chocolate or strawberry flavoured ice cream and not making a decision until you’re forced to, so you make it while you’re still ‘torn’. This is the example he keeps coming back to throughout the book.
In recent times, experiments in neuroscience have provided what some people believe are ‘slam-dunk’ arguments against free will, because scientists have been able to predict, with 60% accuracy, what decision a subject will make seconds before they make it, simply by measuring neuron activity in certain parts of the brain. Balaguer provides the most cogent arguments I’ve come across challenging these contentions. In particular, he addresses the Haynes studies, which showed neuron activity up to 10 seconds prior to the conscious decision. Balaguer points out that the neuron activity in these studies occurs in the PC and BA10 areas of the brain, which are associated with the ‘generation of plans’ and the ‘storage of plans’ respectively. He makes the point (in greater elaboration than I do here) that we should not be surprised if we subconsciously use the ‘planning’ areas of the brain whilst trying to make ‘torn decisions’. The other experiments, known as the Libet studies (dating from the 1960s), showed neuron activity half a second prior to conscious decision-making, which was termed the ‘readiness potential’. Balaguer argues that there is ‘no evidence’ that the readiness potential causes the decision. Even so, it could be argued that, as in the Haynes studies, it is subconscious activity happening prior to the conscious decision.
It is readily known (as Balaguer explicates) that much of our thinking is subconscious. We all have the experience of solving a problem subconsciously so it comes to us spontaneously when we don’t expect it to. And anyone who has pursued some artistic endeavour (like writing fiction) knows that a lot of it is subconscious so that the story and its characters appear on the page with seemingly divine-like spontaneity.
Backtracking to so-called Hume-style free will, it does have relevance if one considers that our ‘wants’ – what we wish to do – are determined by our desires and needs. We assume that most of the animal kingdom behaves on this principle. Few people (Balaguer included) discuss other sentient creatures when they discuss free will, yet I’ve long believed that consciousness and free will go hand-in-hand. In other words, I really can’t see the point of consciousness without free will. If everything is determined subconsciously, without the need to think, then why have we evolved to think?
But humans take thinking to a new level compared to every other species on the planet, so that we introspect and cogitate and reason and internally debate our way to many a decision.
Back in February 2009, I reviewed Douglas Hofstadter’s Pulitzer Prize-winning book, Godel, Escher, Bach, where, among other topics, I discussed consciousness, as that’s one of the themes of his book. Hofstadter coins the term ‘strange loop’. This is what I wrote back then:
By strange loop, Hofstadter means that we can effectively look at all the levels of our thinking except the ground level, which is our neurons. In between we have symbols, which is language, which we can discuss and analyse in a dispassionate way, just like I’m doing now. I can talk about my own thoughts and ideas as if they weren’t mine at all. Consciousness, in Hofstadter’s model (for want of a better word) is the top level, and neurons are the hardware level. In between we have the software (symbols) which is effectively language.
I was quick to point out that ‘software’ in this context is a metaphor – I don’t believe that language is really software, even though we ‘download’ it from generation to generation and it is indispensable to human reasoning, which we call thinking.
The point I’d make is that this is a two-way process: the neurons are essential to thoughts, yet our thoughts, I expect, can affect neurons. I believe there is evidence that we can and do rewire our brains simply by exercising our mental faculties, even in later years, and surely exercising them consciously is the very definition of will.
Tuesday, 15 December 2015
The battle for the future of Islam
There are many works of fiction featuring battles between ‘Good’ and ‘Evil’, yet it would not be distorting the truth to say that we are witnessing one now, though I think it is largely misconstrued by those of us who are on the sidelines. We see it as a conflict between Islam and the West, when it’s actually within Islam itself. This came home to me when I recently saw the biographical movie, He Named Me Malala (pronounced Ma-la-li, by the way).
Malala is well known as the 14-year-old Pakistani schoolgirl who was shot in the head on a school bus by the Taliban for her outspoken views on education for girls in Pakistan. Now 18 years old (when the film was made), she has since won the Nobel Peace Prize and spoken at the United Nations, as well as having audiences with world leaders, like Barack Obama. In a recent interview with Emma Watson (on Emma’s Facebook page) she appeared much wiser than her years. In the movie, amongst her family, she behaves like an ordinary teenager with ‘crushes’ on famous sports stars. In effect, her personal battle with the Taliban represents in microcosm a much wider battle between past and future that is occurring on the world stage within Islam: a battle for the hearts and minds of Muslims all over the world.
IS or ISIS or Daesh has arisen out of conflicts between Shiites and Sunnis in both Iraq and Syria, but the declaration of a Caliphate carries a much more serious, even sinister, connotation, because its followers believe they are fulfilling a prophecy which will only be resolved with the biblical end of the world. I’m not an Islamic scholar, so I’m quoting from Audrey Borowski, currently doing a PhD at the University of London, who holds an MSt (Masters degree) in Islamic Studies from Oxford University. She asserts: ‘…one of the Prophet Muhammad’s earliest hadith (sayings) locates the fateful showdown between Christians and Muslims that heralds the apocalypse in the city of Dabiq in Syria.’
“The Hour will not be established until the Romans (Christians) land at Dabiq. Then an army from Medina of the best people on the earth at that time… will fight them.”
She wrote an article of some length in Philosophy Now (Issue 111, Dec. 2015/Jan. 2016) titled Al Qaeda and ISIS; From Revolution to Apocalypse.
The point is that if someone believes they are in a fight for the end of the world, then destroying entire populations and cities is not off the table. They could resort to any tactic, like contaminating the water supplies of entire cities or destroying food crops on a large scale. I suggested in the introduction that this apocalyptic ideology, in a fictional context, represents a classic contest between good and evil. From where I (and most people reading this blog) stand, anyone intent on destroying civilization as we know it would be considered the ultimate evil.
What is most difficult for us to comprehend is that the perpetrators, the people ‘on the other side’ would see the roles reversed. Earlier this year (April 2015), I wrote a post titled Morality is totally in the eye of the beholder, where I explained how two different cultures in the same country (India) could have completely opposing views concerning a crime against a young woman, who was raped and murdered on a bus returning from seeing a movie with her boyfriend. One view was that the girl was the victim of a crime and the other view was that the girl was responsible for her own fate.
Many people have trouble believing that otherwise ordinary people, who commit evil acts in the form of atrocities, would see themselves as not being evil. We have an enormous capacity to justify to ourselves the most heinous acts, and nowhere is this more evident than when one believes they are performing the ‘Will of God’. This is certainly the case with IS and their followers.
Unfortunately, this has led to a backlash in the West against all Muslims. In particular, we see both in social media and mainstream media, and even amongst mainstream politicians, a sentiment that Islam is fundamentally flawed and needs to be reformed. It seems to me that they are unaware that there is already a battle happening within Islam, where militant bodies like IS and Boko Haram and the Taliban represent the worst and a young schoolgirl from Pakistan represents the best.
Ayaan Hirsi Ali (whom I wrote about in March 2011) said, when she was in Australia many years ago, that Islam was not compatible with a secular society, which is certainly true if Islamists wish to establish a religious-based government. There is a group, Hizb ut-Tahrir, which is banned in most Western countries, though not in the UK or Australia, whose stated aim is to form a caliphate and whose political agenda, including the introduction of Sharia law, would clearly conflict with Australian law. But the truth is that there are many Muslims living active and productive lives in Australia, whilst still practising their religion. A secular society is not an atheistic society, yet it is religiously neutral by definition. In other words, there is room for variety in religious practice and that is what we see. Extremists of any religious persuasion are generally not well received in a pluralist multicultural society, yet that is the fear that is driving the debate in many secular societies.
Over a year ago (Aug 2014) I wrote a post titled Don’t judge all Muslims the same, based on another article I read in Philosophy Now (Issue 104, May/Jun 2014) by Terri Murray (Master of Theology, Heythrop College, London) who made a very salient point differentiating cultural values and ideals from individual ones. In particular, she asserted that an individual’s rights override the so-called rights of a culture or a community. Therefore, misogynistic practices like female genital mutilation, honour killings and child marriage, all of which are illegal in Australia, are abuses of individual rights that may be condoned, even considered normal practice, in some cultures.
Getting back to my original subject matter, like the case of the Indian girl (a medical graduate) who was murdered for going on a date, this really is a battle between past and future. IS and the Taliban and their variant Islamic ideologies represent a desire to regain a past that has no relevance in the 21st Century – it’s mediaeval, not only in concept but also in practice. One of the consequences of the Internet is that it has become a vehicle for both sides. So young women in far off countries are learning that there is another world where education can lead to a better life. And this is the key: education of women, as Malala has brought to the world’s attention, is the only true way forward. It’s curious that women are what these regimes seem to fear most, including IS, whose greatest fear is to be killed by a female Kurdish warrior, because then they won’t get to Paradise.
Tuesday, 1 December 2015
Why narcissists are a danger to themselves and others
I expect everyone has met a narcissist, though, like all personality disorders, there are degrees of severity, from the generally harmless egotistical know-it-all to the megalomaniac, who takes control of an entire nation. In between those extremes is the person who somehow self-destructs while claiming it’s everyone else’s fault. They’re the ones who are captain of the ship and totally in control, even when it runs aground, but suddenly claim it’s no longer their fault. I’m talking metaphorically, but this happened quite literally and spectacularly, a couple of years back, as most of you will remember.
The major problem with narcissists is not their self-aggrandisement and over-inflated opinion of their own worth, but their distorted view of reality.
Narcissists have a tendency to self-destruct, not on purpose, but because their view of reality, based on their overblown sense of self-justification, becomes so distorted that they lose perspective and then control, even though everyone around them can see the truth, but are generally powerless to intervene.
They are particularly disastrous in politics but are likely to rise to power when things are going badly, because they are charismatic and their self-belief becomes contagious. Someone said (I don’t know who) that when things are going badly society turns on itself – they were referring to the European witch hunts, which coincided with economic and environmental tribulations. The recent GFC created ripe conditions for charismatic leaders to feed a population’s paranoia and promise miracle solutions with no basis in rationality. Look at what happened in Europe following the Great Depression of the 20th Century: World War 2. And who started it? Probably the most famous narcissist in recent history. The key element that they have in common with the aforementioned witch-hunters is that they can find someone to blame and, frighteningly, they are believed.
Narcissists make excellent villains as I’ve demonstrated in my own fiction. But we must be careful whom we demonise, lest we become as spiteful and destructive as those we wish not to emulate. Seriously, we should not take them seriously; then all their self-importance and self-aggrandisement becomes comical. Unfortunately, they tend to divide society between those who see themselves as victims and those who see the purported culprits as the victims. In other words, they divide nations when they should be uniting them.
But there are exceptions. Having read Steve Jobs’ biography (by Walter Isaacson) I would say he had narcissistic tendencies, yet he was eminently successful. Many people have commented on his ‘reality-distortion field’, which I’ve already argued is a narcissistic trait, and he could be very egotistical at times, according to anecdotal evidence. Yet he could form deep relationships despite being very contrary in his dealings with his colleagues – building them up one moment and tearing them down the next. But Jobs was driven to strive for perfection, both aesthetically and functionally, and he sought out people who had the same aspiration. He was, of course, extraordinarily charismatic, intelligent and somewhat eccentric. He was a Buddhist, which may have tempered his narcissistic tendencies; but I’m just speculating – I never met him or worked with him – I just used and admired his products like many others. Anyway, I would cite Jobs as an example of a narcissist who broke the mould – he didn’t self-destruct, quite the opposite, in fact.
Addendum: When I wrote this I had recently read Isaacson's biography of Steve Jobs, but I've since seen a documentary and he came perilously close to self-destruction. He was called before a Senate Committee under charges of fraud. He was giving his employees backdated shares (I think that was the charge, from memory). Anyway, according to the documentary, he only avoided prison because it would have destroyed the share price of Apple, which was the biggest company on the share market at the time. I don't know how true this is, but it rings true.
Tuesday, 24 November 2015
The Centenary of Einstein’s General Theory of Relativity
This month (November 2015) marks 100 years since Albert Einstein published his milestone paper on the General Theory of Relativity, which not only eclipsed Newton’s equally revolutionary Theory of Universal Gravitation, but is still the cornerstone of every cosmological theory that has been developed and disseminated since.
It needs to be pointed out that Einstein’s ‘annus mirabilis’ (miraculous year), as it’s been called, occurred 10 years earlier in 1905, when he published 3 groundbreaking papers that elevated him from a patent clerk in Bern to a candidate for the Nobel Prize (eventually realised of course). The 3 papers were his Special Theory of Relativity, his explanation of the photo-electric effect using the then-new concept of light quanta (later named photons), and a statistical analysis of Brownian motion, which effectively proved that molecules made of atoms really exist and were not just a convenient theoretical concept.
Given the anniversary, it seemed appropriate that I should write something on the topic, despite my limited knowledge and despite the plethora of books that have been published to recognise the feat. The best I’ve read is The Road to Relativity: The History and Meaning of Einstein’s “The Foundation of General Relativity” (the original title of his paper) by Hanoch Gutfreund and Jurgen Renn. They have managed to include an annotated copy of Einstein’s original handwritten manuscript with a page-by-page exposition. But more than that, they take us on Einstein’s mental journey and, in particular, how he found the mathematical language to portray the intuitive ideas in his head and yet work within the constraints he believed were necessary for it to work.
The constraints were not inconsiderable and include: the equivalence of inertial and gravitational mass; the conservation of energy and momentum under transformation between frames of reference both in rotational and linear motion; and the ability to reduce his theory mathematically to Newton’s theory when relativistic effects were negligible.
Einstein’s epiphany, which led him down the particular path he took, was the realisation that one experiences no force when one is in free fall, contrary to Newton’s theory and contrary to our belief that gravity is a force. Free fall subjectively feels no different to being in orbit around a planet. The aptly named ‘vomit comet’ is an aeroplane that goes into free fall in order to create the momentary sense of weightlessness that one would experience in space.
Einstein learnt from his study of Maxwell’s equations for electromagnetic radiation that mathematics could sometimes provide a counter-intuitive insight, like the constant speed of light.
In fact, Einstein had to learn new mathematics (for him) and engaged the help of his close friend, Marcel Grossman, who led him through the technical travails of tensor calculus using Riemannian geometry. It would seem, from what I can understand of his mental journey, that it was the mathematics, as much as any other insight, that led Einstein to realise that space-time is curved and not Euclidean as we all generally believe. To quote Gutfreund and Renn:
[Einstein] realised that the four-dimensional spacetime of general relativity no longer fitted the framework of Euclidean geometry… The geometrization of general relativity and the understanding of gravity as being due to the curvature of spacetime is a result of the further development and not a presupposition of Einstein’s formulation of the theory.
By Euclidean, one means space is flat and light travels in perfectly straight lines. One of the confirmations of Einstein’s theory was that he predicted that light passing close to the Sun would be literally bent, so that a star in the background would appear to shift as the Sun approached the same line of sight for an observer on Earth as for the star. This could only be seen during an eclipse and was duly observed by Arthur Eddington in 1919 on the island of Príncipe, off the west coast of Africa.
Einstein’s formulations led him to postulate that it’s the geometry of space that gives us gravity and the geometry, which is curved, is caused by massive objects. In other words, it’s mass that curves space and it’s the curvature of space that causes mass to move, as John Wheeler famously and succinctly expounded.
It may sound back-to-front, but, for me, Einstein’s Special Theory of Relativity only makes sense in the context of his General Theory, even though they were formulated in the reverse order. To understand what I’m talking about, I need to explain geodesics.
When you fly long distance on a plane, the path projected onto a flat map looks curved. You may have noticed this when they show the path on a screen in the cabin while you’re in flight. The point is that when you fly long distance you are travelling over a curved surface, because, obviously, the Earth is a sphere, and the shortest distance between 2 points (cities) lies on what’s called a great circle. A great circle is the largest possible circle on the sphere that passes through both points. Now, I know that sounds paradoxical, but the largest circle provides the shortest distance over the surface (we are not talking about tunnels) that one can travel, and because there is only one such circle, there is one shortest path. This shortest path is called the geodesic that connects those 2 points.
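To make this concrete, here is a short Python sketch, using the standard haversine formula, that computes the great-circle distance between two cities. The coordinates and mean Earth radius are illustrative values I’ve supplied, not anything from the text above:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Shortest surface distance between two points on a sphere (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Sydney to Los Angeles: the geodesic is roughly 12,000 km,
# even though the route looks curved when drawn on a flat map.
print(round(great_circle_km(-33.87, 151.21, 34.05, -118.24)))
```

The same shortest-path idea carries over to spacetime, which is where the geodesic concept does its real work below.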
A geodesic in gravitation is the extremal path in spacetime between 2 points (the spacetime analogue of the shortest distance), and that is what one follows when one is in free fall. At the risk of information overload, I’m going to introduce another concept which is essential for understanding the physics of a geodesic in gravity.
One of the most fundamental principles discovered in physics is the principle of least action (formulated mathematically as a Lagrangian, which is the difference between kinetic and potential energy). The most commonly experienced example would be refraction of light through glass or water, because light travels at different velocities in air, water and glass (slower through glass or water than air). The extremely gifted 17th Century amateur mathematician, Pierre de Fermat (actually a lawyer), conjectured that light travels the path of least time, not necessarily the shortest distance, and the refractive index (Snell’s law) can be deduced mathematically from this principle. In the 20th Century, Richard Feynman developed his path integral method of quantum mechanics from the least action principle, and, in effect, confirmed Fermat’s principle.
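Fermat’s principle can be checked numerically: pick the crossing point on the interface that minimises total travel time and verify that Snell’s law falls out. The geometry and refractive indices below are my own illustrative choices, and the grid search is a sketch rather than a serious minimiser:

```python
import math

N1, N2 = 1.0, 1.5             # refractive indices: air above the interface, glass below
X2, Y1, Y2 = 1.0, 1.0, -1.0   # source at (0, 1), destination at (1, -1), interface at y = 0

def travel_time(x):
    """Optical path time (in units of 1/c) for light crossing the interface at (x, 0)."""
    return N1 * math.hypot(x, Y1) + N2 * math.hypot(X2 - x, Y2)

# Minimise travel time by a simple grid search over the interface
x_best = min((i / 100000 for i in range(100001)), key=travel_time)

# The least-time crossing point satisfies Snell's law: n1 sin(a1) = n2 sin(a2)
sin1 = x_best / math.hypot(x_best, Y1)
sin2 = (X2 - x_best) / math.hypot(X2 - x_best, Y2)
print(abs(N1 * sin1 - N2 * sin2) < 1e-3)  # True
```

The bent path through the glass really is the fastest route, which is exactly the counter-intuitive flavour the least-action principle has in mechanics and, below, in gravity.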
Now, when one applies the principle of least action to a projectile in a gravitational field (like a thrown ball) one finds that it too follows the extremal path, but paradoxically this is the path of longest relativistic time (not unlike the paradox of the largest circle described earlier).
Richard Feynman gives a worked example in his excellent book, Six Not-So-Easy Pieces. In relativity, time can be subjective, so that a moving clock always appears to be running slow compared to a stationary clock, but, because motion is relative, the perception is reversed for the other clock. However, as Feynman points out:
The time measured by a moving clock is called its “proper time”. In free fall, the trajectory makes the proper time of an object a maximum.
In other words, the geodesic is the trajectory or path of longest relativistic time. Any deviation from the geodesic will result in the clock’s proper time being shorter, which means time literally slows down. So special relativity is not symmetrical in a gravitational field, and there is a gravitational field everywhere in space. As Gutfreund and Renn point out, Einstein himself acknowledged that he had effectively replaced the fictional aether with gravity.
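Feynman’s claim can be checked numerically in the weak-field approximation, where the correction to proper time along a vertical trajectory h(t) is approximately the integral of (gh/c² − v²/2c²) dt. The comparison paths below are my own illustrative choices; the free-fall parabola accumulates more proper time than either alternative with the same endpoints:

```python
# Weak-field check of Feynman's claim: among trajectories h(t) with the same
# endpoints, free fall maximises the accumulated proper time.
g, c, T, N = 9.8, 3.0e8, 2.0, 20000   # illustrative values: a 2-second vertical toss

def proper_time_gain(h):
    """Midpoint-rule integral of the proper-time correction for height profile h(t)."""
    dt = T / N
    total = 0.0
    for i in range(N):
        t = (i + 0.5) * dt
        v = (h(t + dt / 2) - h(t - dt / 2)) / dt          # numerical velocity
        total += (g * h(t) / c**2 - v**2 / (2 * c**2)) * dt
    return total

free_fall = lambda t: 0.5 * g * t * (T - t)           # ballistic arc, h(0) = h(T) = 0
stay_put = lambda t: 0.0                              # remain on the ground
zig_zag = lambda t: g * (t if t < T / 2 else T - t)   # straight up, straight down

print(proper_time_gain(free_fall) > proper_time_gain(stay_put))  # True
print(proper_time_gain(free_fall) > proper_time_gain(zig_zag))   # True
```

The gravitational term rewards spending time higher up (where clocks run faster), the velocity term penalises moving, and the ballistic arc is the optimal trade-off between the two.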
This is most apparent when one considers a black hole. Every massive body has an escape velocity, which is the velocity a projectile must achieve to become free of the body’s gravitational field. Obviously, the escape velocity for Earth is larger than the escape velocity for the moon and considerably less than the escape velocity of the Sun. Not so obvious, although logical from what we know, the escape velocity is independent of the projectile’s mass and therefore also applies to light (photons). We know that all bodies fall at exactly the same rate in a gravitational field. In other words, a geodesic applies equally to all bodies irrespective of their mass. In the case of a black hole, the escape velocity exceeds the speed of light, and, in fact, becomes the speed of light at its event horizon. At the event horizon time stops for an external observer because the light is red-shifted to infinity. One of the consequences of Einstein’s theory is that clocks travel slower in a stronger gravitational field, and, at the event horizon, gravity is so strong the clock stops.
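The escape-velocity formula, v = √(2GM/r), contains no projectile mass, which is the point made above; and setting that velocity equal to c and solving for r gives the Schwarzschild radius of the event horizon. A quick sketch using standard astronomical values (the figures in the comments are what these particular constants yield):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def escape_velocity(mass_kg, radius_m):
    """v = sqrt(2GM/r): note it is independent of the projectile's mass."""
    return math.sqrt(2 * G * mass_kg / radius_m)

def schwarzschild_radius(mass_kg):
    """Radius at which the escape velocity reaches c: r = 2GM/c^2."""
    return 2 * G * mass_kg / c**2

print(f"Earth: {escape_velocity(5.972e24, 6.371e6) / 1000:.1f} km/s")   # ~11.2
print(f"Moon:  {escape_velocity(7.342e22, 1.737e6) / 1000:.1f} km/s")   # ~2.4
print(f"Sun:   {escape_velocity(1.989e30, 6.963e8) / 1000:.0f} km/s")   # ~617
print(f"Sun as a black hole: r = {schwarzschild_radius(1.989e30) / 1000:.1f} km")
```

In other words, if the Sun’s mass were squeezed inside roughly 3 km, not even light could climb out.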
To appreciate why clocks slow down and rods become shorter (in the direction of motion) with respect to an observer, one must understand the consequences of the speed of light being constant. If light is a wave then the equation for a wave is very fundamental:
v = f λ , where v is velocity, f is the frequency and λ is the wavelength.
In the case of light the equation becomes c = f λ , where c is the speed of light.
One can see that if c stays constant then f and λ can change to accommodate it. Frequency measures time and wavelength measures distance. One can see how frequency can become stretched or compressed by motion if c remains constant, depending on whether an observer is travelling away from a source of radiation or towards it. This is called the Doppler effect, and on a cosmic scale it tells us that the Universe is expanding, because virtually all galaxies in all directions are travelling away from us. If a geodesic is the path of maximum proper time, we have a reference for determining relativistic effects, and we can use the Doppler effect to determine if a light source is moving relative to an observer, even though the speed of light is always c.
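Since c = fλ must hold for every observer, relative motion shows up as a stretch or compression of the wavelength, with a matching change in frequency. A small sketch of the relativistic Doppler formula, using the hydrogen-alpha spectral line as an illustrative source (a recession speed of 10% of c is my own example, far larger than typical nearby galaxies):

```python
import math

c = 2.998e8  # speed of light, m/s

def doppler_shifted(wavelength_nm, v):
    """Relativistic Doppler shift; v > 0 means the source is receding (redshift)."""
    return wavelength_nm * math.sqrt((1 + v / c) / (1 - v / c))

# Hydrogen-alpha (656.3 nm) from a source receding at 10% of c
# is stretched toward the red end of the spectrum:
print(round(doppler_shifted(656.3, 0.1 * c), 1))  # 725.6
```

The observed wavelength is stretched and the frequency compressed by the same factor, so their product remains exactly c, which is the constraint the whole argument rests on.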
I won’t go into it here, but the famous twin paradox can be explained by taking into account both relativistic and Doppler effects for both parties – the one travelling and the one left at home.
It needs to be pointed out that Einstein’s ‘annus mirabilis’ (miraculous year), as it’s been called, occurred 10 years earlier in 1905, when he published 3 groundbreaking papers that elevated him from a patent clerk in Bern to a candidate for the Nobel Prize (eventually realised of course). The 3 papers were his Special Theory of Relativity, his explanation of the photo-electric effect using the newly coined concept, photon of light, and a statistical analysis of Brownian motion, which effectively proved that molecules made of atoms really exist and were not just a convenient theoretical concept.
Given the anniversary, it seemed appropriate that I should write something on the topic, despite my limited knowledge and despite the plethora of books that have been published to recognise the feat. The best I’ve read is The Road to Relativity; The History and Meaning of Einstein’s “The Foundation of General Relativity” (the original title of his paper) by Hanoch Gutfreund and Jurgen Renn. They have managed to include an annotated copy of Einstein’s original handwritten manuscript with a page by page exposition. But more than that, they take us on Einstein’s mental journey and, in particular, how he found the mathematical language to portray the intuitive ideas in his head and yet work within the constraints he believed were necessary for it to work.
The constraints were not inconsiderable and include: the equivalence of inertial and gravitational mass; the conservation of energy and momentum under transformation between frames of reference both in rotational and linear motion; and the ability to reduce his theory mathematically to Newton’s theory when relativistic effects were negligible.
Einstein’s epiphany, that led him down the particular path he took, was the realisation that one experienced no force when one was in free fall, contrary to Newton’s theory and contrary to our belief that gravity is a force. Free fall subjectively feels no different to being in orbit around a planet. The aptly named ‘vomit comet’ is an aeroplane that goes into free fall in order to create the momentary sense of weightlessness that one would experience in space.
Einstein learnt from his study of Maxwell’s equations for electromagnetic radiation that mathematics could sometimes provide a counter-intuitive insight, like the constant speed of light.
In fact, Einstein had to learn new mathematics (for him) and engaged the help of his close friend, Marcel Grossmann, who led him through the technical travails of tensor calculus and Riemannian geometry. It would seem, from what I can understand of his mental journey, that it was the mathematics, as much as any other insight, that led Einstein to realise that space-time is curved and not Euclidean as we all generally believe. To quote Gutfreund and Renn:
[Einstein] realised that the four-dimensional spacetime of general relativity no longer fitted the framework of Euclidean geometry… The geometrization of general relativity and the understanding of gravity as being due to the curvature of spacetime is a result of the further development and not a presupposition of Einstein’s formulation of the theory.
By Euclidean, one means space is flat and light travels in perfectly straight lines. One of the confirmations of Einstein’s theory was his prediction that light passing close to the Sun would be literally bent, so that a background star would appear to shift position when the Sun came close to its line of sight as seen from Earth. This could only be observed during a solar eclipse, and was duly confirmed by Arthur Eddington in 1919 on the island of Principe, off the west coast of Africa.
Einstein’s formulations led him to postulate that it’s the geometry of space that gives us gravity, and the geometry, which is curved, is caused by massive objects. In other words, mass tells space how to curve, and the curvature of space tells mass how to move, as John Wheeler famously and succinctly expounded.
It may sound back-to-front, but, for me, Einstein’s Special Theory of Relativity only makes sense in the context of his General Theory, even though they were formulated in the reverse order. To understand what I’m talking about, I need to explain geodesics.
When you fly long distance on a plane, the path projected onto a flat map looks curved. You may have noticed this when they show the path on a screen in the cabin while you’re in flight. The point is that when you fly long distance you are travelling over a curved surface, because, obviously, the Earth is a sphere, and the shortest distance between 2 points (cities) lies on what’s called a great circle. A great circle is the unique circle through both points whose centre coincides with the centre of the Earth, which makes it the largest circle possible. Now, I know that sounds paradoxical, but the largest circle provides the shortest distance over the surface (we are not talking about tunnels), and because there is only one such circle, there is one shortest path. This shortest path is called the geodesic that connects those 2 points.
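To make the idea concrete, here is a small Python sketch (the function name is my own, and the coordinates and Earth radius are approximate) that computes the length of the geodesic between two cities using the standard haversine formula:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Distance along the geodesic (great circle) between two points
    on a sphere, via the haversine formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# London to Sydney: roughly 17,000 km along the great circle, far
# shorter than the path that looks 'straight' on a flat map projection.
print(round(great_circle_km(51.5, -0.13, -33.87, 151.21)))
```

On an airline’s flat cabin map the great-circle route over, say, Asia looks like a detour; on the globe it is the shortest surface path.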
A geodesic in gravitation is the analogous path through spacetime between 2 points, and it is the path one follows when one is in free fall. At the risk of information overload, I’m going to introduce another concept which is essential for understanding the physics of a geodesic in gravity.
One of the most fundamental principles discovered in physics is the principle of least action (formulated mathematically as a Lagrangian, which is the difference between kinetic and potential energy). The most commonly experienced example would be refraction of light through glass or water, because light travels at different velocities in air, water and glass (slower through glass or water than air). The extremely gifted 17th Century amateur mathematician, Pierre de Fermat (actually a lawyer), conjectured that light travels the path of least time, and the law of refraction (Snell’s law) can be deduced mathematically from this principle. In the 20th Century, Richard Feynman developed his path integral formulation of quantum mechanics from the least action principle and, in effect, confirmed Fermat’s principle.
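One can actually watch Snell’s law fall out of Fermat’s principle numerically. The following Python sketch (geometry and refractive indices are my own illustrative choices) minimises the travel time of a ray crossing an air–water surface by brute-force search, then checks that the minimising path satisfies n₁ sin θ₁ = n₂ sin θ₂:

```python
import math

# Light travels from (0, 1) in air to (1, -1) in water, crossing the
# surface y = 0 at some point (x, 0).  Fermat: the actual path is the
# one of least travel time.
n_air, n_water = 1.0, 1.33           # refractive indices (speed = c/n)

def travel_time(x):
    d1 = math.hypot(x, 1.0)          # path length in air
    d2 = math.hypot(1.0 - x, 1.0)    # path length in water
    return n_air * d1 + n_water * d2 # time, in units of 1/c

# Brute-force scan for the crossing point of least time.
x_best = min((i / 100000 for i in range(100001)), key=travel_time)

# Snell's law should emerge: n1*sin(theta1) = n2*sin(theta2),
# with the angles measured from the normal to the surface.
sin1 = x_best / math.hypot(x_best, 1.0)
sin2 = (1.0 - x_best) / math.hypot(1.0 - x_best, 1.0)
print(n_air * sin1, n_water * sin2)  # the two sides agree
```

Nothing in the code knows about Snell’s law; it emerges purely from minimising the travel time, which is the point Fermat was making.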
Now, when one applies the principle of least action to a projectile in a gravitational field (like a thrown ball), one finds that it too follows a geodesic, and paradoxically this is the path of longest relativistic time (not unlike the paradox of the largest circle described earlier).
Richard Feynman gives a worked example in his excellent book, Six Not-So-Easy Pieces. In relativity, time can be subjective, so that a moving clock always appears to be running slow compared to a stationary clock, but, because motion is relative, the perception is reversed for the other clock. However, as Feynman points out:
The time measured by a moving clock is called its “proper time”. In free fall, the trajectory makes the proper time of an object a maximum.
In other words, the geodesic is the trajectory or path of longest relativistic time. Any deviation from the geodesic will result in the clock’s proper time being shorter, which means time literally slows down. So special relativity is not symmetrical in a gravitational field, and there is a gravitational field everywhere in space. As Gutfreund and Renn point out, Einstein himself acknowledged that he had effectively replaced the fictional aether with gravity.
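This claim can be checked numerically. The following Python sketch (my own construction) uses the weak-field approximation, in which a clock on trajectory y(t) with speed v(t) gains proper time over a clock at rest at y = 0 at the rate (g·y − v²/2)/c²: higher clocks run fast, moving clocks run slow. Among paths sharing the same start and end points, the free-fall parabola should give the greatest proper-time excess:

```python
import math

# Weak-field proper-time excess, integrated along a path y(t), v(t):
#     delta_tau ~ integral of (g*y - v^2/2) / c^2 dt
g, T, N = 9.8, 2.0, 20_000   # gravity (m/s^2), flight time (s), steps

def excess(y, v):
    """Integrate (g*y - v^2/2) dt over [0, T] by the midpoint rule.
    (The common 1/c^2 factor is dropped: only the ordering matters.)"""
    dt = T / N
    return sum((g * y((i + 0.5) * dt) - v((i + 0.5) * dt) ** 2 / 2) * dt
               for i in range(N))

# Three paths from (t=0, y=0) to (t=T, y=0):
parabola = excess(lambda t: 0.5 * g * t * (T - t),         # free fall
                  lambda t: 0.5 * g * (T - 2 * t))
sinusoid = excess(lambda t: g * math.sin(math.pi * t / T), # not free fall
                  lambda t: g * math.pi / T * math.cos(math.pi * t / T))
at_rest  = excess(lambda t: 0.0, lambda t: 0.0)

print(parabola > sinusoid > at_rest)  # free fall maximises proper time
```

The thrown ball trades some speed (which slows its clock) for extra height (which speeds its clock up), and the free-fall parabola strikes exactly the balance that maximises the total proper time.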
This is most apparent when one considers a black hole. Every massive body has an escape velocity, which is the velocity a projectile must achieve to become free of the body’s gravitational field. Obviously, the escape velocity for Earth is larger than that for the Moon and considerably less than that for the Sun. Not so obvious, though logical from what we know, is that the escape velocity is independent of the projectile’s mass and therefore also applies to light (photons). We know that all bodies fall at exactly the same rate in a gravitational field; in other words, a geodesic applies equally to all bodies irrespective of their mass. In the case of a black hole, the escape velocity exceeds the speed of light, and, in fact, equals the speed of light at its event horizon. At the event horizon time stops for an external observer, because the light is red-shifted to infinity. One of the consequences of Einstein’s theory is that clocks run slower in a stronger gravitational field, and, at the event horizon, gravity is so strong the clock stops.
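The Newtonian escape-velocity formula, v = √(2GM/r), makes both points explicit: the projectile’s mass cancels out entirely, and setting v = c gives the Schwarzschild radius r = 2GM/c², the event horizon of a black hole. A short Python sketch (function names are mine; masses and radii are rounded published values):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def escape_velocity(mass_kg, radius_m):
    """v_esc = sqrt(2GM/r): note it is independent of the projectile's mass."""
    return math.sqrt(2 * G * mass_kg / radius_m)

def schwarzschild_radius(mass_kg):
    """Radius at which the escape velocity reaches c: r_s = 2GM/c^2."""
    return 2 * G * mass_kg / c ** 2

print(escape_velocity(5.972e24, 6.371e6))  # Earth: ~11.2 km/s
print(escape_velocity(7.342e22, 1.737e6))  # Moon:  ~2.4 km/s
print(escape_velocity(1.989e30, 6.957e8))  # Sun:   ~618 km/s
print(schwarzschild_radius(1.989e30))      # Sun's mass as a black hole: ~3 km
```

The last line says that if the Sun’s mass were squeezed inside a radius of about 3 km, not even light could escape it.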
To appreciate why clocks slow down and rods become shorter (in the direction of motion), with respect to an observer, one must understand the consequences of the speed of light being constant. Light is a wave, and the equation for any wave is very fundamental:
v = f λ , where v is velocity, f is the frequency and λ is the wavelength.
In the case of light the equation becomes c = f λ , where c is the speed of light.
One can see that if c stays constant then f and λ must change to accommodate it. Frequency measures time and wavelength measures distance. One can see how the wavelength can become stretched or compressed by motion if c remains constant, depending on whether an observer is travelling away from a source of radiation or towards it. This is called the Doppler effect, and on a cosmic scale it tells us that the Universe is expanding, because virtually all galaxies in all directions are travelling away from us. If a geodesic is the path of maximum proper time, we have a reference for determining relativistic effects, and we can use the Doppler effect to determine if a light source is moving relative to an observer, even though the speed of light is always c.
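A short Python sketch makes the trade-off visible, using the standard relativistic Doppler formula for a receding source (the hydrogen-alpha example and the function name are my own illustrative choices): the frequency drops and the wavelength stretches, yet their product remains c.

```python
import math

c = 2.998e8   # speed of light, m/s

def doppler_recession(f_source, v):
    """Relativistic Doppler shift for a source receding at speed v:
    f_obs = f_source * sqrt((1 - beta) / (1 + beta)), beta = v/c."""
    beta = v / c
    f_obs = f_source * math.sqrt((1 - beta) / (1 + beta))
    lam_obs = c / f_obs
    return f_obs, lam_obs

# Hydrogen-alpha line (656.3 nm in the lab) from a galaxy receding at 0.1c:
f_lab = c / 656.3e-9
f_obs, lam_obs = doppler_recession(f_lab, 0.1 * c)
print(lam_obs * 1e9)                     # observed wavelength in nm: red-shifted
print(math.isclose(f_obs * lam_obs, c))  # c = f * lambda still holds
```

This is exactly the shift astronomers measure in galactic spectra: the spectral lines slide toward the red while c stays fixed, which is how we know the galaxies are receding.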
I won’t go into it here, but the famous twin paradox can be explained by taking into account both relativistic and Doppler effects for both parties – the one travelling and the one left at home.
This is an exposition I wrote on the twin paradox.