Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Monday, 14 March 2016

Argument, reason and belief

Recently in New Scientist (5 March 2016) there was a review of a book, The Persuaders: The Hidden Industry that Wants to Change Your Mind by James Garvey (which I must read), that tackles the issue of why argument by reason so often fails. I’ve experienced this first hand on this blog, which has led me to the conclusion that you can’t explain something to someone who doesn’t want it explained. The book referenced above is apparently more about how propaganda works, but that is not what I wish to discuss.

In the same vein, I’ve recently watched a number of YouTube videos covering excerpts from debates and interviews with scientists on the apparent conflict between science and religion. I say apparent because not all scientists are atheists and not all theologians are creationists, yet that is the impression one often gets from watching these.

The scientists I’ve been watching include Richard Dawkins, Neil deGrasse Tyson, Bill Nye (aka the science guy) and Michio Kaku. I think Tyson presents the best arguments (from the small sample I watched) but Michio Kaku comes closest to presenting my own philosophical point of view. In one debate (see links at bottom) he says the ‘God question’ is ‘undecidable’ and predicts that, unlike many scientific questions of today, the God question will be no further advanced in 100 years’ time than it is in the current debate.

The issue, from my perspective, is that science and religion deal with completely different things. Science is an epistemology – it’s a study of the natural world in all its manifestations. Religion is something deeply personal and totally subjective, and that includes God. God is something people find inside themselves which is why said God has so many different personalities and prejudices depending on who the believer is. I’ve argued this before, so I won’t repeat myself.

At least 2 of the scientists I reference above (Dawkins and Tyson) point out something I’ve said myself: once you bring God into epistemology to explain some phenomenon that science can’t currently explain, you are saying we have come to the end of science. History has revealed many times over that something that was inexplicable in the past becomes explicable in the future. As I’ve said more than once on this blog: only people from the future can tell us how ignorant we are in the present. Tyson made the point that the appositely titled God-of-the-Gaps is actually a representation of our ignorance – a point I’ve made myself.

This does not mean that God does not exist; it means that God can’t help us with our science. People who argue that science can be replaced with Scripture are effectively arguing that science should be replaced by ignorance. The Old Testament was written by people who wanted to tell a story of their origins and it evolved into a text to scare people into believing that they are born intrinsically evil. At least that’s how it was presented to me as a child.

Of all the videos I watched, the most telling was an excerpt from a debate between Bill Nye (the science guy) and Ken Ham (the architect of the Creation Museum in Petersburg, Kentucky). Ham effectively argued that science can only be done in the present. So-called historical science, giving the age of the Earth or the Universe, or its origins, cannot be determined using the same methods we use for current scientific investigations. When asked if any evidence could change his beliefs, he said there was no such evidence and the Bible was the sole criterion for his beliefs.

And this segues back into my introduction: you cannot explain something to someone who doesn’t want it explained. When I argue with someone or even present an argument on this blog, I don’t expect to change people’s points of view to mine; I expect to make them think. Hence the little aphorism in the blog’s title block.

One of the points made in the New Scientist review, referenced in my opening, is that people rarely if ever change their point of view even when presented with indisputable evidence or a proof. This is true even among scientists. We all try to hang on to our pet theories for as long as possible until they are no longer tenable. I’m as guilty of this as anyone, even though I’m not a scientist.

One of the things that helps perpetuate our stubbornness is confirmation bias (mentioned by New Scientist) whereby we tend to only read or listen to people whom we agree with. We do this with politics all the time. But I have read contrary points of view, usually given to me by people who think I’m biased. I’ve even read C.S. Lewis. What I find myself doing in these instances is arguing in my head with the authors. To give another example, I once read a book by Colin McGinn (Basic Structures of Reality) that only affirmed for me that people who don’t understand science shouldn’t write books about it, yet I still read it and even wrote a review of it on Amazon UK.

There is a thing called philosophy and it’s been married to science for many centuries. Despite what some people claim (Richard Dawkins and Stephen Hawking to mention 2) I can’t see a divorce any time soon. To use a visual metaphor, our knowledge is like an island surrounded by a sea of unsolved mysteries. The island keeps expanding but the sea is infinite. The island is science and the shoreline is philosophy. To extend the metaphor, our pet theories reside on the beach.

Noson Yanofsky, a Professor of Computer and Information Science in New York, wrote an excellent book called The Outer Limits of Reason, in which he explains how we will never know everything – it’s cognitively and physically impossible. History has demonstrated that every generation believes that we almost know everything that science can reveal, yet every revelation only reveals new mysteries.

This is a video of Michio Kaku and Richard Dawkins, amongst others, giving their views on science, God and religion.

This is a short video of Leonard Susskind explaining 2 types of agnosticism, one of which he seems to concur with.

Saturday, 27 February 2016

In Nature, paradox is the norm, not the exception

I’ve just read Marcus Chown’s outstanding book for people wanting their science served without equations, Quantum Theory Cannot Hurt You. As the title suggests, half the book covers QM and half the book covers relativity. Chown is a regular contributor to New Scientist, and this book reflects his journalistic ease at discussing esoteric topics in physics. He says, right at the beginning, that he brings his own interpretation to these topics but it’s an erudite and well informed one.

Nowhere is Nature’s paradoxical nature more apparent than in the constant speed of light, which was predicted by Maxwell’s equations, not by empirical evidence. Of course this paradox was resolved by Einstein’s theories of relativity; both of them (the special theory and the general theory). Other paradoxes that seem built into the Universe are not so easily resolved, but I will come to them.

As Chown explicates, the constant speed of light has the same psychological effect as if it were infinite, and the Lorentz factor, which is the mathematical device used to describe relativistic effects, tends to infinity at its limit (the limit being the speed of light). If one could travel at the speed of light, a light beam would appear stationary and time would literally stand still. In fact, this is what Einstein imagined in one of his famous thought experiments that led him to his epiphanic theory.

The paradox is easily demonstrated if one imagines a spacecraft travelling at very high speed, measured as a fraction of the speed of light. This craft transmits a laser both in front of it and behind it. Intuition tells us that an observer ahead of the craft (say on Earth, with the craft approaching) receives the signal at the speed of light plus the fraction at which the craft is travelling relative to Earth. On the other hand, if the spacecraft were travelling away from Earth at the same relative speed, one would expect to measure the laser as the speed of light minus the relative speed of the craft. However, contrary to intuition, the speed of light is exactly the same in both cases, which is also what anyone on the spacecraft itself measures. The paradox is resolved by Einstein’s theory of special relativity, which tells us that whilst the speed of light is constant for both observers (one on the spacecraft and one on Earth), their measurements of time and length will not be the same, which is entirely counter-intuitive. This is not only revealed in the mathematics but has been demonstrated by comparing clocks on real spacecraft with clocks on Earth. In fact, the Sat-Nav you use in your car or on your phone takes relativistic effects into account to give you the accuracy you’ve become accustomed to. (Refer Addendum 2 below, which explains the role of the Doppler effect in determining moving light sources.)
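The counter-intuitive arithmetic can be sketched numerically. Below is a minimal Python sketch of my own (the function names are invented for illustration) using the standard relativistic velocity-addition formula and the Lorentz factor: adding any speed to the speed of light still yields the speed of light, and time dilation grows without bound as the speed of light is approached.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def add_velocities(u, v):
    """Relativistic velocity addition: (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / C**2)

def gamma(v):
    """Lorentz factor; diverges as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C)**2)

# A craft at 0.5c fires a laser (speed c) forward: the Earth
# observer still measures the beam at exactly c, not 1.5c.
beam = add_velocities(0.5 * C, C)
print(math.isclose(beam, C))  # True

# The Lorentz factor grows without bound near c.
print(gamma(0.5 * C))   # ≈ 1.155
print(gamma(0.99 * C))  # ≈ 7.089
```

Note that for everyday speeds the correction term u*v/c² is vanishingly small, which is why intuition expects plain addition.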

However, there are other paradoxes associated with relativity that have not been resolved, including time itself. Chown touches on this and so did I, not so long ago, in a post titled, What is now? According to relativity, there is no objective now, and Chown goes so far as to say: ‘”now” is a fictitious concept’ (quotation marks in the original). He quotes Einstein: “For us physicists, the distinction between past, present and future is only an illusion.” And Chown calls it ‘one of [Nature’s] great unsolved mysteries.’

Reading this, one may consider that Nature’s paradoxes are simply a consequence of the contradiction between our subjective perceptions and the reality that physics reveals. However, there are paradoxes within the physics itself. For example, we give an age to the Universe which does suggest that there is a universal “now”, and quantum entanglement (which Chown discusses separately) implies that simultaneity can occur over any distance in the Universe.

Quantum mechanics, of course, is so paradoxical that no one can agree on what it really means. Do we live in a multiverse, where every possibility predicted mathematically by QM exists, of which we experience only one? Or do things only exist when they are ‘observed’? Or is there a ‘hidden reality’ with which the so-called real ‘classical’ world interacts? I discussed this quite recently, so I will keep this discussion brief. If there is a multiverse (which many claim is the only ‘logical’ explanation) then its worlds interfere with each other (as Chown points out) and some even cancel each other out completely, for every single quantum event. But another paradox, which goes to the heart of modern physics, is that quantum theory and Einstein’s general theory of relativity cannot be reconciled in their current forms. As Chown points out, String Theory is seen as the best bet, but it requires 10 dimensions, of which all but 3 cannot be detected with current technology.

Now I’m going to talk about something completely different, which everyone experiences, but which is also a paradox when analysed scientifically. I’m referring to free will, and like many of the topics I’ve touched on above, I discussed this recently as well. The latest issue of Philosophy Now (Issue 112, February / March 2016) has ‘Free Will’ as its theme. There is a very good editorial by Grant Bartley, who discusses on one page all the various schools of thought on this issue. He makes a point that I’ve made many times myself: ‘Why would consciousness evolve if it didn’t do anything?’ He also makes this statement: ‘So if there is free will, then there must be some way for a mind to direct the state of its brain.’ However, all the science tells us that the ‘mind’ is completely dependent on the ‘state of its brain’, so the reverse effect must be an illusion.

This interpretation would be consistent with the notion I mooted earlier that paradoxes are simply the consequence of our subjective experience contradicting the physical reality. However, as I pointed out in my above-referenced post, there are examples of the mind affecting states of the brain. In New Scientist (13 February 2016) Anil Ananthaswamy reviews Eliezer Sternberg’s Neurologic: The brain’s hidden rationale behind our irrational behaviour (which I haven’t read). According to Ananthaswamy, Sternberg discusses in depth the roles of the conscious and subconscious and concludes that the unconscious ‘can get things wrong’. He then asks the question: ‘Can the conscious right some of these wrongs? Can it influence the unconscious? Yes, says Sternberg.’ He gives the example of British athlete, Steve Backley ‘imagining the perfect [javelin] throw over and over again’ even though a sprained ankle stopped him from practicing, and he won Silver in the 1996 Atlanta Olympics.

My point is that paradoxes are a regular feature of the Universe at many levels, from quantum mechanics to time to consciousness. In fact, consciousness is arguably the least understood phenomenon in the entire Universe, yet, without it, the Universe’s existence would be truly meaningless. Consciousness is subjectivity incarnate yet we attempt to explain it with complete objectivity. Does that make it a paradox or an illusion?


Addendum 1: Since writing this post, I came across this video of John Searle discussing the paradox of free will. He introduces the subject by saying that no progress has been made on this topic in the last 100 years. Unlike my argument, he discusses the apparent contradiction between free will and cause and effect.

Addendum 2: It should be pointed out that the Doppler effect allows an observer to know if a light source is moving towards them or away from them. In other words, there is a change in frequency even though there isn't a change in velocity (of the light). It's for this reason that we know the Universe is expanding, with galaxies moving away from us.
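To make the addendum concrete, here is a small sketch of my own (not from Chown's book) of the relativistic longitudinal Doppler formula: the observed frequency shifts with the source's motion even though the measured speed of the light itself never changes.

```python
import math

def doppler_factor(beta):
    """Relativistic longitudinal Doppler factor for a light source
    receding at speed beta = v/c (use beta < 0 for approaching).
    Observed frequency = emitted frequency * this factor; the
    light's speed itself is unchanged."""
    return math.sqrt((1.0 - beta) / (1.0 + beta))

print(doppler_factor(0.1))   # < 1: receding source, redshifted
print(doppler_factor(-0.1))  # > 1: approaching source, blueshifted
```

It is this redshift, applied to galactic spectra, that underpins the conclusion that distant galaxies are receding.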

Tuesday, 2 February 2016

Creation Science: a non sequitur

A friend of mine – someone whom I’d go to for help – lent me a ‘Creation’ magazine to prove that there are creationists who are real scientists. And, I have to admit, my friend was right: the magazine was full of contributors who had degrees in science, including one who has a PhD and honours and works at a run-of-the-mill university; but who wrote the following howler: ‘Cosmology is unscientific because you can’t do an experiment in cosmology.’ I wonder if said writer would be willing to say that to Australian Nobel Prize winner, Brian Schmidt. Only humans can be living contradictions.

Creation science is an epistemological contradiction – there’s no such thing – by definition. Science does not include magic – I can’t imagine anyone who would disagree with that, but I might be wrong. Replacing a scientific theory with supernaturally enhanced magic is anti-science, yet creationists call it science – as the Americans like to say: go figure.

The magazine was enlightening in that the sole criterion for these ‘scientists’ as to the validity of any scientific knowledge was whether or not it agreed with the Bible. If this were applied literally, we would still believe that the Sun goes round the Earth, rather than the other way round. After all, the Book of Joshua tells us how God stopped the Sun moving in the sky. It doesn’t say that God stopped the Earth spinning, which is what he would have had to do.

One contributor to the magazine even allows for ‘evolution’ after ‘creation’, because God programmed ‘subroutines’ into DNA, but was quick to point out that this does ‘not contradict the Bible’. Interesting to note that DNA wouldn’t even have been discovered if all scientists were creationists (like the author).

Why do you think the ‘Dark Ages’ are called the dark ages? Because science, otherwise known as ‘natural philosophy’, was considered pagan, as the Greeks’ neo-Platonist philosophy upon which it was based was pagan. Someone once pointed out that Hypatia’s murder by a Christian mob (around 400 AD) signalled the start of the dark ages, which lasted until around 1200, when Fibonacci introduced the West to the Hindu-Arabic number system. In fact, it was the Muslims who preserved that knowledge in the interim, otherwise it may well have been lost to us forever.

So science and Christianity have a long history of contention that goes back centuries before Copernicus, Galileo and Darwin. If anything, the gap has got wider, not closer; they’ve only managed to co-exist by staying out of each other’s way.

There are many religious texts in the world, a part of our collective cultural and literary legacy, but none of them are scientific or mathematical texts, which also boast diverse cultural origins. It is an intellectual conceit (even deceit) to substitute religious teaching for scientifically gained knowledge. Of course scientifically gained knowledge is always changing, advancing, being overtaken and is never over. In fact, I would contend that science will never be complete, as history has demonstrated, so there will always be arguments for supernatural intervention, otherwise known as the ‘God-of-the-Gaps’. Gödel’s Incompleteness Theorem implies that mathematics is a never-ending epistemological mine, and I believe that the same goes for science.

Did I hear someone say: what about Intelligent Design (ID)? Well, it’s still supernatural intervention, isn’t it? Same scenario, different description.

Religion is not an epistemology, it’s a way of life. Whichever way you look at it, it’s completely subjective. Religion is part of your inner world, and that includes God. So the idea that the God you’ve found within yourself is also the Creator of the entire Universe is a non sequitur. Because everyone’s idea of God is unique to them.

Tuesday, 19 January 2016

Is this the God equation?

Yes, this is a bit tongue-in-cheek, but like most things tongue-in-cheek it just might contain an element of truth. I’m not a cosmologist or even a physicist, so this is just me being playful yet serious in as much as anyone can be philosophically serious about the origins of Everything, otherwise known as the Universe.

Now I must make a qualification, lest people think I’m leading them down the garden path. When people think of ‘God’s equation’, they most likely think of some succinct equation or set of equations (like Maxwell’s equations) from which everything we know about the Universe can be derived mathematically. For many people this is a desired outcome, founded on the belief that one day we will have a TOE (Theory Of Everything) – itself a misnomer – which will incorporate all the known laws of the Universe in one succinct theory. Specifically, said theory will unite the Electromagnetic force, the so-called Weak force, the so-called Strong force and Gravity as all being derived from a common ‘field’. Personally, I think that’s a chimera, but I’d be happy to be proven wrong. Many physicists believe some version of String Theory or M Theory will eventually give us that goal. I should point out that the Weak force has already been united with the Electromagnetic force.

So what do I mean by the sobriquet, God’s equation? Last week I watched a lecture by Allan Adams as part of MIT Open Courseware (8.04, Spring 2013) titled Lecture 6: Time Evolution and the Schrodinger Equation, in which Adams made a number of pertinent points that led me to consider that perhaps Schrodinger’s Equation (SE) deserved such a title. Firstly, I need to point out that Adams himself makes no such claim, and I don’t expect many others would concur.

Many of you may already know that I wrote a post on Schrodinger’s Equation nearly 5 years ago and it has become, by far, the most popular post I’ve written. Of course Schrodinger’s Equation is not the last word in quantum mechanics – more like a starting point. By incorporating relativity we have Dirac’s equation, which predicted anti-matter – in fact, it’s a direct consequence of relativity and SE. Schrodinger himself had earlier tried a relativistic version (the equation now named after Klein and Gordon) but rejected it because it gave answers with negative energy. But Richard Feynman (and independently, Ernst Stuckelberg) pointed out that this was mathematically equivalent to ordinary particles travelling backwards in time. Backwards in time is not an impossibility in the quantum world, and Feynman even incorporated it into his famous QED (Quantum Electro-Dynamics) which won him a joint Nobel Prize with Julian Schwinger and Sin-Itiro Tomonaga in 1965. QED, by the way, incorporates SE (just read Feynman’s book on the subject).

This allows me to segue back into Adams’ lecture, which, as the title suggests, discusses the role of time in SE and quantum mechanics generally. You see ‘time’ is a bit of an enigma in QM.

Adams’ lecture, in his own words, is to provide a ‘grounding’ so he doesn’t go into details (mathematically) and this suited me. Nevertheless, he throws terms around like eigenstates, operators and wave functions, so familiarity with these terms would be essential to following him. Of those terms, the only one I will use is wave function, because it is the key to SE and arguably the key to all of QM.

Right at the start of the lecture (his Point 1), Adams makes the salient point that the wave function, Ψ, contains ‘everything you need to know about the system’. Only a little further into his lecture (his Point 6) he asserts that SE is ‘not derived, it’s posited’. Yet it’s completely ‘deterministic’ and experimentally accurate. Now (as discussed by some of the students in the comments) to say it’s ‘deterministic’ is a touch misleading: the wave function itself evolves deterministically under SE, but what it tells us about measurements are only probabilities, which are empirically accurate (more on that later). But it’s a remarkable find that Schrodinger formulated a mathematical expression based on a hunch that all quantum objects, be they light or matter, should obey a wave function.

But it’s at the 50-55min stage (of his 1hr 22min lecture) that Adams delivers his most salient point when he explains so-called ‘stationary states’. Basically, they’re called stationary states because time remains invariant (doesn’t change) for SE which is what gives us ‘superposition’. As Adams points out, the only thing that changes in time in SE is the phase of the wave function, which allows us to derive the probability of finding the particle in ‘classical’ space and time. Classical space and time is the real physical world that we are all familiar with. Now this is what QM is all about, so I will elaborate.

Adams effectively confirmed for me something I had already deduced: superposition (the weird QM property that something can exist simultaneously in various positions prior to being ‘observed’) is a direct consequence of time being invariant or existing ‘outside’ of QM (which is how it’s usually explained). Now Adams makes the specific point that these ‘stationary states’ only exist in QM and never exist in the ‘Real’ world that we all experience. We never experience superposition in ‘classical physics’ (which is the scientific pseudonym for ‘real world’). This highlights for me that QM and the physical world are complementary, not just versions of each other. And this is incorporated in SE, because, as Adams shows on his blackboard, superposition can be derived from SE, and when we make a measurement or observation, superposition and SE both disappear. In other words, the quantum state and the classical state do not co-exist: either you have a wave function in Hilbert space or you have a physical interaction called a ‘wave collapse’ or, as Adams prefers to call it, ‘decoherence’. (Hilbert space is a theoretical space of possibly infinite dimensions where the wave function theoretically exists in its superpositional manifestation.)
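Adams’ point that only the phase changes in time can be illustrated numerically. The sketch below is my own (Python with NumPy, natural units, invented function names): each energy eigenstate merely rotates in phase, so a lone stationary state has a constant probability, while a superposition of two energies has a changing relative phase, so its interference pattern does evolve.

```python
import numpy as np

HBAR = 1.0  # natural units for simplicity

def evolve(coeffs, energies, t):
    """Evolve energy-eigenstate amplitudes for time t: under the
    Schrodinger equation each amplitude only picks up a phase
    exp(-i E t / hbar)."""
    c = np.asarray(coeffs, dtype=complex)
    return c * np.exp(-1j * np.asarray(energies) * t / HBAR)

# A single stationary state: the phase rotates but the
# probability |psi|^2 never changes.
p0 = np.abs(evolve([1.0], [2.0], 0.0))**2
p1 = np.abs(evolve([1.0], [2.0], 5.0))**2
print(np.allclose(p0, p1))  # True

# A superposition of two energies: the *relative* phase changes,
# so the interference (what we could observe at a point where both
# eigenfunctions contribute equally) does evolve in time.
def density_at_point(t):
    c = evolve([0.5, 0.5], [1.0, 2.0], t)
    return abs(c.sum())**2

print(round(density_at_point(0.0), 3))    # 1.0 (amplitudes in phase)
print(round(density_at_point(np.pi), 3))  # 0.0 (out of phase)
```

The stationary state’s constancy is exactly why superposition persists until a measurement intervenes: nothing in the quantum description itself forces a change.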

Adams calls the so-called Copenhagen interpretation of QM the “Cop Out” interpretation which he wrote on the board and underlined. He prefers ‘decoherence’ which is how he describes the interaction of the QM wave function with the physical world. My own view is that the QM wave function represents all the future possibilities, only one of which will be realised. Therefore the wave function is a description of the future yet to exist, except as probabilities; hence the God equation.

As I’ve expounded in previous posts, the most popular interpretation at present seems to be the so-called ‘many worlds’ interpretation where all superpositional states exist in parallel universes. The most vigorous advocate of this view is David Deutsch, who wrote about it in a not-so-recent issue of New Scientist (3 Oct 2015, pp.30-31). I also reviewed his book, Fabric of Reality, in September 2012. In New Scientist, Deutsch advocated for a non-probabilistic version of QM, because he knows that reconciling the many worlds interpretation with probabilities is troublesome, especially if there are an infinite number of them. However, without probabilities, SE becomes totally ineffective in making predictions about the real world. It was Max Born who postulated the ingenious innovation of squaring the modulus of the wave function (actually multiplying it with its complex conjugate, as I explain here) which provides the probabilities that make SE relevant to the physical world.

As I’ve explained elsewhere, the world is fundamentally indeterministic due to asymmetries in time caused by both QM and chaos theory. Events become irreversible after QM decoherence, and also in chaos theory because the initial conditions are indeterminable. Now Deutsch argues that chaos theory can be explained by his many worlds view of QM, and mathematician Ian Stewart suggests that maybe QM can be explained by chaos theory, as I expound here. Both these men are intellectual giants compared to me, yet I think they’re both wrong. As I’ve explained above, I think that the quantum world and the classical world are complementary. The logical extension of Deutsch’s view, by his own admission, requires the elimination of probabilities, making SE ineffectual. And Stewart’s circuitous argument to explain QM probabilities with chaos theory eliminates superposition, for which we have indirect empirical evidence (using entanglement, which is well researched). Actually, I think superposition is a consequence of the wave function effectively being everywhere at once or 'permeates all of space' (to quote Richard Elwes in MATHS 1001).
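The claim that chaotic initial conditions are indeterminable can be illustrated with the logistic map, a textbook chaotic system (this sketch is mine, not from Deutsch or Stewart): two starting points differing only in the ninth decimal place soon produce trajectories that bear no useful relation to each other.

```python
# Logistic map x -> r*x*(1 - x): a textbook chaotic system at r = 4.
def orbit(x0, r=4.0, n=40):
    """Iterate the map n times and return the whole trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.400000000)
b = orbit(0.400000001)  # initial conditions differ in the 9th decimal

# Early on the two trajectories agree closely...
print(abs(a[3] - b[3]))   # still tiny
# ...but the difference grows roughly exponentially, so after a few
# dozen steps the orbits have decorrelated.
print(abs(a[40] - b[40]))  # compare with the initial 1e-9 difference
```

Since no measurement can pin down an initial condition to infinite precision, the long-term state is effectively unpredictable even though every step is deterministic, which is the asymmetry in time referred to above.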

If I’m right in stating that QM and classical physics are complementary (and Adams seems to make the same point, albeit not so explicitly) then a TOE may be impossible. In other words, I don't think classical physics is a special case of QM, which is the current orthodoxy among physicists.


Addendum 1: Since writing this, I've come to the conclusion that QM and, therefore, the wave function describe the future - an idea endorsed by none other than Freeman Dyson, who was instrumental in formulating QED with Richard Feynman.

Addendum 2: I've amended the conclusion in my 2nd last paragraph, discussing Deutsch's and Stewart's respective 'theories', and mentioning entanglement in passing. Schrodinger once said (in a missive to Einstein, from memory) that entanglement is what QM is all about. Entanglement effectively challenges Einstein's conclusion that simultaneity is a non sequitur according to his special theory of relativity (and he's right, providing there's no causal relationship between events). I contend that neither Deutsch nor Stewart can resolve entanglement with their respective 'alternative' theories, and neither of them addresses it from what I've read.

Tuesday, 12 January 2016

How to write a story so it reads like a movie in your head

I’ve written about writing a few times now, including Writing’s 3 Essential Skills (Jul. 2013) and How to Create an Imaginary, Believable World (Aug. 2010), the last one being a particularly popular post. Also, I taught a creative writing course in 2009 and have given a couple of talks on the subject, but never intentionally to provide advice on how to make a story read like a movie in your head.

This post has arisen from a conversation I had when I realised I had effectively taught myself how to do this. It’s not something that I deliberately set out to do but I believe I achieved it inadvertently and comments from some readers appear to confirm this. At YABooksCentral, a teenage reviewer has made the point specifically, and many others have said that my book (Elvene) would make a good movie, including a filmmaker. Many have said that they ‘could see everything’ in their mind’s eye.

Very early in my writing career (though it’s never been my day job) I took some screenwriting courses and even wrote a screenplay. I found that this subconsciously influenced my prose writing in ways that I never foresaw and that I will now explain. The formatting of a screenplay doesn’t lend itself to fluidity, with separate headings for every scene and dialogue in blocks interspersed with occasional brief descriptive passages. Yet a well written screenplay lets you see the movie in your mind’s eye and you should write it as you’d imagine it appearing on a screen. However, contrary to what you might think, this is not the way to write a novel. Do not write a novel as if watching a movie. Have I confused you? Well, bear this in mind and hopefully it will all make sense before the end.

Significantly, a screenplay needs to be written in ‘real time’, which means descriptions are minimalistic and exposition non-existent (although screenwriters routinely smuggle exposition into their dialogue). Also, all the characterisation is in the dialogue and the action – you don’t need physical descriptions of a character, including their attire, unless it’s significant; just gender, generic age and ethnicity (if it’s important). It was this minimalistic approach that I subconsciously imported into my prose fiction.

There is one major difference between writing a screenplay and writing a novel, and the two methods require different states of mind. In writing a screenplay you can only write what is seen and heard on the screen, whereas a novel can be written entirely (though not necessarily) from inside a character’s head. I hope this clarifies the point I made earlier. Now, as someone once pointed out to me (fellow blogger, Eli Horowitz), movies can take you into a character’s head through voiceover, flashbacks and dream sequences. But, even so, the screenplay would only record what is seen and heard on the screen, and these are exceptions, not the norm. Whereas, in a novel, getting inside a character’s head is the norm.

To finally address the question implicit in my heading, there are really only 2 ‘tricks’ for want of a better term: write the story in real time and always from some character’s point of view. Even description can be given through a character’s eyes, and the reader subconsciously becomes an actor. By inhabiting a character’s mind, the reader becomes fully immersed in the story.

Now I need to say something about scenes, because, contrary to popular belief, scenes are the smallest component of a story, not words or sentences or paragraphs. It’s best to think of the words on the page like the notes on a musical score. When you listen to a piece of music, the written score is irrelevant, and, even if you read the score, you wouldn’t hear the music anyway (unless, perhaps, if you’re a musician or a composer). Similarly, the story takes place in the reader’s mind where the words on the page conjure up images and emotions without conscious effort.

In a screenplay a scene has a specific definition, defined by a change in location or time. I use the same definition when writing prose. There are subtle methods for expanding and contracting time psychologically in a movie, and these can also be applied to prose fiction. I’ve made the point before that the language of story is the language of dreams, and in dreams, as in stories, sudden changes in location and time are not aberrant. In fact, I would argue that if we didn’t dream, stories wouldn’t work because our minds would continuously and subconsciously struggle with the logic.

Tuesday, 5 January 2016

Free will revisited

I’ve written quite a lot on this in the past, so one may wonder what I could add.

I’ve just read Mark Balaguer’s book, Free Will, which I won when Philosophy Now published my answer to their Question of the Month in their last issue (No 111, December 2015). It’s the fourth time I’ve won a book from them (out of 5 submissions).

It’s a well written book, not overly long or over-technical in a philosophical sense, so very readable whilst being well argued. Balaguer makes it clear from the outset where he stands on this issue, by continually referring to those who argue against free will as ‘the enemies of free will’. Whilst this makes him sound combative, the tone of his arguments is measured and not antagonistic. In his conclusion, he makes the important distinction that in ‘blocking’ arguments against free will, he’s not proving that free will exists.

He makes the distinction between what he calls Hume-style free will and non-predetermined free will (NPD), which is a term I believe he’s coined for himself. Hume-style free will is otherwise known as ‘compatibilism’, which means it’s compatible with determinism. In other words, even if everything in the world is deterministic from the Big Bang onwards, it doesn’t rule out you having free will. I know it sounds like a contradiction, but I think it’s to do with the fact that a completely deterministic universe doesn’t conflict with the subjective sense we all have of having free will. As I’ve expressed in numerous posts on this blog, I think there is ample evidence that the completely deterministic universe is a furphy, so compatibilism is not relevant as far as I’m concerned.

Balaguer also coins another term, ‘torn decision’, which he effectively uses as a litmus test for free will. In a glossary in the back he gives a definition which I’ve truncated:

A torn decision is a conscious decision in which you have multiple options and you’re torn as to which option is best.

He gives the example of choosing between chocolate or strawberry flavoured ice cream and not making a decision until you’re forced to, so you make it while you’re still ‘torn’. This is the example he keeps coming back to throughout the book.

In recent times, experiments in neuroscience have provided what some people believe are ‘slam-dunk’ arguments against free will, because scientists have been able to predict with 60% accuracy what decision a subject will make, seconds before they make it, simply by measuring neuron activity in certain parts of the brain. Balaguer provides the most cogent arguments I’ve come across challenging these contentions. In particular, he addresses the Haynes studies, which showed neuron activity up to 10 seconds prior to the conscious decision. Balaguer points out that the neuron activity for these studies occurs in the PC and BA10 areas of the brain, which are associated with the ‘generation of plans’ and the ‘storage of plans’ respectively. He makes the point (in greater elaboration than I do here) that we should not be surprised if we subconsciously use our ‘planning’ areas of the brain whilst trying to make ‘torn decisions’. The other experiments, known as the Libet studies (dating from the 1960s), showed neuron activity half a second prior to conscious decision-making, which was termed the ‘readiness potential’. Balaguer argues that there is ‘no evidence’ that the readiness potential causes the decision. Even so, it could be argued that, like the Haynes studies, it is subconscious activity happening prior to the conscious decision.

It is well known (as Balaguer explicates) that much of our thinking is subconscious. We all have the experience of solving a problem subconsciously, so that the answer comes to us spontaneously when we don’t expect it to. And anyone who has pursued some artistic endeavour (like writing fiction) knows that a lot of it is subconscious, so that the story and its characters appear on the page with seemingly divine-like spontaneity.

Backtracking to so-called Hume-style free will, it does have relevance if one considers that our ‘wants’ - what we wish to do - are determined by our desires and needs. We assume that most of the animal kingdom behaves on this principle. Few people (Balaguer included) discuss other sentient creatures when they discuss free will, yet I’ve long believed that consciousness and free will go hand-in-hand. In other words, I really can’t see the point of consciousness without free will. If everything is determined subconsciously, without the need to think, then why have we evolved to think?

But humans take thinking to a new level compared to every other species on the planet, so that we introspect and cogitate and reason and internally debate our way to many a decision.

Back in February 2009, I reviewed Douglas Hofstadter’s Pulitzer Prize-winning book, Gödel, Escher, Bach, where, among other topics, I discussed consciousness, as that’s one of the themes of his book. Hofstadter coins the term ‘strange loop’. This is what I wrote back then:

By strange loop, Hofstadter means that we can effectively look at all the levels of our thinking except the ground level, which is our neurons. In between we have symbols, which is language, which we can discuss and analyse in a dispassionate way, just like I’m doing now. I can talk about my own thoughts and ideas as if they weren’t mine at all. Consciousness, in Hofstadter’s model (for want of a better word) is the top level, and neurons are the hardware level. In between we have the software (symbols) which is effectively language.

I was quick to point out that ‘software’ in this context is a metaphor – I don’t believe that language is really software, even though we ‘download’ it from generation to generation and it is indispensable to human reasoning, which we call thinking.

The point I’d make is that this is a 2-way process: the neurons are essential to thoughts, yet our thoughts, I expect, can affect neurons in turn. I believe there is evidence that we can and do rewire our brains simply by exercising our mental faculties, even in later years, and surely exercising them consciously is the very definition of will.