Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.
Showing posts with label Free will.

Friday 22 December 2017

Who and what do you think you are?

I think it’s pretty normal that when you start reading a book (non-fiction, that is), you take a stance, very early on, of general agreement or opposition. It’s not unlike the well-known but often unconscious effect whereby you appraise someone in the first 10-30 seconds of meeting them.

And this is the case with Yuval Noah Harari’s Homo Deus, in which I found myself constantly arguing with him through the first 70+ pages of its 450+ page length. For a start, I disagree with his thesis (for want of a better term) that our universal pursuit of ‘happiness’ is purely a sensory-based experience, independent of the cause. From what I’ve observed, and experienced personally, the pursuit of sensory pleasure for its own sake leads to disillusionment at best and self-destruction at worst. A recent bio-pic I saw of Eric Clapton (Life in 12 Bars) illustrates this point rather dramatically. I won’t discuss his particular circumstances – just go and see the film; it’s a warts-and-all confessional.

If one goes as far back as Aristotle, he wrote an entire book on the subject of ‘eudaimonia’ – living a ‘good life’, effectively – under the title, Ethics. Eudaimonia is generally translated as ‘happiness’ but ‘fulfilment’ or ‘contentment’ may be a better translation, though even they can be contentious, if one reads various scholarly appraisals. I’ve argued in the past that the most frustrating endeavours can be the most rewarding – just ask anyone who has raised children. Generally, I find that the more effort one exerts during a process of endeavour, the better the emotional reward in the end. Reward without sacrifice is not much of a reward. Ask anyone who’s won a sporting grand final, or, for that matter, written a novel.

This is a book that will challenge most people’s beliefs somewhere within its pages, and for that reason alone, it’s worth reading. In fact, many people will find it depressing, because a recurring theme or subtext of the book is that in the future humans will become virtually redundant. Redundant may be too strong a word, but leaving aside the obvious possibility that future jobs currently performed by humans may be taken over by AI, Harari claims that our very notion of ‘free will’ and our almost ‘religious’ belief in the sanctity of individualism will become obsolete ideals. He addresses this towards the end of the book, so I’ll do the same. It’s a thick tome with a lot of ideas well presented, so I will concentrate on those that I feel most compelled to address or challenge.

As with my recent review of Jeremy Lent’s The Patterning Instinct, there is a lot in Homo Deus that I agree with, and I’m the first to admit that many of Harari’s arguments unnerved me because they challenge some of my deeply held beliefs. Given the self-ascribed aphorism that heads my blog, this makes his book a worthy opus for discussion.

Fundamentally, Harari argues that we are really nothing more than biochemical algorithms and he provides very compelling arguments to justify this. He also devotes an entire chapter to deconstructing the widely held and cherished notion that we have free will. I’ve written more than a few posts on the subject of free will in the past, and this is probably the pick of them. Leaving that aside for the moment, I don’t believe one can divorce free will from consciousness. Harari also provides a lengthy discussion of consciousness, where I found myself largely agreeing with him because he predominantly uses arguments that I’ve used myself. Basically, he argues that consciousness is an experience so subjective that we cannot objectively determine whether someone else is conscious or not – it’s a condition we take on trust. He also argues that AI does not have to become conscious to become more intelligent than humans; a point that many people seem to overlook or misconstrue. Despite what many people like to believe, science really can’t explain consciousness. At best it provides correlations between neuron activity in our brains and certain behaviours and ‘thoughts’.

Harari argues very cogently that science has all but proved the non-existence of free will and gives various examples like the famous experiments demonstrating that scientists can determine someone’s unconscious decision before the subject consciously decides. Or split brain experiments demonstrating that people who have had their corpus callosum surgically severed (the neural connection between the left and right hemispheres) behave as if they have 2 brains and 2 ‘selves’. But possibly the most disturbing are those experiments where scientists have turned rats literally into robots by implanting electrodes in their brains and then running a maze by remotely controlling them as if they were, in fact, robots and not animals.

Harari also makes the relevant point, overlooked by many, that true randomness, which lies at the heart of quantum mechanics and seems to underpin all of reality, does not axiomatically provide free will. He argues that neuron activity in our brains, which gives us thoughts and intentions (which we call decisions), is a combination of reactions to emotions and drives (all driven by biochemical algorithms) and pure randomness. According to Harari, science has shown, at all levels, that free will is an illusion. If it is an illusion then it’s a very important one. Studies have shown that people who have been disabused of their belief in free will suffer psychologically. We know this from the mental health issues that people suffer when hope is severely curtailed in circumstances beyond their control. The fact is I don’t know of anyone who doesn’t want to believe that they are responsible for their own destiny, within the limitations of their abilities and the rules of the society in which they live.

Harari makes the point himself, in a completely different section of the book, that given all behaviours, emotions and desires are algorithmically determined by bio-chemicals, then consciousness appears redundant. I’ve made the point before that there are organic entities that do respond biochemically to their environment without consciousness and we call them plants or vegetation. I’ve argued consistently that free will is an attribute of consciousness. Given the overall theme of Harari’s book, I would contend that AI will never have consciousness and therefore will never have free will.

In a not-so-recent post, I argued how beliefs drive science. Many have made the point that most people basically determine a belief heuristically or intuitively and then do their best to rationalise it. Even genius mathematicians (like John Nash) start with a hunch and then employ their copious abilities in logic and deduction to prove themselves right.

My belief in free will is fundamental to my existentialist philosophy and is grounded more on my experience than on arguments based in science or philosophy. I like to believe that the person I am today is a creation of my own making. I base this claim on the fact that I am a different person to the one who grew up in a troubled childhood. I am far from perfect, yet I am a better person and, most importantly, someone who is far more comfortable in their own skin than my younger self ever was. The notion that I did this without ‘free will’ is one I find hard to accept.

Having said that, I’ve also made the point in previous posts that memory is essential to consciousness and a sense of self. I’ve suffered from temporary memory loss (TGA or transient global amnesia) so I know what it’s like to effectively lose one’s mind. It’s disorientating, even scary, and it demonstrates how tenuous our grip on reality can be. So I’m aware, better than most, that memory is the key to continuity.

Harari’s book is far more than a discussion on consciousness and free will. Like Lent’s The Patterning Instinct (reviewed here), he discusses the historical evolution of culture and its relevance to how we see ourselves. But his emphasis is different to Lent’s, and he talks about 20th Century politics in secular societies as effectively replacing religion. In fact, he defines religion (using examples) as what gives us meaning. He differentiates between spirituality and religion, arguing that there is a huge ‘gap’ between them. According to Harari, spirituality is about ‘the journey’, which reminds me of my approach to writing fiction, but what he means is that people who undertake ‘spiritual’ journeys are iconoclasts. I actually agree that religion is all about giving meaning to our lives, and I think that in secular societies, humanist liberalism has replaced religion in that role for many people, which is what Harari effectively argues over many pages.

Politically, he argues that in the 20th Century we had a number of experiments, including the 2 extremes of communism and fascism, both of which led to totalitarian dictatorships; as well as socialist and free market capitalism, which are effectively the left and right of democracies in Western countries. He explains how capitalism and debt go hand in hand to provide all the infrastructure and technological marvels we take for granted and why economic growth is the mantra of all politicians. He argues that knowledge growth is replacing population growth as the engine of economic growth whilst acknowledging that the planet won’t cope. Unlike Jeremy Lent, he doesn’t discuss the unlearned lessons of civilization collapse in the past - most famously, the Roman Empire.

I think that is most likely a topic for another post, so I will return to the thesis that religion gives us meaning. I believe I’ve spent my entire life searching for meaning and that I’ve found at least part of the answer in mathematics. I say ‘part’ because mathematics provides meaning for the Universe but not for me. In another post (discussing Eugene Wigner’s famous essay) I talked about the 2 miracles: that the Universe is comprehensible and that same Universe gave rise to an intelligence that could access that comprehensibility. The medium that allows both these miracles to occur is, of course, mathematics.

So, in some respects – though this is virtually irrelevant to Harari’s tome – mathematics is my religion. As for meaning for myself, I think we all look for purpose, and purpose can be found in relationships, in projects and in just living. Curiously, Harari, towards the very end of his book, argues that ‘dataism’ will be the new religion, because data drives algorithms and encompasses everything from biological life forms to art forms like music. All digital data can be distilled into zeros and ones, but the mathematics of the Universe is not algorithmic, though others might disagree. In other words, I don’t believe we live inside a universe-size computer simulation.

The subtitle of Harari’s book is A Brief History of Tomorrow, and basically he argues that our lives will be run by AI algorithms that will be more clever than our biochemical algorithms. He contends that, contrary to expectations, the more specialised a job is, the more likely it will be taken over by an algorithm. This does not only include obvious candidates like medical prognoses and stock market decisions (already happening) but corporate takeover decisions, in-the-field military decisions, board appointments and project planning decisions. Harari argues that there will be a huge class of people he calls the ‘useless class’, which would be most of us.

And this is where he argues that our liberal individualistic freedom ideals will become obsolete, because algorithms will understand us better than we do. This is premised on the idea that our biochemical algorithms, which, unbeknownst to us, already control everything we do, will be overrun by AI algorithms in ways we won’t be conscious of. He gives the example of Angelina Jolie opting to have a double mastectomy based not on any symptoms she had, but on the 87% probability that she would get breast cancer, calculated by an algorithm that looked at her genetic data. Harari extrapolates this further by predicting that in the future we will all have biomedical monitoring connected to a Google-like database that will recommend all our medical decisions. What’s more, the inequality gap will widen because wealthy people will become genetically enhanced ‘techno-humans’ and, whilst the technology will trickle down, the egalitarian liberalist ideal will vanish.

Most of us find this a scary scenario, yet Harari argues that it’s virtually inescapable based on the direction we are heading, whereby algorithms are already attempting to influence our decisions in voting, purchasing and lifestyle choices. He points out that Facebook has already demonstrated that it has enough information on its users to profile them better than their friends, and sometimes even their families and spouses. So this is Orwellian, only without the police state.

All in all, this is a brave new world, but I don’t think it’s inevitable. Reading his book, one senses that it’s all about agency. He argues that we will give up our autonomous agency to algorithms, only it will be a process by stealth, starting with the ‘smart’ agents we already have on our devices that act like personal assistants. I’ve actually explored this in my own fiction, whereby there is a symbiosis between humans and AI (refer below).

Life experiences are what inform us and, through a process of cumulative ordeals and achievements, create the persona we present to the world and ourselves. Future life experiences of future generations will no doubt include interactions with AI. As a Sci-Fi writer, I’ve attempted to imagine that at some level: portraying a super-intelligent-machine interface with a heroine space pioneer. In the same story I juxtaposed my heroine with an imaginary indigenous culture that was still very conscious of their place in the greater animal kingdom. My contention is that we are losing that perspective at our own peril. Harari alludes to this throughout his opus, but doesn’t really address it. I think our belief in our individualism with our own dreams and sense of purpose is essential to our psychological health, which is why I’m always horrified when I see oppression, whether it be political or marital or our treatment of refugees. I read Harari’s book as a warning, which aligns with his admission that it’s not prophecy.


Addendum: I haven't really expressed my own views on consciousness explicitly, because I've done that elsewhere, when I reviewed Douglas Hofstadter's iconoclastic and award-winning book, Gödel, Escher, Bach.

Saturday 27 February 2016

In Nature, paradox is the norm, not the exception

I’ve just read Marcus Chown’s outstanding book for people wanting their science served without equations, Quantum Theory Cannot Hurt You. As the title suggests, half the book covers QM and half the book covers relativity. Chown is a regular contributor to New Scientist, and this book reflects his journalistic ease at discussing esoteric topics in physics. He says, right at the beginning, that he brings his own interpretation to these topics but it’s an erudite and well informed one.

Nowhere is Nature’s paradoxical nature more apparent than in the constant speed of light, which was predicted by Maxwell’s equations, not empirical evidence. Of course, this paradox was resolved by Einstein’s theories of relativity; both of them (the special theory and the general theory). Other paradoxes that seem built into the Universe are not so easily resolved, but I will come to them.

As Chown explicates, the constant speed of light has the same psychological effect as if it were infinite, and the Lorentz factor, the mathematical device used to describe relativistic effects, tends to infinity at its limit (the limit being the speed of light). If one could travel at the speed of light, a light beam would appear stationary and time would literally stand still. In fact, this is what Einstein imagined in one of his famous thought experiments that led him to his epiphanic theory.
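The divergence at the speed of light is easy to check numerically. A minimal sketch (my own illustration, not from Chown’s book), using the standard Lorentz factor γ = 1/√(1 − v²/c²):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v, c=C):
    """gamma = 1 / sqrt(1 - v^2/c^2); diverges as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Time dilation is negligible at everyday speeds and explodes near c:
for fraction in (0.1, 0.9, 0.99, 0.9999):
    print(f"v = {fraction}c  ->  gamma = {lorentz_factor(fraction * C):.1f}")
```

At v = 0.9999c the factor is already about 70, meaning clocks on board would run roughly seventy times slower as measured from Earth.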

The paradox is easily demonstrated if one imagines a spacecraft travelling at very high speed, measured as a fraction of the speed of light, transmitting a laser both ahead of it and behind it. Intuition tells us that an observer the craft is approaching (say on Earth) should receive the signal at the speed of light plus the craft’s speed relative to Earth. On the other hand, if the spacecraft is travelling away from Earth at the same relative speed, one would expect to measure the laser as the speed of light minus the speed of the craft. However, contrary to intuition, the speed of light is exactly the same in both cases, and the same as measured by anyone on the spacecraft itself. The paradox is resolved by Einstein’s theory of special relativity, which tells us that whilst the speed of light is constant for both observers (one on the spacecraft and one on Earth), their measurements of time and length will not be the same, which is entirely counter-intuitive. This is not only revealed in the mathematics but has been demonstrated by comparing clocks on real spacecraft with clocks on Earth. In fact, the Sat-Nav you use in your car or on your phone takes relativistic effects into account to give you the accuracy you’ve become accustomed to. (Refer Addendum 2 below, which explains the role of the Doppler effect in determining whether a light source is moving.)
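The resolution can also be seen in the standard relativistic velocity-addition formula, w = (u + v)/(1 + uv/c²), which replaces the intuitive w = u + v. A minimal sketch (my own illustration, not Chown’s):

```python
C = 299_792_458.0  # speed of light in m/s

def add_velocities(u, v, c=C):
    """Relativistic velocity addition: (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1.0 + (u * v) / c**2)

# Light emitted from a craft moving at half the speed of light:
towards = add_velocities(C, 0.5 * C)   # craft approaching the observer
away = add_velocities(C, -0.5 * C)     # craft receding from the observer
print(towards / C, away / C)  # both 1 (to rounding): the speed is still c

# Two sub-light speeds never sum past c either:
print(add_velocities(0.5 * C, 0.5 * C) / C)  # 0.8c, not 1.0c
```

Whenever one of the inputs is c, the formula returns exactly c, which is the invariance the thought experiment describes.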

However, there are other paradoxes associated with relativity that have not been resolved, including time itself. Chown touches on this and so did I, not so long ago, in a post titled, What is now? According to relativity, there is no objective now, and Chown goes so far as to say: ‘”now” is a fictitious concept’ (quotation marks in the original). He quotes Einstein: “For us physicists, the distinction between past, present and future is only an illusion.” And Chown calls it ‘one of [Nature’s] great unsolved mysteries.’

Reading this, one may consider that Nature’s paradoxes are simply a consequence of the contradiction between our subjective perceptions and the reality that physics reveals. However, there are paradoxes within the physics itself. For example, we give an age to the Universe which does suggest that there is a universal “now”, and quantum entanglement (which Chown discusses separately) implies that simultaneity can occur over any distance in the Universe.

Quantum mechanics, of course, is so paradoxical that no one can agree on what it really means. Do we live in a multiverse, where every possibility predicted mathematically by QM exists, of which we experience only one? Or do things only exist when they are ‘observed’? Or is there a ‘hidden reality’ which the so-called real ‘classical’ world interacts with? I discussed this quite recently, so I will keep this discussion brief. If there is a multiverse (which many claim is the only ‘logical’ explanation) then they interfere with each other (as Chown points out) and some even cancel each other out completely, for every single quantum event. But another paradox, which goes to the heart of modern physics, is that quantum theory and Einstein’s general theory of relativity cannot be reconciled in their current forms. As Chown points out, String Theory is seen as the best bet but it requires 10 dimensions of which all but 3 cannot be detected with current technology.

Now I’m going to talk about something completely different, which everyone experiences, but which is also a paradox when analysed scientifically. I’m referring to free will, and like many of the topics I’ve touched on above, I discussed this recently as well. The latest issue of Philosophy Now (Issue 112, February/March 2016) has ‘Free Will’ as its theme. There is a very good editorial by Grant Bartley, who discusses on one page all the various schools of thought on this issue. He makes the point that I’ve made many times myself: ‘Why would consciousness evolve if it didn’t do anything?’ He also makes this statement: ‘So if there is free will, then there must be some way for a mind to direct the state of its brain.’ However, all the science tells us that the ‘mind’ is completely dependent on the ‘state of its brain’, so the reverse effect must be an illusion.

This interpretation would be consistent with the notion I mooted earlier that paradoxes are simply the consequence of our subjective experience contradicting the physical reality. However, as I pointed out in my above-referenced post, there are examples of the mind affecting states of the brain. In New Scientist (13 February 2016) Anil Ananthaswamy reviews Eliezer Sternberg’s Neurologic: The brain’s hidden rationale behind our irrational behaviour (which I haven’t read). According to Ananthaswamy, Sternberg discusses in depth the roles of the conscious and subconscious and concludes that the unconscious ‘can get things wrong’. He then asks the question: ‘Can the conscious right some of these wrongs? Can it influence the unconscious? Yes, says Sternberg.’ He gives the example of British athlete Steve Backley ‘imagining the perfect [javelin] throw over and over again’ even though a sprained ankle stopped him from practising, and he won Silver in the 1996 Atlanta Olympics.

My point is that paradoxes are a regular feature of the Universe at many levels, from quantum mechanics to time to consciousness. In fact, consciousness is arguably the least understood phenomenon in the entire Universe, yet, without it, the Universe’s existence would be truly meaningless. Consciousness is subjectivity incarnate yet we attempt to explain it with complete objectivity. Does that make it a paradox or an illusion?


Addendum 1: Since writing this post, I came across this video of John Searle discussing the paradox of free will. He introduces the subject by saying that no progress has been made on this topic in the last 100 years. Unlike my argument, he discusses the apparent contradiction between free will and cause and effect.

Addendum 2: It should be pointed out that the Doppler effect allows an observer to know if a light source is moving towards them or away from them. In other words, there is a change in frequency even though there isn't a change in velocity (of the light). It's for this reason that we know the Universe is expanding, with galaxies moving away from us.
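The point in Addendum 2 can be illustrated with the standard longitudinal relativistic Doppler formula, f_observed = f_source·√((1 − β)/(1 + β)) with β = v/c for a receding source. A minimal sketch (my own illustration, not part of the original post):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def observed_frequency(f_source, v, c=C):
    """Longitudinal relativistic Doppler shift.
    v > 0: source receding (redshift, lower frequency);
    v < 0: source approaching (blueshift, higher frequency).
    The light's speed is c in every case - only the frequency changes."""
    beta = v / c
    return f_source * math.sqrt((1.0 - beta) / (1.0 + beta))

green = 5.6e14  # Hz, roughly green light
print(observed_frequency(green, 0.1 * C))   # receding: shifted towards the red
print(observed_frequency(green, -0.1 * C))  # approaching: shifted towards the blue
```

This is why galactic redshifts reveal recession speeds even though every photon arrives at exactly c.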

Tuesday 5 January 2016

Free will revisited

I’ve written quite a lot on this in the past, so one may wonder what I could add.

I’ve just read Mark Balaguer’s book, Free Will, which I won when Philosophy Now published my answer to their Question of the Month in their last issue (No 111, December 2015). It’s the fourth time I’ve won a book from them (out of 5 submissions).

It’s a well written book, not overly long or over-technical in a philosophical sense, so very readable whilst being well argued. Balaguer makes it clear from the outset where he stands on this issue, by continually referring to those who argue against free will as ‘the enemies of free will’. Whilst this makes him sound combative, the tone of his arguments is measured and not antagonistic. In his conclusion, he makes the important distinction that in ‘blocking’ arguments against free will, he’s not proving that free will exists.

He makes the distinction between what he calls Hume-style free will and Non-predetermined free will (NDP), a term I believe he coined himself. Hume-style free will is otherwise known as ‘compatibilism’, which means it’s compatible with determinism. In other words, even if everything in the world is deterministic from the Big Bang onwards, it doesn’t rule out you having free will. I know it sounds like a contradiction, but I think it’s to do with the fact that a completely deterministic universe doesn’t conflict with the subjective sense we all have of having free will. As I’ve expressed in numerous posts on this blog, I think there is ample evidence that the completely deterministic universe is a furphy, so compatibilism is not relevant as far as I’m concerned.

Balaguer also coins another term, ‘torn decision’, which he effectively uses as a litmus test for free will. In a glossary in the back he gives a definition which I’ve truncated:

A torn decision is a conscious decision in which you have multiple options and you’re torn as to which option is best.

He gives the example of choosing between chocolate or strawberry flavoured ice cream and not making a decision until you’re forced to, so you make it while you’re still ‘torn’. This is the example he keeps coming back to throughout the book.

In recent times, experiments in neuroscience have provided what some people believe are ‘slam-dunk’ arguments against free will, because scientists have been able to predict with 60% accuracy what decision a subject will make seconds before they make it, simply by measuring neuron activity in certain parts of the brain. Balaguer provides the most cogent arguments I’ve come across challenging these contentions. In particular, he addresses the Haynes studies, which showed neuron activity up to 10 seconds prior to the conscious decision. Balaguer points out that the neuron activity for these studies occurs in the PC and BA10 areas of the brain, which are associated with the ‘generation of plans’ and the ‘storage of plans’ respectively. He makes the point (in greater elaboration than I do here) that we should not be surprised if we subconsciously use our ‘planning’ areas of the brain whilst trying to make ‘torn decisions’. The other experiments, known as the Libet studies (dating from the 1960s), showed neuron activity half a second prior to conscious decision-making; this activity was termed the ‘readiness potential’. Balaguer argues that there is ‘no evidence’ that the readiness potential causes the decision. Even so, it could be argued that, like the Haynes studies, it is subconscious activity happening prior to the conscious decision.

It is readily known (as Balaguer explicates) that much of our thinking is subconscious. We all have the experience of solving a problem subconsciously so it comes to us spontaneously when we don’t expect it to. And anyone who has pursued some artistic endeavour (like writing fiction) knows that a lot of it is subconscious so that the story and its characters appear on the page with seemingly divine-like spontaneity.

Backtracking to so-called Hume-style free will, it does have a relevance if one considers that our ‘wants’ - what we wish to do - are determined by our desires and needs. We assume that most of the animal kingdom behave on this principle. Few people (including Balaguer) discuss other sentient creatures when they discuss free will, yet I’ve long believed that consciousness and free will go hand-in-hand. In other words, I really can’t see the point of consciousness without free will. If everything is determined subconsciously, without the need to think, then why have we evolved to think?

But humans take thinking to a new level compared to every other species on the planet, so that we introspect and cogitate and reason and internally debate our way to many a decision.

Back in February 2009, I reviewed Douglas Hofstadter’s Pulitzer Prize-winning book, Gödel, Escher, Bach, where, among other topics, I discussed consciousness, as that’s one of the themes of his book. Hofstadter coins the term ‘strange loop’. This is what I wrote back then:

By strange loop, Hofstadter means that we can effectively look at all the levels of our thinking except the ground level, which is our neurons. In between we have symbols, which is language, which we can discuss and analyse in a dispassionate way, just like I’m doing now. I can talk about my own thoughts and ideas as if they weren’t mine at all. Consciousness, in Hofstadter’s model (for want of a better word) is the top level, and neurons are the hardware level. In between we have the software (symbols) which is effectively language.

I was quick to point out that ‘software’ in this context is a metaphor – I don’t believe that language is really software, even though we ‘download’ it from generation to generation and it is indispensable to human reasoning, which we call thinking.

The point I’d make is that this is a 2-way process: the neurons are essential to thoughts, yet our thoughts, I expect, can affect neurons. I believe there is evidence that we can and do rewire our brains simply by exercising our mental faculties, even in later years, and surely exercising them consciously is the very definition of will.

Friday 22 April 2011

Sentience, free will and AI

In the 2 April 2011 edition of New Scientist, the editorial was titled Rights for robots; We will know when it’s time to recognise artificial cognition. Implicit in the header and explicit in the text is the idea that robots will one day have sentience just like us. In fact they highlighted one passage: “We should look to the way people treat machines and have faith in our ability to detect consciousness.”

I am a self-confessed heretic on this subject because I don’t believe machine intelligence will ever be sentient, and I’m happy to stick my neck out in this forum so that one day I can possibly be proven wrong. One of the points of argument that the editorial makes is that ‘there is no agreed definition of consciousness’ and ‘there’s no way to tell that you aren’t the only conscious being in a world of zombies.’ In other words, you really don’t know if the person right next to you is conscious (or in a dream) so you’ll be forced to give a cognitive robot the same benefit of the doubt. I disagree.

Around the same time as reading this, I took part in a discussion on Rust Belt Philosophy about what sentience is. Firstly, I contend that sentience and consciousness are synonymous, and I think sentience is pretty pervasive in the animal kingdom. Does that mean that something that is unconscious is not sentient? Strictly speaking, yes, because I would define sentience as the ability to feel something, either emotionally or physically. Now, we often feel something emotionally when we dream, so arguably that makes one sentient when unconscious. But I see this as the exception that makes my definition more pertinent rather than the exception that proves me wrong.

In First Aid courses you are taught to squeeze someone’s fingers to see if they are conscious. So to feel something is directly correlated with consciousness and that’s also how I would define sentience. Much of the brain’s activity is subconscious even to the extent that problem-solving is often executed subliminally. I expect everyone has had the experience of trying to solve a puzzle, then leaving it for a period of time, only to solve it ‘spontaneously’ when they next encounter it. I believe the creative process often works in exactly the same way, which is why it feels so spontaneous and why we can’t explain it even after we’ve done it. This subconscious problem-solving is a well known cognitive phenomenon, so it’s not just a ‘folk theory’.

This complex subconscious activity observed in humans is, I believe, quite different from the complex instinctive behaviour that we see in animals: birds building nests, bees building hives, spiders building webs, beavers building dams. These activities seem ‘hard-wired’, to borrow from the AI lexicon as we tend to do.

A bee does a complex dance to communicate where the honey is. No one believes that the bee cognitively works this out the way we would, so I expect it’s totally subconscious. So if a bee can perform complex behaviours without consciousness does that mean it doesn’t have consciousness at all? The obvious answer is yes, but let’s look at another scenario. The bee gets caught in a spider’s web and tries desperately to escape. Now I believe that in this situation the bee feels fear and, by my definition, that makes it sentient. This is an important point because it underpins virtually every other point I intend to make. Now, I don’t really know if the bee ‘feels’ anything at all, so it’s an assumption. But my assumption is that sentience, and therefore consciousness, started with feelings and not logic.

In last week’s issue of New Scientist (16 April 2011), the cover features the topic Free Will: The illusion we can’t live without. The article, written by freelance writer Dan Jones, is headed The free will delusion. In effect, science argues quite strongly that free will is an illusion, but one we are reluctant to relinquish. Jones opens with a scenario set in the year 2500, when free will has been scientifically disproved and human behaviour is totally predictable and deterministic. Now, I don’t think there’s really anything in the universe that’s totally predictable – there’s even a remote possibility that Earth could one day be knocked off its orbit – but that’s the subject of another post. What’s more relevant to this discussion is Jones’ opening sentence, where he says: ‘…neuroscientists know precisely how the hardware of the brain runs the software of the mind and dictates behaviour.’ Now, this is purely a piece of speculative fiction, so it’s not necessarily what Jones actually believes. But it’s the implicit assumption that the brain’s processes are identical to a computer’s that I find most interesting.

The gist of the article, by the way, is that when people really believe they have no free will, they behave very unempathetically towards others, amongst other aberrational behaviours. In other words, a belief in our ability to direct our own destiny is important to our psychological health. So, if the scientists are right, it’s best not to tell anyone. It’s ironic that telling people they have no free will makes them behave as if they don’t, when allowing them to believe they have free will gives their behaviour intentionality. Apparently, free will is a ‘state-of-mind’.

On a more recent post of Rust Belt Philosophy, I was reminded that, contrary to conventional wisdom, emotions play an important role in rational behaviour. Psychologists now generally believe that, without emotions, our decision-making ability is severely impaired. And, arguably, it’s emotions that play the key role in what we call free will. Certainly, it’s our emotions that are affected if we believe we have no control over our behaviour. Intentions are driven as much by emotion as they are by logic. In fact, most of us make decisions based on gut feelings and rationalise them accordingly. I’m not suggesting that we are all victims of our emotional needs like immature children, but that the interplay between emotions and rational thought is the key to our behaviours. More importantly, it’s our ability to ‘feel’ that not only separates us from machine intelligence in a physical sense, but makes our ‘thinking’ inherently different. It’s also what makes us sentient.

Many people believe that emotion can be programmed into computers to aid them in decision-making as well. I find this an interesting idea, and I’ve explored it in my own fiction. If a computer reacted with horror every time we were to switch it off, would that make it sentient? Actually, I don’t think it would, but it would certainly be interesting to see how people reacted. My point is that artificially giving AI emotions won’t make them sentient.

I believe feelings came first in the evolution of sentience, not logic, and I still don’t believe that there’s anything analogous to ‘software’ in the brain, except language, and that’s specific to humans. We are the only species that ‘downloads’ a language to the next generation, but that doesn’t mean our brains run on algorithms.

So evidence in the animal kingdom, not just humans, suggests that sentience, and therefore consciousness, evolved from emotions, whereas computers have evolved from pure logic. Computers are still best at what we do worst, which is manipulating huge amounts of data – which is why the human genome project actually took less time than predicted. And we are still best at what they do worst, which is making decisions based on a host of parameters, including emotional factors as well as experiential ones.

Sunday 11 April 2010

To have or not to have free will

In some respects this post is a continuation of the last one. The following week’s issue of New Scientist (3 April 2010) had a cover story on ‘Frontiers of the Mind’ covering what it called Nine Big Brain Questions. One of these addressed the question of free will, which happened to be where my last post ended. In the commentary on question 8: How Powerful is the Subconscious? New Scientist refers to well-known studies demonstrating that neuron activity precedes conscious decision-making by 50 milliseconds. In fact, John-Dylan Haynes of the Bernstein Centre for Computational Neuroscience, Berlin, has ‘found brain activity up to 10 seconds before a conscious decision to move [a finger].’ To quote Haynes: “The conscious mind is not free. What we think of as ‘free will’ is actually found in the subconscious.”

New Scientist actually reported Haynes' work in this field back in their 19 April 2008 issue. Curiously, in the same issue, they carried an interview with Jill Bolte Taylor, who was recovering from a stroke, and claimed that she "was consciously choosing and rebuilding my brain to be what I wanted it to be". I wrote to New Scientist at the time, and the letter can still be found on the Net:

You report John-Dylan Haynes finding it possible to detect a decision to press a button up to 7 seconds before subjects are aware of deciding to do so (19 April, p 14). Haynes then concludes: "I think it says there is no free will."

In the same issue Michael Reilly interviews Jill Bolte Taylor, who says she "was consciously choosing and rebuilding my brain to be what I wanted it to be" while recovering from a stroke affecting her cerebral cortex (p 42). Taylor obviously believes she was executing free will.

If free will is an illusion, Taylor's experience suggests that the brain can subconsciously rewire itself while giving us the illusion that it was our decision to make it do so. There comes a point where the illusion makes less sense than the reality.

To add more confusion, during the last week I heard an interview with Norman Doidge, MD, research psychiatrist at the Columbia University Psychoanalytic Centre and the University of Toronto, who wrote the book The Brain That Changes Itself. I haven’t read the book, but the interview was all about brain plasticity, and Doidge specifically asserts that we can physically change our brains just through thought.

What Haynes' experimentation demonstrates is that consciousness is dependent on brain neuronal activity, and that’s exactly the point I made in my last post. Our subconscious becomes conscious when it goes ‘global’, so one would expect a time-lapse between a ‘local’ brain activity (that is subconscious) and the more global brain activity (that is conscious). But the weird part of Taylor’s experience, and of Doidge’s assertions, is that our conscious thoughts can also affect our brain at the neuronal level. This reminds me of Douglas Hofstadter’s thesis that we are all a ‘strange loop’, which he introduced in his book, Gödel, Escher, Bach, and then elaborated on in a book called I am a Strange Loop. I’ve read the former tome but not the latter one (refer my post on AI & Consciousness, Feb.2009).

We will learn more and more about consciousness, I’m sure, but I’m not at all sure that we will ever truly understand it. As John Searle points out in his book, Mind, at the end of the day, it is an experience, and a totally subjective experience at that. In regard to studying it and analysing it, we can only ever treat it as an objective phenomenon. The Dalai Lama makes the same point in his book, The Universe in a Single Atom.

People tend to think about this from a purely reductionist viewpoint: once we understand the correlation between neuron activity and conscious experience, the mystery stops being a mystery. But I disagree: I expect the more we understand, the bigger the mystery will become. If consciousness turns out to be any less weird than quantum mechanics, I’ll be very surprised. And we are already seeing quite a lot of weirdness, when consciousness is clearly dependent on neuronal activity, and yet the brain’s plasticity can be affected by conscious thought.

So where does this leave free will? Well, I don’t think that we are automatons, and I admit I would find it very depressing if that were the case. The last of the Nine Questions in last week’s New Scientist asks: will AI ever become sentient? In its response, New Scientist reports on some of the latest developments in AI, where they talk about ‘subconscious’ and ‘conscious’ layers of activity (read software). Raul Arrables of the Carlos III University of Madrid has developed ‘software agents’ called IDA (Intelligent Distribution Agent) and is currently working on LIDA (Learning IDA). By ‘subconscious’ and ‘conscious’ levels, the scientists are really talking about tiers of ‘decision-making’, or a hierarchic learning structure, which is an idea I’ve explored in my own fiction. At the top level, the AI has goals, which are effectively criteria of success or failure. At the lower level it explores various avenues until something is ‘found’ that can be passed on to the higher level. In effect, the higher level chooses the best option from the lower level. The scientists working on this two-level arrangement have even given their AI ‘emotions’, which are built-in biases that direct them in certain directions. I also explored this in my fiction, with the notion of artificial attachment to a human subject that would simulate loyalty.
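The two-tier arrangement described above can be sketched in a few lines of code. This is purely my own toy illustration – the option names, goal scores and ‘fear’ bias are invented – not the actual IDA/LIDA software:

```python
import random

def lower_level(options):
    """'Subconscious' tier: blindly try the options in no particular order."""
    return random.sample(options, k=len(options))

def upper_level(candidates, goal_score, emotion_bias):
    """'Conscious' tier: choose whichever candidate best satisfies the goals,
    tilted by built-in 'emotional' biases."""
    return max(candidates, key=lambda c: goal_score(c) + emotion_bias.get(c, 0.0))

options = ["flee", "freeze", "approach"]
best = upper_level(
    lower_level(options),
    goal_score=lambda c: {"flee": 1.0, "freeze": 0.5, "approach": 0.2}[c],  # survival goal
    emotion_bias={"flee": 0.3},  # innate 'fear' bias, nudging the choice
)
print(best)  # → flee
```

The ‘emotion’ here is nothing more than a constant bias added to the goal score, which is essentially what ‘built-in biases that direct them in certain directions’ amounts to in computational terms.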

But, even in my fiction, I tend to agree with Searle that these are all simulations, which might conceivably convince a human that an AI entity really thinks like us. But I don’t believe the brain is a computer, so I think it will only ever be an analogy or a very good simulation.

Both this development in AI and the conscious/subconscious loop we seem to have in our own brains remind me of the ‘Bayesian’ model of the brain developed by Karl Friston, also reported in New Scientist (31 May 2008). They mention it again in an unrelated article in last week’s issue – one of the little unremarkable reports they do – this time on how the brain predicts the future. Friston effectively argues that the brain, and therefore the mind, makes predictions and then modifies those predictions based on feedback. It’s effectively how the scientific method works as well, but we do it all the time in everyday encounters, without even thinking about it. Friston argues that it works at the neuronal level as well as the cognitive level. Neuron pathways are reinforced through use, which is a point that Norman Doidge makes in his interview. We now know that the brain literally rewires itself, based on repeated neuron firings.
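Friston’s predict-and-correct loop can be illustrated with a toy update rule: hold a prediction, observe feedback, and nudge the prediction towards the observation by a fraction. This is my own minimal sketch with invented numbers, not his actual free-energy formulation:

```python
def update(prediction, observation, rate=0.3):
    """Correct the prediction by a fraction of the prediction error."""
    error = observation - prediction   # the 'surprise'
    return prediction + rate * error   # the rate stands in for pathway reinforcement

prediction = 0.0
for observation in [10.0] * 20:        # the world keeps saying '10'
    prediction = update(prediction, observation)

print(round(prediction, 2))  # → 9.99, i.e. close to 10 after repeated feedback
```

Repeated feedback drives the ‘model’ towards the world, which is the sense in which the loop resembles both everyday expectation and the scientific method.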

Because we think in a language, which has become a default ‘software’ for ourselves, we tend to think that we really are just ‘wetware’ computers, yet we don’t share this ability with other species. We are the only species that ‘downloads’ a language to our progeny, independently of our genetic material. And our genetic material (DNA) really is software, as it is for every life form on the planet. We have a 4-letter code that provides the instructions to create an entire organism, materially and functionally – nature’s greatest magical trick.

One of the most important aspects of consciousness, not only in humans, but for most of the animal kingdom (one suspects) is that we all ‘feel’. I don’t expect an AI ever to feel anything, even if we programme it to have emotions.

But it is because we can all ‘feel’, that our lives mean so much to us. So, whether we have free will or not, what really matters is what we feel. And without feeling, I would argue that we would not only be not human, but not sentient.


Footnote: If you're interested in neuroscience at all, the interview linked above is well worth listening to, even though it's 40 mins long.

Saturday 3 April 2010

Consciousness explained (well, almost, sort of)

As anyone who has followed this blog for any length of time knows, I’ve touched on this subject a number of times. It deals with so many issues, including the possibilities inherent in AI and the subject of free will (the latter being one of my earliest posts).

Just to clarify one point: I haven’t read Daniel C. Dennett’s book of the same name. Paul Davies once paid him the very generous accolade of referencing it as one of the four most influential books he’s read (in company with Douglas Hofstadter’s Gödel, Escher, Bach). He said: “[It] may not live up to its claim… it definitely set the agenda for how we should think about thinking.” Then, in parenthesis, he quipped: “Some people say Dennett explained consciousness away.”

In an interview in Philosophy Now (early last year) Dennett echoed David Chalmers’ famous quote that “a thermostat thinks: it thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.” And I don’t think Dennett was talking metaphorically. This, by itself, doesn’t imbue a thermostat with consciousness, if one argues that most of our ‘thinking’ happens subconsciously.

I recently had a discussion with Larry Niven on his blog, on this very topic, where we to-and-fro’d over the merits of John Searle’s book, Mind. Needless to say, Larry and I have different, though mutually respectful, views on this subject.

In reference to Mind, Searle addresses that very quote by Chalmers by saying: “Consciousness is not spread out like jam on a piece of bread…” However, if one believes that consciousness is an ‘emergent’ property, it may very well be ‘spread out like jam on a piece of bread’, and evidence suggests, in fact, that this may well be the case.

This brings me to the reason for writing this post: New Scientist, 20 March 2010, pp.39-41; an article entitled Brain Chat by Anil Ananthaswamy (consulting editor). The article refers to a theory proposed originally by Bernard Baars of The Neuroscience Institute in San Diego, California. In essence, Baars differentiated between ‘local’ brain activity and ‘global’ brain activity, since dubbed the ‘global workspace’ theory of consciousness.

According to the article, this has now been demonstrated by experiment, the details of which, I won’t go into. Essentially, it has been demonstrated that when a person thinks of something subconsciously, it is local in the brain, but when it becomes conscious it becomes more global: ‘…signals are broadcast to an assembly of neurons distributed across many different regions of the brain.’

One of the benefits of this mechanism is that it effectively filters out anything that’s irrelevant. What becomes conscious is what the brain considers important. What criterion the brain uses to determine this is not discussed. So this is not the explanation that people really want – it’s merely postulating a neuronal mechanism that correlates with consciousness as we experience it. Another benefit of this theory is that it explains why we can’t consider two conflicting images at once. Everyone has seen the duck/rabbit combination, and there are numerous other examples. Try listening to a Bach contrapuntal fugue so that you listen to both melodies at once – you can’t. The brain mechanism (as proposed above) says that only one of these can go global, not both. It doesn’t explain, of course, how we manage to consciously ‘switch’ from one to the other.
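The filtering described above amounts to a competition in which only the most salient local signal gets broadcast globally. A toy sketch of my own, with invented ‘salience’ values, of why only one interpretation of the duck/rabbit can go global at a time:

```python
def broadcast(local_signals):
    """Return the single signal that wins access to the 'global workspace'."""
    winner = max(local_signals, key=lambda s: s["salience"])
    return winner["content"]

# Competing local processes; only one can become 'conscious'
local_signals = [
    {"content": "duck interpretation",   "salience": 0.62},
    {"content": "rabbit interpretation", "salience": 0.58},
    {"content": "background noise",      "salience": 0.10},
]
print(broadcast(local_signals))  # → duck interpretation
```

The winner-take-all bottleneck is what filters out the irrelevant, though, as noted above, what sets the salience values in the first place is the part the theory doesn’t explain.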

However, both the experimental evidence and the theory are consistent with something that we’ve known for a long time: a lot of our thinking happens subconsciously. Everyone has come across a puzzle that they can’t solve; they walk away from it, or sleep on it overnight, and the next time they look at it, the solution just jumps out at them. Professor Robert Winston demonstrated this once on TV, with himself as the guinea pig. He was trying to solve a visual puzzle (find an animal in a camouflaged background) and, when he had that ‘Ah-ha’ experience, it showed up as a spike on his brain waves – possibly the very signal of it going global, though I’m only speculating based on my new-found knowledge.

Mathematicians have this experience a lot, but so do artists. No artist knows where their art comes from. Writing a story, for me, is a lot like trying to solve a puzzle. Quite often, I have no better idea what’s going to happen than the reader does. As Woody Allen once said, it’s like you get to read it first. (Actually, he said it’s like you hear the joke first.) But his point is that all artists feel the creative act is like receiving something rather than creating it. So we all know that something is happening in the subconscious – a lot of our thinking happens where we’re unaware of it.

As I alluded to in my introduction, there are 2 issues that are closely related to consciousness, which are AI and free will. I’ve said enough about AI in previous posts, so I won’t digress, except to restate my position that I think AI will never exhibit consciousness. I also concede that one day someone may prove me wrong. It’s one aspect of consciousness that I believe will be resolved one day, one way or the other.

One rarely sees a discussion on consciousness that includes free will (Searle’s aforementioned book, Mind, is an exception, and he devotes an entire chapter to it). Science seems to have an aversion to free will (refer my post, Sep.07) which is perfectly understandable. Behaviours can only be explained by genes or environment or the interaction of the two – free will is a loose cannon and explains nothing. So for many scientists, and philosophers, free will is seen as a nice-to-have illusion.

Consciousness evolved, but if most of our thinking is subconscious, it raises the question: why? As I expounded on Larry’s blog, I believe that one day we will have AI that will ‘learn’; what Penrose calls ‘bottom-up’ AI. Some people might argue that we require consciousness for learning, but insects demonstrate learning capabilities, albeit rudimentary compared to what we achieve. Insects may have consciousness, by the way, but learning can be achieved through reinforcement and punishment – we’ve seen it demonstrated in animals at all levels – and they don’t have to be conscious of what they’re doing in order to learn.
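Learning by reinforcement and punishment alone can be sketched as a simple value-update loop with no reasoning in it at all. The actions and rewards below are invented for illustration; the point is only that preferences form from blind trial and feedback:

```python
import random

def train(actions, reward, episodes=200, rate=0.1, seed=1):
    """Adjust action values by reward alone, with no model of 'why'."""
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}
    for _ in range(episodes):
        a = rng.choice(actions)                    # blind trial
        value[a] += rate * (reward(a) - value[a])  # reinforce or punish
    return max(value, key=value.get)               # the learned preference

# 'nectar' is rewarded, 'web' is punished
best = train(["nectar", "web"], reward=lambda a: 1.0 if a == "nectar" else -1.0)
print(best)  # → nectar
```

Nothing in the loop represents the task or the reasons behind the reward, which is the sense in which learning, as claimed above, doesn’t require consciousness.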

So the only evolutionary reason I can see for consciousness is free will, and I’m not confining this to the human species. If, as science likes to claim, we don’t need, or indeed don’t have, free will, then arguably, we don’t need consciousness either.

To demonstrate what I mean, I will relate 2 stories of people reacting in an aggressive manner in a hostile situation, even though they were unconscious.

One case occurred within the last 10 years, in Sydney, Australia (from memory), when a female security guard was knocked unconscious and her bag (of money) was taken from her. In front of witnesses, she got up, walked over to the guy (who was now in his car), pulled out her gun and shot him dead. She had no recollection of doing it. Now, you may say that’s a good defence, but I know of at least one other similar incident.

My father was a boxer, and when he first told me this story, I didn’t believe him, until I heard of other cases. He was knocked out, and when he came to, he was standing and the other guy was on the deck. He had to ask his second what happened. He gave up boxing after that, by the way.

The point is that both of those cases illustrate that humans can perform complicated acts of self-defence without being consciously cognisant of it. The question is: why is this the exception and not the norm?


Addendum: Nicholas Humphrey, whom I have possibly incorrectly criticised in the past, has an interesting evolutionary explanation: consciousness allows us to read others’ minds. Previously, I thought he authored an article in SEED magazine (2008) that argued that consciousness is an illusion, but I can only conclude that it must have been someone else. Humphrey discovered ‘blindsight’ in a monkey (called Helen) with a surgically-removed visual cortex, which is an example of a subconscious phenomenon (sight) with no conscious correlation. (This specific phenomenon has since been found in humans with a damaged visual cortex as well.)


Addendum 2: I have since written a post called Consciousness unexplained in Dec. 2011 for those interested.

Thursday 14 May 2009

Socrates, Russell, Sartre, God and Taoism

An unlikely congregation, but bear with me and it will all become clear. Earlier this week I received 2 new books from Amazon UK: The Mind’s I, by Douglas R. Hofstadter and Daniel C. Dennett; and Fundamental Forces of Nature; The Story of Gauge Fields, by Kerson Huang.

Huang is a Chinese-born American, now Professor of Physics, Emeritus, at MIT, and was 79 years old when he published this book in 2007. The book covers all of physics in a historical, therefore evolutionary, context, from Newtonian physics (F = ma) up to QED (quantum electrodynamics) and beyond, though it doesn’t include String Theory. The presentation is very unusual, with equations kept deliberately minimalist, yet he manages to explain, for example, the subtle difference between Faraday’s equations and Maxwell’s (an extra term, effectively) that led to the prediction of electromagnetic waves propagating at the speed of light. He also introduces mathematical concepts like Lagrangians and Hamiltonians early in his treatise; an unusual approach.

Its relevance to the title of this post is at the end, where he quotes a Taoist poet, Qu Yuan (340-278 BC) who wrote a series of questions called Tian Wen (Ask Heaven):

At the primordial beginning

Who was the Reporter?

Before the universe took shape.

How could one measure it?

(Huang also provides the original Mandarin.)

Then he quotes Russell on mathematical beauty:

A beauty so cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without gorgeous trappings or painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show.

He follows this quote with the following rumination of his own:

Physics is truth. It sails down a trajectory in the space of Lagrangians, when the energy scale shrinks from that set by the Big Bang.

I sometimes think that God is in the mathematics; I’ll explain myself at the end.

But the subject of this post really comes from an essay written by Raymond M. Smullyan (in Dennett’s and Hofstadter’s book) titled Is God a Taoist? It’s very cleverly written in the style of a Socratic dialogue between God and a mortal who wants God to relieve him of free will. It reminds me of Sartre’s seminal essay, Existentialism is a Humanism, with its famous quote: ‘man is condemned to be free’. I once wrote an entire essay founded on that quote alone, but that’s not the subject of this post.

Smullyan manages to cover an array of topics, including free will and morality, in which, via a lengthy Socratic dialogue, he concludes that the real virtue of free will is that it mandates responsibility for the infliction of suffering on others. In other words, you know when you’ve done it, and you will feel guilt and remorse as a consequence. This is not a verbatim interpretation, just my own summary of it. The dialogue effectively gets the mortal to admit this when God offers to free him of all guilt associated with his ‘free will’. So the choice then of allowing God to rid him of free will, and its consequences, becomes a moral choice in itself, therefore turning the moral dilemma back on itself.

But it’s the particular Eastern references in this essay that appealed to me, in which Smullyan incorporates the idea of God as a process. (A concept I’ve flirted with myself, though Smullyan’s concept is more Eastern in influence.)

To quote Smullyan’s God character in the dialogue:

My role in the scheme of things... is neither to punish nor reward, but to aid the process by which all sentient beings achieve ultimate perfection.

Then to elaborate:

…it is inaccurate to speak of my role in the scheme of things. I am the scheme of things. Secondly, it is equally misleading to speak of my aiding the process of sentient beings attaining enlightenment. I am the process. The ancient Taoists were quite close when they said of me (whom they called “Tao”) that I do not do things, yet through me all things get done. In more modern terms, I am not the cause of Cosmic Process. I am the Cosmic Process itself.

Smullyan, then (as God) quotes the Mahayana Buddhists:

The best way of helping others is by first seeing the light [in] oneself.

He also addresses the issue of personality (of God):

But the so-called “personality” of a being is really more in the eyes of the beholder than in the being itself.

I hope I haven’t been too disparate in this rendition of someone else’s essay. Hofstadter provides his own commentary at the end, with particular reference to the role of free will which he describes thus: ‘a person is an amalgamation of many subpersons, all with wills of their own.’ He says: ‘It’s a common myth that each person is a unity.’ I assume he’s talking about split brains, but I won’t explore that issue here, as Smullyan’s essay has other resonances for me. (I admit I'm not doing justice to Hofstadter, but I don't want to get distracted; maybe another post.)

I’ve said in previous posts that God is an experience, which is one reason I claim religion is totally subjective, because it’s an experience that can’t be shared – it’s unique to the person who has it and only they can interpret it. The essay by Smullyan makes only passing reference to this idea of God (when he discusses personality). I believe he’s referring to a more universal concept, but in an Eastern context rather than a Western one.

I can’t help but make a connection between Huang’s book and Smullyan’s essay, because they both relate to 2 of my lifelong passions: science and religion. Mathematics has given us such extraordinary insights into the physical processes of the universe, at every level, and the idea of God as the process itself, in which we play a very small part is an appealing one. And calling it the Tao, effectively rids it of human personality.

Most people would make no connection between these 2 ideas, but I sometimes think I am a Pythagorean at heart. Mathematics is such a magical medium that one cannot dissociate it from God, especially if God is the Tao, and Tao is ‘the scheme of things’.

Saturday 15 September 2007

Free Will

Below is an argument that I formed and submitted to American Scientist in response to an essay by Gregory Graffin and William Provine, who conducted a survey amongst biology students on their beliefs in religion, God and free will. It was their argument on free will that evoked my response. When they say 'it adds nothing to the science of human behaviour' (quoted below) they are right. As far as science is concerned, if human behaviour can't be explained by a combination of genetics and environment, then invoking 'free will' won't help. It's a bit like invoking God to explain evolution (see my blog posting on Intelligent Design), so I can understand their argument.

When it comes to studying anything to do with consciousness, we can only examine the consequences caused by a conscious being interacting with its environment. It's not unlike the dilemma we face in quantum mechanics, where we don't know what's happening until we take a measurement or make an observation. If we didn't experience consciousness as individuals, we would probably claim that it didn't exist, because there is no direct evidence of it except through our own thoughts. And this also applies to free will, which, after all, is a manifestation of consciousness. Effectively, Graffin and Provine are saying that free will is an illusion created by the fact that we are conscious beings but, if one takes their argument to its logical conclusion, all conscious thoughts are caused by an interaction of our genetic disposition with our environment. So what is the evolutionary purpose of consciousness if our thoughts are just an unnecessary by-product?

Below is my original argument that I submitted to American Scientist

In the July-August 2007 issue of American Scientist (Evolution, Religion and Free Will), Gregory W. Graffin and William B. Provine contend that free will is non-existent because it ‘adds nothing to the science of human behaviour.’ This would follow logically from the premise that any idea, concept or belief that can’t be scientifically examined, measured or hypothetically tested must be an illusion or a cultural relic. They point out that evolutionary biologists who believe in free will suffer from the misconception that choice and free will are synonymous. One always has a choice – it’s just that when it’s made, it’s predetermined. I sense a contradiction. So there is no ‘intentionality’, which lies at the heart of consciousness as we experience it, and is discussed by John Searle in his book, Mind (2004). This leads to a conundrum: if all intentionality is predetermined, then why has evolution given us consciousness? It’s hard to escape the conclusion that the ‘illusion’ of free will must therefore have evolutionary value – maybe that’s its contribution to the science of human behaviour.