Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Saturday, 8 August 2009

Memetics

Susan Blackmore is a well-known proponent of ‘memes’, and she wrote an article in New Scientist, 1 August 2009, called The Third Replicator, which is about the rise of the machines. No, this has nothing to do with the so-called Singularity Prophecy (see my post of that title in April this year). I haven’t read any of Blackmore’s books, but I’ve read articles by her before. She’s very well respected in her field, which is evolutionary psychology. By the ‘Third Replicator’ she’s talking about the next level of evolution, following genes and memes: the evolution of machine knowledge, if I get the gist of her thesis. I find Blackmore a very erudite scholar and writer, but I have philosophical differences.

I’ve long had a problem with the term meme, partly because I believe it is over-used and over-interpreted, though I admit it is a handy metaphor. When I first heard the term meme, it was used in the context of cultural norms or social norms, so I thought: why not use ‘social norms’, as we do in social psychology? Yes, they get passed on from generation to generation and they ‘mutate’, and one could even say that they compete, but the analogy with genes has a limit, and the limit is that there are no analogous phenotypes and genotypes with memes as there are with genes (I made the same point in a post on Human Nature in Nov.07). And Dawkins himself makes the exact same point in his discussion on memes in The God Delusion. Dawkins talks about ‘memeplexes’ arising from a ‘meme-pool’, and in terms of cultural evolution one can see merit in the meme called ‘meme’, but I believe it ignores other relevant factors, as I discuss below.

Earlier this year I referenced essays in Hofstadter and Dennett’s The Mind’s I (Subjectivity, Jun.09; and Taoism, May 09). One of the essays included is Dawkins’ Selfish Genes and Selfish Memes. In another New Scientist issue (18 July 2009), Fern Elsdon-Baker, head of the British Council’s Darwin Now project, is critical of what she calls the Dawkins dogma, saying: ‘Metaphors that have done wonders for people’s understanding of evolution are now getting in the way’; and ‘Dawkins contribution is indisputable, but his narrow view of evolution is being called into question.’ Effectively, Elsdon-Baker is saying that the ‘selfish gene’ metaphor has limitations as well, which I won’t discuss here, but I certainly think the ‘selfish meme’ metaphor can be taken too literally. People tend to forget that neither genes nor memes have any ‘will’ (Dawkins would be the first to point this out), yet the qualifier, ‘selfish’, implies just that. However, it’s a metaphor, remember, so there’s no contradiction. Now I know that everyone knows this, but in the case of memes, I think it’s necessary to state it explicitly, especially when Blackmore (and Dawkins) compare memes to biological parasites.

Getting back to Blackmore’s article: the first replicators are biological, being genes; the second replicators are human brains, because we replicate knowledge; and the third replicators will be computers, because they will eventually replicate knowledge or information independently of us. This is an intriguing prediction and there’s little doubt that it will come to pass in some form or another. Machines will pass on ‘code’ analogous to the way we do, since DNA is effectively ‘code’, albeit written in nucleotide bases rather than binary digits. But I think Blackmore means something else: machines will share knowledge and change it independently of us, which is a subtly different interpretation. In effect, she’s saying that computers will develop their own ‘culture’ independently of ours, in the same way that we have created culture independently of our biological genes. (I will return to this point later.)

And this is where the concept of meme originally came from: the idea that cultural evolution, specifically in the human species, overtook biological evolution. I first came across this idea, long before I’d heard of memes, when I read Arthur Koestler’s The Ghost in the Machine. Koestler gave his own analogy, which I’ve never forgotten. He made the point that the human brain really hasn’t changed much since Homo sapiens first started walking on the planet, but what we had managed to do with it had changed irrevocably. The analogy he gave was to imagine someone, say a usurer, living in medieval times, who used an abacus to work out their accounts; then one morning they woke up to find it had been replaced with an IBM mainframe computer. That is what the human brain was like when it first evolved – we really had no idea what it was capable of. But culturally we evolved independently of biological evolution, and from this observation Dawkins coined the term, meme, as an analogy to biological genes, and, in his own words, the unit of selection.

But reading Blackmore: ‘In all my previous work in memetics I have used the term “meme” to apply to any information that is copied between people…’. So, by this definition, the word meme covers everything that the human mind has ever invented, including stories, language, musical tunes, mathematics, people’s names, you name it. When you use one idea to encompass everything, the term tends to lose its explanatory value. I think there’s another way of looking at this, and it’s to do with examining the root cause of our accelerated accumulation of knowledge.

In response to a comment on a recent post (Storytelling, last month) I pointed out how our ability to create script effectively allows us to extend our long-term memory, even across generations. Without script, as we observe in many indigenous cultures, dance and music allow the transmission of knowledge across generations orally. But it is this fundamental ability, amplified by the written word, that has really driven the evolution of culture, whether it be in scientific theories, mathematical ideas, stories, music, even history. Are all these things memes? By Blackmore’s definition (above) the answer is yes, but I think that’s stretching the analogy, if for no other reason than that many of these creations are designed, not selected. But leaving that aside, the ability to record knowledge for future generations has arguably been the real accelerant in the evolution of culture in all its manifestations. We can literally extend our memories across generations – something that no other species can do. So where does this leave memes? As I alluded to above, not everything generated by the human mind is memetic in my opinion, but I’ll address that at the end.

Going back to my original understanding of meme as a cultural or social norm, I can see its metaphorical value. I still see it as an analogy to genes – in other words, as a metaphor. Literally, memes are social norms, but they are better known for their metaphorical meaning as analogous to genes. If, on the other hand, memes are all knowledge – in other words, everything that is embedded in human language – then the metaphor has been stretched too far to be meaningful in my view. A metaphor is an analogy without the word ‘like’, and analogies are the most common means to explain a new concept or idea to someone else. It is always possible that people can take a metaphor too literally, and I believe memes have suffered that fate.

As for the ‘third replicator’, it’s an intriguing and provocative idea. Will machines create a culture independently of human culture that will evolutionarily outstrip ours? It’s the stuff of science fiction, which, of course, doesn’t make it nonsense. I think there is the real possibility of machines evolving, and I’ve explored it in my own ventures into sci-fi, but how independent they will become of their creators (us) is yet to be seen. Certainly, I see the symbiotic relationship between us and technology only becoming more interdependent, which means that true independence may never actually occur.

However, the idea that machine-generated ideas will take on a life of their own is not entirely new. What Blackmore is suggesting is that such ideas won’t necessarily interact with humanity for selection and propagation. As she points out, we already have viruses and search engines that effectively do this, but it’s their interaction with humanity that eventually determines their value and their longevity, thus far. One can imagine, however, a virus remaining dormant and then becoming active later, like a recessive gene, so there: the metaphor has just been used. Because computers use code, analogous to DNA, then comparisons are unavoidable, but this is not what Blackmore is referring to.

Picture this purely SF scenario: we populate a planet with drones to ‘seed’ it for future life, so that for generations they have no human contact. Could they develop a culture? This is Asimov territory, and at this stage of technological development, it is dependent on the reader’s, or author’s, imagination.

One of Blackmore’s principal contentions is that memes have almost been our undoing as a species in the past, but we have managed to survive all the destructive ones so far. What she means is that some ideas have been so successful, yet so destructive, that they could have killed off the entire human race (any ideology used as a premise for global warfare would have sufficed). Her concern now is that the third replicator (machines) could create the same effect. In other words, AI could create a run-away idea that could ultimately be our undoing. Again, this has been explored in SF, including stories I’ve written myself. But, even in my stories, the ‘source’ of the ‘idea’ was originally human.

However, as far as human constructs go, we’re not out of the woods by a long shot, with the most likely contender being infinite economic growth. I suspect Blackmore would call it a meme but I would call it a paradigm. The problem is that a meme implies it’s successful because people select it, whereas I think paradigms are successful simply because they work: they succeed at whatever they predict, like scientific theories and mathematical formulae, all of which are inherently un-memetic. In other words, they are not successful because we select them, but we select them because they are successful, which turns the meme idea on its head.

But whatever you want to call it, economic growth is so overwhelmingly successful – socially, productively, politically, on both micro and macro scales – that it is absolutely guaranteed to create a catastrophic failure if we continue to assume the Earth has infinite resources. But that’s a subject for another post. Of course, I hope I’m totally wrong, but I think that’s called denial. Which raises the question: is denial a meme?

Saturday, 11 April 2009

The Singularity Prophecy

This is not a singularity you find in black holes or at the origin of the universe – this is a metaphorical singularity entailing the breakthrough of artificial intelligence (AI) to transcend humanity. And prophecy is an apt term, because there are people who believe in this with near-religious conviction. As Wilson da Silva says, in reference to its most ambitious interpretation as a complete subjugation of humanity by machine, ‘It’s been called the “geek rapture”’.

Wilson da Silva is the editor of COSMOS, an excellent Australian science magazine I’ve subscribed to since its inception. The current April/May 2009 edition has essay length contributions on this topic from robotics expert, Rodney Brooks, economist, Robin Hanson, and science journalist, John Horgan, along with sound bites from people like Douglas Hofstadter and Steven Pinker (amongst others).

Where to start? I’d like to start with Rodney Brooks, an ex-pat Aussie, who is now Professor of Robotics at Massachusetts Institute of Technology. He’s also been Director of the same institute’s Computer Science and Artificial Intelligence Lab, and founder of Heartland Robotics Inc. and co-founder of iRobot Corp. Brooks brings a healthy tone of reality to this discussion after da Silva’s deliberately provocative introduction of the ‘Singularity’ as ‘Rapture’. (In a footnote, da Silva reassures us that he ‘does not expect to still be around to upload his consciousness to a computer.’)

So maybe I’ll backtrack slightly, and mention Raymond Kurzweil (also the referenced starting point for Brooks) who does want to upload (or download?) his consciousness into a computer before he dies, apparently (refer Addendum 2 below). It reminds me of a television discussion I saw in the 60s or 70s (in the days of black & white TV) of someone seriously considering cryogenically freezing their brain for future resurrection, when technology would catch up with their ambition for immortality. And let’s be honest: that’s what this is all about, at least as far as Kurzweil and his fellow proponents are concerned.

Steven Pinker makes the point that many of the science fiction fantasies of his childhood, like ‘jet-pack commuting’ or ‘underwater cities’, never came to fruition, and he would put this in the same bag. To quote: ‘Sheer processing power is not a pixie dust that magically solves all your problems.’

Back to Rodney Brooks, who is one of the best qualified to comment on this, and provides a healthy dose of scepticism, as well as perspective. For a start, Brooks points out how robotics hasn’t delivered on its early promises, including his own ambitions. He notes that current computer technology still can’t deliver the following childlike abilities: ‘object recognition of a 2 year-old; language capabilities of a 4 year-old; manual dexterity of a 6 year-old; and the social understanding of an 8 year-old.’ To quote: ‘[basic machine capability] may take 10 years or it may take 100. I really don’t know.’

Brooks states at the outset that he sees biological organisms, and therefore the brain, as a ‘machine’. But the analogy for interpretation has changed over time, depending on the technology of the age. During the 17th Century (Descartes’ time), the model was hydrodynamics, and in the 20th century it went from a telephone exchange, to a logic circuit, to a digital computer, and even to the world wide web (Brooks’ exposition in brief).

Brooks believes the singularity will be an evolutionary process, not a ‘big bang’ event. He sees the singularity as the gradual evolution of machine intelligence until it becomes virtually identical to our own, including consciousness. Hofstadter expresses a similar belief, but he ‘…doubt[s] it will happen in the next couple of centuries.’ I have to admit that this is where I differ, as I don’t see machine intelligence becoming sentient, even though my view is in the minority. I provide an argument in an earlier post (The Ghost in the Machine, April 08) where I discuss Henry Markram’s ‘Blue Brain’ project, with a truckload of scepticism.

Robin Hanson is author of The Economics of the Singularity, and is Associate Professor of Economics at George Mason University in Virginia. He presents a graph of economic growth via ‘Average world GDP per capita’ on a logarithmic scale from 10,000BC to the last 4 weeks. Hanson explains how the world economy has made quantum leaps at historical points: specifically, the agricultural revolution, the industrial revolution and the most recently realised technological revolution. The ‘Singularity’ will be the next revolution, and it will dwarf all the economic advances made to date. I know I won’t do justice to Hanson’s thesis, but, to be honest, I don’t want to spend a lot of space on it.

For a start, all these disciples of the extreme version of the Singularity seem to forget how the other half live, or, more significantly, simply ignore the fact that the majority of the world’s population doesn’t live in a Western society. In fact, for the entire world to enjoy ‘Our’ standard of living would require 4 planet earths (ref: E.O. Wilson, amongst others). But I won’t go there, not on this post. Except to point out that many of the world’s people struggle to get a healthy water supply, and that is going to get worse before it gets better; just to provide a modicum of perspective for all the ‘rapture geeks’.

I’ve left John Horgan’s contribution to last, just as COSMOS does, because he provides the best realism check you could ask for. I’ve read all of Horgan’s books, but The End of Science is his best read, even though, once again, I disagree with his overall thesis. It’s a treasure because he interviews some of the best minds of the latter 20th Century, some of whom are no longer with us.

I was surprised and impressed by the depth of knowledge Horgan reveals on this subject, in particular the limitations of our understanding of neurobiology and the inherent problems in creating direct neuron-machine interfaces. One of the most pertinent aspects he discusses is the sheer plasticity of the brain in its functionality. Just to give you a snippet: ‘…synaptic connections constantly form, strengthen, weaken and dissolve. Old neurons die and – evidence is overturning decades of dogma – new ones are born.’

There is a sense that the brain makes up neural codes as it goes along – my interpretation, not Horgan's – but he cites Steven Rose, neurobiologist at Britain's Open University, based in Milton Keynes: 'To interpret the neural activity corresponding to any moment… scientists would need "access to [someone's] entire neural and hormonal life history" as well as to all [their] corresponding experiences.'

It’s really worth reading Horgan’s entire essay – I can’t do it justice in this space – he covers the whole subject and puts it into a perspective the ‘rapture geeks’ have yet to realise.

I happened to be re-reading John Searle’s Mind when I received this magazine, and I have to say that Searle’s book is still the best I’ve read on this subject. He calls it ‘an introduction’, even on the cover, and reiterates that point more than once during his detailed exposition. In effect, he’s trying to tell us how much we still don’t know.

I haven’t read Dennett’s Consciousness Explained, but I probably should. In the same issue of COSMOS, Paul Davies references Dennett’s book, along with Hofstadter’s Godel, Escher, Bach, as 2 of the 4 most influential books he’s read, and that’s high praise indeed. Davies says that while Dennett’s book ‘may not live up to its claim… it definitely set the agenda for how we should think about thinking.’ But he also adds, in parenthesis, that ‘some people say Dennett explained consciousness away’. I think Searle would agree.

Dennett is a formidable philosopher by anyone’s standards, and I’m certainly not qualified, academically or otherwise, to challenge him, but I obviously have a different philosophical perspective on consciousness to him. In a very insightful interview over 2 issues of Philosophy Now, Dennett elaborated on his influences, as well as his ideas. He made the statement that ‘a thermostat thinks’, which is a well known conjecture originally attributed to David Chalmers (according to Searle): it thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.

Searle attacks this proposition thus: ‘Consciousness is not spread out like jam on a piece of bread… If the thermostat is conscious, how about parts of the thermostat? Is there a separate consciousness to each screw? Each molecule? If so, how does their consciousness relate to the consciousness of the whole thermostat?’

The corollary to this interpretation, and Dennett’s, is that consciousness is just a concept with no connection to anything real. If consciousness is an emergent property, an idea that Searle seems to avoid, then it may well be ‘spread out like jam on a piece of bread’.

To be fair to Searle (I don't want to misrepresent him when I know he'll never read this), he does see consciousness as being on a different level from neuron activity (like Hofstadter), and he acknowledges that this is one of the factors that makes consciousness so misunderstood by philosophers and non-philosophers alike.

But I’m getting off the track. The most important contribution Searle makes, that is relevant to this whole discussion, is that consciousness has a ‘first person ontology’ yet we attempt to understand it solely as a ‘third person ontology’. Even the Dalai Lama makes this point, albeit in more prosaic language, in his book on science and religion, The Universe in a Single Atom. Personally, I find it hard to imagine that AI will ever make the transition from third person to first person ontology. But I may be wrong. To quote my own favourite saying: 'Only future generations can tell us how ignorant the current generation is'.

There are 2 aspects to the Singularity prophecy: we will become more like machines, and they will become more like us. This is something I’ve explored in my own fiction, and I will probably continue to do so in the future. But I think that machine intelligence will complement human intelligence rather than replace it. As we are already witnessing, computers are brilliant at the things we do badly and vice versa. I do see a convergence, but I also expect the complementary nature of machine intelligence not only to continue, but actually to improve. AI will get better at what it does best, and we will do the same. There is no reason, based on developments to date, to assume that we will become indistinguishable, Turing tests notwithstanding. In other words, I think there will always remain attributes uniquely human, as AI continues to dazzle us with abilities that are already beyond us.

P.S. I review Douglas Hofstadter's brilliant book, Godel, Escher, Bach: an Eternal Golden Braid, in a post I published in Feb.09: Artificial Intelligence & Consciousness.

Addendum: I'm led to believe that at least 2 of the essays cited above were originally published in IEEE Spectrum Magazine prior to COSMOS (ref: the authors themselves). Addendum 2: I watched the VBS.TV Video on Raymond Kurzweil, provided by a contributor below (Rory), and it seems his quest for longevity is via 'nanobots' rather than by 'computer-downloading his mind' as I implied above.

Saturday, 14 February 2009

Godel, Escher, Bach - Douglas Hofstadter's seminal tome

The original title of this post was Artificial Intelligence and Consciousness.

This is perhaps the hardest of subjects to tackle. I’ve just finished reading Douglas R. Hofstadter’s book, Godel, Escher, Bach: an Eternal Golden Braid, which attempts to address this very issue, even if in a rather unusual way.

Earlier last year I read Roger Penrose’s book, Shadows of the Mind, which addresses exactly the same issue. What is interesting is that, in both cases, the authors use Godel’s Incompleteness Theorem to support completely different, one could even say opposing, philosophical viewpoints. Both Penrose and Hofstadter are intellectual giants compared to me, but what I find interesting is that both apparently start with their philosophical viewpoints and then find arguments to support them, rather than the other way round. Hofstadter quotes, more than once, the Oxford philosopher, J.R. Lucas, whom he obviously respects, but philosophically disagrees with. Likewise, I found myself often in agreement with Hofstadter on many of his finer points, but still in disagreement with his overall thesis. I think it’s obvious from other posts on this blog, that I am much closer to Penrose’s philosophy in many respects, not just on AI.

Having said all that, this is a very complex and difficult subject, and I’m not at all sure I can do it justice. What goes hand in hand with the subject of AI, and Hofstadter doesn’t shy away from this, is the notion of consciousness. Can AI ever be conscious in the way we are? Hofstadter says yes, and Penrose, I believe, would say no. (Penrose effectively argues that algorithm-using machines – computers - will never think like humans.) Another person who has much to say on this subject is John Searle, and he would almost certainly say no, based on his famous ‘Chinese Room’ thought experiment. (I expound on this in my Apr.08 post: The Ghost in the Machine).

Larry Niven in one of his comments on his own blog, in response to one of my comments, made the observation that science hasn’t resolved the brain/mind conundrum, and gave it as an example of ‘…the impotence of scientific evidence to affect philosophical debates…’ (I’m sure if I’ve misinterpreted him, or quoted him out of context, he’ll let me know.)

To throw a googly into the mix, since Hofstadter first published the book 30 years ago, a lot of work has been done in this area, and one of the truly interesting ideas is the Bayesian model of the brain based on Bayesian probability, proposed by Karl Friston (New Scientist 31 May 08). In a nutshell, Friston proposes that the brain functions on the same principle at all levels, which is to make an initial assessment then modify it based on additional information. He claims this works even at the neuron level, as well as the cognitive level. (I report on this in my July 08 post titled, Epistemology; a discussion.) I even extrapolate this up the cognitive tree to include the scientific method, whereby we hypothesise, follow up with experimentation or observation, then modify the hypothesis accordingly.
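
For readers who haven't met it, the arithmetic underlying this idea is just Bayes' rule, which I'll state here in its standard textbook form (my addition, not Friston's own notation):

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

Here P(H) is the prior (the initial assessment), E is the new evidence, and P(H | E) is the revised posterior, which then serves as the prior for the next round of evidence.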

Hofstadter makes a similar point about ‘default options’ that we use in everyday observations, like the way we use stereotypes. It’s only by evaluating a specific case in more detail that we can break away from a stereotypic interpretation of an event. This is also an employment of the Bayesian principle, but Hofstadter doesn’t say this because it hadn’t been proposed at the time he wrote it.

What Searle points out in his excellent book, Mind, is that consciousness is an experience, which is so subjective that we really don’t know if anyone else experiences it the way we do – we only assume they do. Stephen Law writes about this in his book, The Philosophy Gym, and I challenged him (by snail mail at the time) that this was a conceit on his part, because he obviously expected that people who read his book could think like him, which means they must be conscious. It was a good-natured jibe, even though I’m not sure he saw it that way at the time, but he was generous in his reply.

Descartes’ famous statement, ‘I think therefore I am’, has been pilloried over the centuries since he wrote it, but I would contend that ‘I think’ is a tautology, because ‘I’ is your thoughts and nothing else. This gets to the heart of Hofstadter’s thesis, that we, individually, are all ‘strange loops’. Hofstadter employs Godel’s Theorem in an unusual, analogous way to make this contention: we are ‘strange loops’. By strange loop, Hofstadter means that we can effectively look at all the levels of our thinking except the ground level, which is our neurons. In between we have symbols – effectively language – which we can discuss and analyse in a dispassionate way, just like I’m doing now. I can talk about my own thoughts and ideas as if they weren’t mine at all. In Hofstadter’s model (for want of a better word), consciousness is the top level, neurons are the hardware level, and in between sits the software (symbols), which is language.

I think language as software is a good metaphor but not necessarily a literal interpretation. Software means algorithms, which are effectively instructions. Whilst language obviously contains rules, I don’t see it as particularly algorithmic, though others, including Hofstadter, may disagree. On the other hand, I do see DNA as algorithmic in the way it creates organisms, and Hofstadter makes the same leap of interpretation.

The analogy with Godel’s Theorem is that, in any consistent formal mathematical system rich enough to express arithmetic, there will always exist a true mathematical statement about the system that can’t be proved within the system, if I’ve got it right. In other words, there will always exist a ‘correct’ mathematical statement that is not derivable from the original formal system, which is why it is called the Incompleteness Theorem – no formal mathematical system of this kind can ever be complete, in the sense of proving all true mathematical statements. In this analogy, the self or ‘I’ is like a Godelian entity that is a product of the system but not contained in it. Again, my interpretation may not be what Hofstadter intended, but it’s the best I can make of it. It exists at another level, I think is what Hofstadter would say.
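
For the record, the standard statement of the theorem runs roughly as follows (my paraphrase of the textbook result, not Hofstadter's wording): for any consistent, effectively axiomatised formal system F rich enough to express basic arithmetic, there is a sentence G_F, which in effect says 'I am not provable in F', such that

    F \nvdash G_F \quad \text{and, provided } F \text{ is } \omega\text{-consistent,} \quad F \nvdash \neg G_F

Yet G_F is true of the natural numbers, so F cannot prove every true statement expressible in its own language.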

In another part of the book, Hofstadter makes a direct ‘mapping’ which he calls a ‘dogmap’ (play on words for dogma) where he compares DOGMA I ‘Molecular Biology’ with DOGMA II ‘Mathematical Logic’, using Godel’s Theorem ‘self-referencing’ as directly comparable to DNA/RNA’s ‘self reproduction’. He admits this is an analogy but later acknowledges that the same mapping may be possible from Godel's Theorem to consciousness.

Even without this allusion by Hofstadter, and no Godelian analogy required, I see a direct comparison between the way DNA/RNA creates complex organisms and the way neurons create thoughts. In both cases there is a gulf of layers in between that makes one wonder how they could have evolved. Of course, this is grist for ID advocates and I’ve even come across a blogger (Sophie) who quotes Hofstadter to make this very point.

In one of my earliest posts on this blog (The Universe’s Interpreters, Sep. 07) I make the point that the universe consists of worlds within worlds, and the reason we can comprehend it to the extent that we do, is because we can conjure concepts within concepts ad infinitum. Hofstadter makes a similar point, though not in the same words, but at least 2 decades before I thought of it.

DNA/RNA exists at a level far removed from the end result, which is a living complex organism, yet there is a direct causal relationship. Neurons are cells that exist at a level far removed from the end result, which is consciousness, yet there is a direct causal relationship.

These 2 cases, DNA to complex organisms and neurons to consciousness, I think remain the 2 greatest mysteries of the natural world. To say that they can only be explained by invoking a ‘Designer’ (God) is to say we’ve uncovered everything we know about the universe at all of its levels of complexity and only God can explain everything else. I would call this the defeatist position if it were to be taken seriously. But, in effect, the ID advocates are saying that whilst any mysteries remain in our comprehension of the universe, there will always be a role for God. Once we find an explanation for these mysteries, there will be other mysteries, perhaps at other levels, that we can still employ God to explain. So the argument will never stop. Before Newton it was the orbits of the planets, and before Mendel it was the passing down of genetic traits. Now it is the origin of DNA. The mysteries may get deeper but past experience says that we will find an answer and the answer won’t be God (see my Dec.08 post: The God hypothesis; not).

As a caveat to the above argument, I've said elsewhere (Emergent phenomena, Oct. 08) that we may never understand consciousness as a direct mathematical relationship to neuron activity (although Penrose pins his hopes on quantum phenomena). And I'm unsure that we will ever be able to explain how it becomes an experience, and that's one of the reasons I'm sceptical that AI will ever have that experience. But this lack of understanding is not evidence of God; it's just evidence of our lack of understanding.

To quote Confucius: 'To realise that you know something when you do, and to realise that you do not know when you do not, this is knowing.' Or to quote his near contemporary, Socrates, who put it more succinctly: 'The height of wisdom is to know how thoroughly ignorant we are.'

My personal hypothesis, completely speculative with no scientific evidence at all, is that maybe there is a feedback mechanism that goes from the top level to the bottom level that we’ve yet to discover. They are both mysteries that most people don’t contemplate and it took Hofstadter’s book, written over 3 decades ago, to bring them fully home to me, and to appreciate how analogous they are: the base level causally affects the top level, yet the complexity of one level seems independent of the complexity of the other – there is no obvious one-to-one correlation. (Examples: it can take a combination of genes to express a single trait; there is not a specific 'home' in the brain for specific memories.)

I guess it’s this specific revelation that I personally take from Hofstadter’s book, but I really can’t do it justice. It is one of the best books I’ve read, even though I don’t agree with his overall thesis: machines will eventually think like humans, therefore they will have consciousness.

In my one and only published novel, ELVENE, there is an AI entity, Alfa, who plays an important role in the story. I was very careful in my construction of Alfa to demonstrate that he didn’t think like humans (yes, I gave him a gender and that’s explained) but that he was nevertheless extremely intelligent and able to converse with humans with cognitive ease. But I don’t believe Alfa was conscious albeit he may have given that impression (this is fiction, remember). I agree with Searle, in that simulated intelligence at a very high level will be achievable, but it will remain a simulation. AI uses algorithms and brains don’t – on this, I agree with Penrose. On the other hand, Hofstadter argues that we use rule-based software in the form of ‘symbols’, which we call language. I’m sure whoever reads this will have their own opinions.


Addendum 1: I've just read (today, 21 Feb.09) an article in Scientific American (January 2009) that tackles the subject: From Atoms to Traits. It points out that there is good correlation between genes and traits, and expounds on the latest knowledge in this area. In particular, it gives a good account (by examples) of how random changes 'feed' the natural selection 'engine' of evolution. I admit that there is still much to be learned, but, if you follow this topic at all, you will know that discoveries and insights are being made all the time. The mystery of how genes evolved, as opposed to the organisms that they create, is still unsolved in my view. Martin A. Nowak, a Harvard University mathematician and biologist, profiled in Scientific American (October 2008), believes the answer may lie in mathematics: can mathematics solve the origin of life? This is an idea also hypothesised by Gregory J. Chaitin in his book, Thinking about Godel and Turing, which I review in my Jan.08 post: Is mathematics evidence of a transcendental realm?

Addendum 2: I changed the title to more accurately reflect the content of the post.

Saturday, 19 July 2008

Epistemology; a discussion

Recently (1 July) I wrote a post on The Mirror Paradox, which arose from my reading of Umberto Eco’s book, Kant and the Platypus back in 2002. The post was an edited version of part of a letter I wrote to Eco; the rest of the letter was to do with epistemology, and that is the source of this post.

Some people think that because we can’t explain something, either it is wrong or it doesn’t exist. Two examples from opposite sides of philosophy (materialism and fundamentalist religion) illustrate this point very clearly. In a previous post, The Ghost in the Machine (Apr.08), I reviewed an article in SEED magazine (Henry Markram’s Blue Brain project). In the same magazine, there is an essay by Nicholas Humphrey on the subject of consciousness. Effectively, he writes a page-length treatise arguing that consciousness must be an illusion because we have no explanation for it. This is despite the fact that he, and everyone he meets in life, experiences consciousness every day. Humphrey’s argument, in synopsis, is that it is easier to explain consciousness as an illusion than as reality, therefore it must be an illusion. Personally, I would like to know how he distinguishes dreaming from living, or even if he can (please refer Addendum below, 4 April 2010). Another example, from the polar opposite side of rational thinking, is evolution. Fundamentalist Christians tend to think that, because we can’t explain every single aspect of evolution, it can be challenged outright as false. This is driven, of course, by a belief that it is false by Divine proclamation, so any aspect of the theory that is proven true, of which there is evidence at all levels of biology, is pure serendipity. (Refer my Nov.07 post, Is evolution fact? Is creationism myth?)

I’m making a fundamental epistemological point: we don’t understand everything. Another excellent example is quantum mechanics (see The Laws of Nature, Mar.08), where I quote Richard Feynman, probably the world’s best known expert on quantum mechanics (he had a Nobel Prize to prove it), and arguably its best expositor, who said quite categorically in his book, QED, ‘…I don’t understand it. Nobody does.’ There is nothing that makes less sense than quantum mechanics, yet it is arguably the most successful scientific theory of all time. Historically, we’ve always believed that we almost know everything, and Feynman was no less optimistic, believing that we would one day know all physics. But, if history is any indication of the future, I choose to differ. In every avenue of scientific endeavour – biology, cosmology, quantum theory, neuroscience – there are enormous gaps in our knowledge with mysteries begging inquiry, and, no doubt, behind those mysteries lies a whole gallery of future mysteries yet to be discovered.

None of this was in the letter I wrote to Umberto Eco, but it seems like a good starting point: we don’t know everything, we never have and we probably never will. The only thing we can say with confidence is that we will know more tomorrow than we know today, and that is true for all the areas I mentioned above. As I’ve already said in previous posts: only future generations can tell us how ignorant the current generation is.

Actually, this is not so far removed from Eco’s introduction in Kant and the Platypus, where he hypothesises on the limits of our ability to comprehend the universe, which may include metaphysical elements like God. He postulates 4 hypotheses based on matching items of knowledge (symbols) with items of physical entities (elements), which he calls, for convenience’s sake, 'atoms', and various combinations of these. As a corollary to this approach, he wonders if the graininess of the universe is a result of our language rather than an inherent feature of it, as all the hypotheses require segmentation rather than a continuum.

I won’t discuss Eco’s hypotheses, only mention them in passing, as I take a different approach. For a start, I would use ‘concept’ instead of ‘symbol’ or ‘atom,’ and ‘phenomena’ instead of ‘elements’. It’s not that I’m taking explicit issue with Eco’s thesis, but I choose a different path. I define science as the study of natural phenomena in all their manifestations, which is really what one is discussing when one questions the limits of our ability to comprehend the physical universe. Secondly, it is becoming more and more apparent that it is mathematics rather than language that is determining our ability to comprehend the universe – a philosophical point I’ve already discussed in 2 posts: Is mathematics evidence of a transcendental realm? (Jan.08) and The Laws of Nature (Mar. 08).

Some people argue that mathematics is really just another language, but I would contend that this is a serious misconception of the very nature of mathematics. As Feynman points out in his book, The Character of Physical Law, translating mathematical ideas into plain English (or any other verbal language) is not impossible (he was a master at it) but it’s quite different to translating English into, say, French. To describe mathematics in plain language requires the realisation of concepts and the use of analogies and examples. Mathematics is inherently paradoxical, because it is conceptually abstract, yet it can be applied to the real world in diverse and infinitely numerous ways. Whereas plain language starts with descriptors of objects (nouns) which are then combined with other words (including verbs) that allow one to communicate actions, consequences, histories and intentions; you could argue that mathematics starts with numbers. But numbers are not descriptors – a number is a concept – they are like seeds that have infinite potential to describe the world in a way that is distinctly different to ordinary language.

Nevertheless, Eco has a point, concerning the limits of language, and one may rephrase his question in light of my preceding dissertation: is it our use of number that projects graininess onto the universe? This question has a distinctly Kantian flavour. One of the problems I had with Kant (when I studied him) was his own ‘Copernican revolution’ (his terminology) that we project our models of reality onto the world rather than the converse. As a standalone statement, this is a reasonable assertion, and I will return to it later, but where I disagreed, was his insistence that time and space are projections of the human mind rather than a reality that we perceive.

I truly struggled to see how this fitted in with the rest of his philosophy which I find quite cogent. In particular, his idea of the ‘thing-in-itself’, which essentially says that we may never know the real essence of something but only what we perceive it to be. (I think this is Kant's great contribution to philosophy.) He gave the example of colour, which, contrary to many people’s belief, is a purely psychological phenomenon. It is something that only happens inside our minds. Some animals can’t see in colour at all and some animals see colours that we don’t, for example, in the ultra-violet range. Some animals, that use echo-location, like bats, dolphins and whales, probably see in ultra-sound. It would be hypothetically possible for some creatures to see in radar, if they ever evolved the ability to transmit radar signals. But, more significantly, our discoveries in quantum mechanics and relativity theory, are proof that what we perceive as light and as time respectively are not necessarily what they really are, depending on what level of nature we examine. This leads to another aspect of epistemology that I will return to later – I don’t want to get too far off the track.

In fact, relativity theory tells us that time and space are inherent features of the universe, and, again, it is only through mathematics that we can decipher the enigma that is relativity, as well as quantum phenomena. But we don't need relativity theory to challenge Kant's thesis on the nature of space and time. We sense time and space through our eyes (our eyes effectively act like a clock that determines how fast the world passes us by) and, again, this is different for different species. Many birds and insects see the world in slow motion compared to us because their eyes perceive the world in more ‘frames per second’ than we do (for us I think it’s around 24). The point is, contrary to Kant’s assertion, if our senses didn’t perceive the reality of space and time, then we would not be able to interact with the world at all. We would not even be able to walk outside our doors.

I once had an argument with a professor in linguistics, who claimed that 3-dimensional Cartesian axes are a human projection, and therefore all our mathematical interpretations, including relativity, based on Riemannian geometry (which is curved), are also projections. The fact is that we live in a 3-dimensional spatial world and if we lived in a higher dimensional spatial world, our mathematical interpretation of it would reflect that. In fact, mathematically, we can have as many-dimensional worlds as we like, as string theory demonstrates. Einstein’s genius was to appreciate that gravity made the geometry of the universe Riemannian rather than Euclidean, but, at the scale we observe it, it’s not noticeable, in the same way that we can survey our little blocks of land as if they are flat rather than curved, even though we know the earth’s surface is really a sphere.

After all that, I haven’t answered the question: is the perceived graininess of the universe a result of our projection or not? One of the consequences of Kant’s epiphany, concerning the thing-in-itself, is that it seems to change according to the level of nature we observe it at. The example I like to give is the human body, which is comprised of individual cells. If one examines an individual cell there is no way we could appreciate the human body of which it is a part. At an even smaller scale we can examine its DNA, which is what determines how the human body will eventually turn out. The DNA is actually like a code, only it’s more than an analogy, it really is a code; it contains all the instructions on how to construct the creature it represents. So what is the thing-in-itself? Is it the genome? Is it the fully grown adult body? Humans are the only species that we know of who have the ability to conceptualise this, and, therefore, are able to comprehend at least some of the machinations of the natural world. And this, I believe, lies at the heart of Eco’s introductory hypotheses. It’s not to do with matching symbols with elements, or combinations thereof, but matching concepts with phenomena, and, more significantly, concepts within concepts, and phenomena that emerge from other phenomena.

Many people talk about the recursive ability of the human brain, which is to hold multiple relationships within one’s mind, like my friend’s mother’s lover has a cat with an injured foot. I understand that 5 is the norm, after which we tend to lose the thread. In which case, I ask: how can we follow a story, or even an argument, like the one I’m writing now? In another post (Imagination, Mar.08) I suggest that maybe it was storytelling that originally developed this aspect of our intellectual ability. We tend to think of words as being the ‘atoms’ of a story, but, as a writer of fiction, I know better, as I will explain shortly. Individual words do have a meaning of their own, but, as Wittgenstein pointed out, it is only in the context of a sentence that the true meaning is apparent. In fact, it is the sentence, or phrase, that has meaning rather than the individual words, as I’m demonstrating right now. But it really requires a string of sentences, and a lengthy one at that, to create an argument or a story. The shortest component of a story is actually a scene, and a scene is usually delineated by a break in time or location at its beginning and its end. But, of course, we don’t keep all the scenes in our memory for the course of the story, which may unfold over a period of days, so how do we do it?

Well, there is a thread (often times more than one) which usually involves a character, and we live the thread in the moment just like we do with our lives. It’s like when we are in contact with that thread we have the entire thread in our mind yet we are only interested in its current location in time and space. The thread allows us to pull out memories of it, make associations, into the past and future. This is the really extraordinary attribute of the human brain. I’ve no doubt that other animals have threads as well, but I doubt they have the same ability as we do. It is our ability to make associations that determines almost everything intellectually about us, including our ability to memorise and learn. It is only when we integrate new knowledge into existing knowledge that we actually learn it and understand it. To give an example, again, from Wittgenstein, if you come across a new word, you can only comprehend it when it is explained in terms of words you already know. In a story, we are continually integrating new information into existing information, yet we don’t see it as learning; we see it as entertainment. How clever is that?

I argue that recursiveness in the human brain is virtually limitless because, like the cells and the human body, we can conceptualise concepts within concepts ad infinitum, as we do in mathematics. For example, calculus requires the manipulation of infinite elements, yet we put them all into one function, so we don’t even have to think of an infinite number of elements, which, of course, would be impossible.
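
One simple illustration of my own: a definite integral bundles an infinite collection of ever-shrinking contributions into a single closed expression,

    \int_0^1 x^2 \, dx \;=\; \lim_{n \to \infty} \sum_{i=1}^{n} \left(\frac{i}{n}\right)^{2} \frac{1}{n} \;=\; \frac{1}{3}

and we manipulate the left-hand side without ever having to hold the infinitely many terms of the sum in mind.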

I’ve made the point in other posts, that the reason we comprehend the universe to the extent that we do is because we have this ability to perceive concepts within concepts and the universe is made up of elements within elements, where the individual element often has nothing in common with the larger element of which it is a part, so graininess is not the issue. I don’t believe this is a projection; I believe that this is an inherent attribute of the entire universe, and the only reason we can comprehend it, in the esoteric way we do, is because we are lucky enough to have the innate ability to perform the same trick mentally (see The Universe's Interpreters, Sep.07).

I’ve almost exhausted this subject, but I want to say something about schemas. I mentioned, earlier in this essay, Kant’s assertion that we project our ideas, or models, onto the universe in order to comprehend it. I discuss this as well in The Laws of Nature, but in a different context. Eco also talked about schemas, and while he said it was different to the psychological term, I will attempt to use it in the same sense as it is used in psychology. The best description I can give of a schema is a template that we apply to new experiences and new knowledge. We even have a schema for the self, which we employ, subconsciously, when we assess someone we meet.

I argue that the brain is a contextual instrument in that it axiomatically looks for a context when it encounters something new, or will even create one where one doesn’t readily exist. By this I mean we always try and understand something on the basis of what we already know. To give an example, taken from Eco’s book, when Europeans first saw a platypus they attempted to categorise it as a mammal or a reptile (it lays eggs). But, if I were a European, or from the northern hemisphere, I would probably think it was a type of otter or beaver, assuming I was familiar with otters and beavers, because it is air-breathing yet it spends most of its time in river water or underground. Another example: assume you had never seen a man on a horse, but mythically you had seen pictures of centaurs, so the first time you saw a mounted man you might assume it was all one animal.

My point is that we apply schemas to everything we meet and perceive, often subconsciously, and when we become more familiar with the new experience, phenomenon or knowledge, we adjust our schema or create a new one, which we then apply to the next new experience, phenomenon or whatever.

There is a logical connection here, to what I suggested earlier, that we only understand new knowledge when we integrate it into existing knowledge. A schema is a consequence of existing experiences and knowledge, so cognitively it's the same process. The corollary to this is that when we encounter something completely alien, we need a new schema altogether (not unlike Kuhn's paradigm-shift).

I read recently in New Scientist (31 May 2008) that someone (Karl Friston) had come up with a Bayesian interpretation of the brain (using Bayesian probability), at all levels, including neurons (they strengthen connections based on reinforced signals). The brain makes predictions, then adjusts its predictions based on what it senses in a reiterative process. He gives the everyday example of seeing something out of the corner of your eye, then turning your head to improve your prediction.
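
As a toy sketch of that predict-then-adjust loop (my own illustration with made-up numbers, not Friston's actual model), imagine the brain holding a degree of belief about what the half-seen object is, and revising it with each successive glimpse:

    # A toy Bayesian update loop: a 'brain' holds a prior belief that a half-seen
    # object is a cat, then revises that belief with each new (noisy) glimpse.
    # My own illustration of the predict-then-adjust principle, not Friston's model.

    def update(prior: float, p_if_cat: float, p_if_not: float) -> float:
        """Return the posterior P(cat | glimpse) via Bayes' rule."""
        evidence = p_if_cat * prior + p_if_not * (1 - prior)
        return p_if_cat * prior / evidence

    belief = 0.5  # initial assessment: 50/50 that it's a cat
    glimpses = [
        (0.8, 0.3),  # corner of the eye: quite cat-like
        (0.7, 0.4),  # second glance: somewhat cat-like
        (0.9, 0.2),  # head turned: clearly cat-like
    ]
    for p_if_cat, p_if_not in glimpses:
        belief = update(belief, p_if_cat, p_if_not)
        print(f"belief that it's a cat: {belief:.2f}")

Each posterior becomes the prior for the next glimpse, which is the reiterative process Friston describes, only stripped of all its neurological subtlety.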

Schemas, their interaction with the world and our modification of them accordingly, is such a reiterative process, only on a different scale. Previously, I've talked about the dialectic in science between theory and observation, or theory and experimentation, which is another example of the same process, albeit at another level altogether and performed in a more disciplined manner.

This is where I should write a conclusion, but I think I already have.

Having completed this essay, it has little resemblance to my letter to Umberto Eco in 2002, in either content or style, but some ideas and some arguments are the same.


Addendum (4 April 2010): I may have misrepresented Nicholas Humphrey - please read the addendum to my post Consciousness explained (3 April 2010).

Friday, 11 April 2008

The Ghost in the Machine

One of my favourite Sci-Fi movies, amongst a number of favourites, is the Japanese anime, Ghost in the Shell, by Mamoru Oshii. Made in 1995, it’s a cult classic and appears to have influenced a number of sci-fi storytellers, particularly James Cameron (Dark Angel series) and the Wachowski brothers (Matrix trilogy). It also had a more subtle impact on a lesser known storyteller, Paul P. Mealing (Elvene). I liked it because it was not only an action thriller, but it had the occasional philosophical soliloquy by its heroine concerning what it means to be human (she's a cyborg). It had the perfect recipe for sci-fi, according to my own criterion: a large dose of escapism with a pinch of food-for-thought. 

But it also encapsulated two aspects of modern Japanese culture that are contradictory by Western standards. These are the modern Japanese fascination with robots, and their historical religious belief in a transmigratory soul, hence the title, Ghost in the Shell. In Western philosophy, this latter belief is synonymous with dualism, famously formulated by Rene Descartes, and equally famously critiqued by Gilbert Ryle. Ryle was contemptuous of what he called, ‘the dogma of the ghost in the machine’, arguing that it was a category mistake. He gave the analogy of someone visiting a university and being shown all the buildings: the library, the lecture theatres, the admin building and so on. Then the visitor asks, ‘Yes, that’s all very well, but where is the university?’ According to Ryle, the mind is not an independent entity or organ in the body, but an attribute of the entire organism. I will return to Ryle’s argument later. 

In contemporary philosophy, dualism is considered a non sequitur: there is no place for the soul in science, nor ontology apparently. And, in keeping with this philosophical premise, there are a large number of people who believe it is only a matter of time before we create a machine intelligence with far greater capabilities than humans, with no ghost required, if you will excuse the cinematic reference. Now, we already have machines that can do many things far better than we can, but we still hold the upper hand in most common sense situations. The biggest challenge will come from so-called ‘bottom-up’ AI (Artificial Intelligence), which will be self-learning machines, computers, robots, whatever. But, most interesting of all, is a project, currently in progress, called the ‘Blue Brain’, run by Henry Markram in Lausanne, Switzerland. Markram’s stated goal is to eventually create a virtual brain that will be able to simulate everything a human brain can do, including consciousness. He believes this will be achieved in 10 years’ time or less (others say 30). According to him, it’s only a question of grunt: raw processing power. (Reference: feature article in the American science magazine, SEED, 14, 2008)

For many people, who work in the field of AI, this is philosophically achievable. I choose my words carefully here, because I believe it is the philosophy that is dictating the goal and not the science. This is an area where the science is still unclear if not unknown. Many people will tell you that consciousness is one of the last frontiers of science. For some, this is one of 3 remaining problems to be solved by science; the other 2 being the origin of the universe and the origin of life. They forget to mention the resolution of relativity theory with quantum mechanics, as if it’s destined to be a mere footnote in the encyclopaedia of complete human knowledge. 

There are, of course, other philosophical points of view, and two well known ones are expressed by John Searle and Roger Penrose respectively. John Searle is most famously known for his thought experiment of the ‘Chinese Room’, in which someone sits in an enclosed room receiving questions, in Chinese, through an 'in box' and, by following specific instructions (in English in Searle's case), provides answers in Chinese that they issue through an 'out box'. The point is that the person behaves just like a processor and has no knowledge of Chinese at all. In fact, this is the perfect description of a ‘Turing machine’ (see my post, Is mathematics evidence of a transcendental realm?) only instead of tape going through a machine you have a person performing the instructions in lieu of a machine.
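
A crude sketch of what the room reduces to (my own toy code, with placeholder phrases, not Searle's example questions): a rule book that is nothing more than a lookup table, and a rule-follower who understands none of it:

    # A toy 'Chinese Room': the rule book is just a lookup table mapping input
    # symbols to output symbols. The rule-follower applies it mechanically and
    # understands nothing. The phrases are placeholders of my own invention.

    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
        "天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "It's nice today."
    }

    def chinese_room(question: str) -> str:
        """Follow the rule book; 'understanding' never enters the process."""
        return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))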

The Chinese Room actually had a real-world counterpart: not many people know that, before we had computers, small armies of people (usually women) would be employed to perform specific but numerous computations for a particular project, with no knowledge of how their specific input fitted into the overall execution of said project. Such a group was employed at Bletchley Park, where Turing worked during WWII, on the decoding of Enigma transmissions. These people were called 'computers', and Turing was instrumental in streamlining their analysis. However, according to Turing's biographer, Andrew Hodges, Turing did not develop an electronic computer at Bletchley Park, as some people believe, and he did not invent the Colossus, or Colossi, that were used to break another German code, the Lorenz, '...but [Turing] had input into their purpose, and saw at first-hand their triumph.' (Hodges, 1997).

Penrose has written three books that I'm aware of addressing the question of AI (The Emperor's New Mind, Shadows of the Mind and The Large, the Small and the Human Mind), and Turing's work is always central to his thesis. In the last book listed, Penrose invites others to expound on alternative views: Abner Shimony, Nancy Cartwright and Stephen Hawking. Of the three, Shadows of the Mind is the most intellectually challenging, because he is so determined not to be misunderstood. I have to say that Penrose always comes across as an intellect of great ability, but also great humility – he rarely, if ever, shows signs of hubris. For this reason alone, I always consider his arguments with great respect, even if I disagree with his thesis. To quote the I Ching: 'he possesses as if he possessed nothing.'

Penrose's predominant thesis, based on Gödel's and Turing's proofs (which I discuss in more detail in my post, Is mathematics evidence of a transcendental realm?), is that the human mind, or any mind for that matter, cannot possibly run on algorithms, which are the basis of all Turing machines. Penrose's conclusion, then, is that the human mind is not a Turing machine. More importantly, in anticipation of a further development of this argument, algorithms are synonymous with software, and the original conceptual Turing machine, which Turing formulated in his 'halting problem' proof, is really about software. The Universal Turing machine is effectively software: a machine that can duplicate any other Turing machine, given the correct instructions, which is exactly what software does.
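As an illustration of that last point, here is a minimal sketch in Python of a Turing-machine interpreter (not Turing's original formalism, and the 'flip the bits' program is just an invented example). The interpreter itself never changes; the transition table it is handed plays the role of the software, which is the sense in which a universal machine 'duplicates' any other machine.

# A minimal Turing machine interpreter: the interpreter is fixed, and the
# transition table it is given is, in effect, the software it runs.

def run_turing_machine(rules, tape, state="start", halt="halt", blank="_", max_steps=1000):
    """Run a transition table of the form (state, symbol) -> (new_state, write, move)."""
    tape = dict(enumerate(tape))  # sparse tape, indexed by position
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, blank)
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# An invented 'program': flip every bit on the tape, then halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001_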

To return to Ryle, he has a pertinent point in the analogy of the university and the mind that I referred to earlier; it's to do with a generic phenomenon observed throughout many levels of nature, which we call 'emergence'. The mind is an emergent property, or attribute, that arises from the activity of a very large number of neurons (tens of billions, with trillions of connections between them), in the same way that the human body is an emergent entity that arises from a similarly large number of cells. Some people even argue that classical physics is an emergent property that arises from quantum mechanics (see my post on The Laws of Nature). In fact, Penrose contends that these two mysteries may be related (he doesn't use the term emergent), and he proposes a view that the mind is the result of a quantum phenomenon in our neurons. I won't relate his argument here, mainly because I don't have Penrose's intellectual nous, but he expounds upon it in both Shadows of the Mind and The Large, the Small and the Human Mind; the second being far more accessible than the first.

The reason that Markram, and many others in the AI field, believe they can create an artificial consciousness is that, if consciousness is an emergent property of neurons, then all they have to do is create artificial neurons and consciousness will follow. This is what Markram is doing, only his neurons are virtual neurons. Markram has 'mapped' the neurons from a thin slice of a rat's brain into a supercomputer, and when he 'stimulates' his virtual neurons with an electrical impulse, it creates a pattern of 'firing' activity just like we would expect to find in a real brain. On the face of it, Markram seems well on his way to achieving his goal.
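For readers who want a concrete picture of what a 'virtual neuron' might look like, here is a toy sketch in Python. It uses a crude leaky integrate-and-fire rule of my own choosing, which is nothing like the biologically detailed models Blue Brain actually uses; it merely illustrates the general idea of simulated neurons producing a firing pattern when stimulated.

import random

# Toy 'virtual neurons': leaky integrate-and-fire units driven by an input
# impulse. This is vastly simpler than Blue Brain's detailed neuron models;
# it only illustrates simulated neurons 'firing' when stimulated.

def simulate(n_neurons=5, steps=50, threshold=1.0, leak=0.9, impulse_at=10):
    potentials = [0.0] * n_neurons
    spikes = []                                    # (time step, neuron index) pairs
    for t in range(steps):
        drive = 0.6 if t == impulse_at else 0.0   # the external 'electrical impulse'
        for i in range(n_neurons):
            potentials[i] = potentials[i] * leak + drive + random.uniform(0.0, 0.1)
            if potentials[i] >= threshold:         # the neuron fires and resets
                spikes.append((t, i))
                potentials[i] = 0.0
    return spikes

print(simulate())  # a crude 'firing pattern', nothing like a real raster plot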

But there are two significant differences between Markram's model (if I understand it correctly) and the real thing. All attempts at AI, including Markram's, require software, yet the human brain, or any other brain for that matter, appears to have no software at all. Some people might argue that language is our software, and, from a strictly metaphorical perspective, that is correct. But we don't seem to have any 'operational' software, and, if we do, the brain must somehow create it itself. So, if we have a 'software', it is self-generated by the neurons themselves. Perhaps this is what Markram expects to find in his virtual neuron brain, but his virtual neuron brain already is software (if I interpret the description given in SEED correctly).

I tend to agree with some of his critics, like Thomas Nagel (quoted in SEED), that Markram will end up with a very accurate model of a brain's neurons, but he still won't have a mind. 'Blue Brain', from what I can gather, is effectively a software model of the neurons of a small portion of a rat's brain, running on four supercomputers comprising a total of 8,000 IBM microchips. And even if he can simulate the firing pattern of his neurons to duplicate the rat's, I suspect it would take further software to turn that simulation into something concrete, like an action or an image. As Markram says himself, it would just be a matter of massive correlation, and using the supercomputer to reverse the process. So he will, theoretically, and in all probability, be able to create a simulated action or image from the firing of his virtual neurons, but will this constitute consciousness? I expect not, but others, including Markram, expect it will. He admits himself that, if he doesn't get consciousness after building a full-scale virtual model of a human brain, it would raise the question: what is missing? Well, I would suggest that what would be missing is life, which is the second fundamental difference that I alluded to in the preceding paragraph but didn't elaborate on.

I contend that even simple creatures, like insects, have consciousness, so you shouldn’t need a virtual human brain to replicate it. If consciousness equals sentience, and I believe it does, then that covers most of the animal kingdom. 

So Markram seems to think that his virtual brain will not only be conscious, but also alive – it's very difficult to imagine one without the other. And this, paradoxically, brings one back to the ghost in the machine. Despite all the reductionism and scientific ruminations of the last century, the mystery still remains. I'm sure many will argue that there is no mystery: when your neurons stop firing, you die – it's that simple. Yes, it is, but why are life, consciousness and the firing of neurons so concordant and so co-dependent? And do you really think a virtual neuron model will also exhibit both these attributes? Personally, I think not. And to return to cinematic references: does that mean, as with HAL in Arthur C. Clarke's 2001: A Space Odyssey, that when someone pulls the plug on Markram's 'Blue Brain', it dies?

In a nutshell: nature demonstrates explicitly that consciousness is dependent on life, and there is no evidence that life can be created from software, unless, of course, that software is DNA.