Earlier in the same year (last year) I read Roger Penrose’s book, Shadows of the Mind, which addresses exactly the same issue. What is interesting is that both authors use Gödel’s Incompleteness Theorem to support completely different, one could say opposing, philosophical viewpoints. Both Penrose and Hofstadter are intellectual giants compared to me, but what I find interesting is that both apparently start with their philosophical viewpoints and then find arguments to support them, rather than the other way round.
Having said all that, this is a very complex and difficult subject, and I’m not at all sure I can do it justice. What goes hand in hand with the subject of AI, and Hofstadter doesn’t shy away from this, is the notion of consciousness. Can AI ever be conscious in the way we are? Hofstadter says yes, and Penrose, I believe, would say no. (Penrose effectively argues that algorithm-using machines – computers – will never think like humans.) Another person who has much to say on this subject is John Searle, and he would almost certainly say no, based on his famous ‘Chinese Room’ thought experiment. (I expound on this in my Apr. 08 post, The Ghost in the Machine.)
Larry Niven, in one of his comments on his own blog, in response to one of my comments, made the observation that science hasn’t resolved the brain/mind conundrum, and gave it as an example of ‘…the impotence of scientific evidence to affect philosophical debates…’ (I’m sure if I’ve misinterpreted him, or quoted him out of context, he’ll let me know.)
To throw a googly into the mix: since Hofstadter first published the book 30 years ago, a lot of work has been done in this area, and one of the truly interesting ideas is the Bayesian model of the brain, based on Bayesian probability and proposed by Karl Friston (New Scientist, 31 May 08). In a nutshell, Friston proposes that the brain functions on the same principle at all levels, which is to make an initial assessment and then modify it in light of additional information. He claims this works at the level of individual neurons as well as at the cognitive level. (I report on this in my July 08 post titled Epistemology; a discussion.) I even extrapolate this up the cognitive tree to include the scientific method, whereby we hypothesise, follow up with experimentation or observation, then modify the hypothesis accordingly.
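To make that principle concrete, here is a minimal sketch in Python of a single Bayesian update – an initial assessment revised by new information. The hypotheses and numbers are invented purely for illustration; they are mine, not Friston’s.

```python
# A toy illustration of a single Bayesian update: start with a prior
# belief over competing hypotheses, then revise it when new evidence
# arrives. All names and numbers here are invented for illustration.

def bayes_update(prior, likelihood):
    """Return the posterior given a prior distribution and the
    likelihood of the observed evidence under each hypothesis."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Initial assessment: two rival hypotheses, weakly favouring 'A'.
belief = {"A": 0.6, "B": 0.4}

# New information arrives; suppose it is twice as likely under 'B'.
evidence_likelihood = {"A": 0.3, "B": 0.6}

belief = bayes_update(belief, evidence_likelihood)
print(belief)  # the belief has shifted towards 'B' (~0.57 vs ~0.43)
```

The posterior from one update becomes the prior for the next, so the same simple rule can be applied over and over – which, as I understand it, is the ‘same principle at all levels’ flavour of Friston’s proposal.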
Hofstadter makes a similar point about the ‘default options’ we use in everyday observations, like the way we use stereotypes. It’s only by evaluating a specific case in more detail that we can break away from a stereotypic interpretation of an event. This, too, is an application of the Bayesian principle – the stereotype acts as the prior, revised on closer inspection – but Hofstadter doesn’t say so, because the idea hadn’t been proposed at the time he wrote the book.
What Searle points out in his excellent book, Mind, is that consciousness is an experience so subjective that we really don’t know if anyone else experiences it the way we do – we only assume they do. Stephen Law writes about this in his book, The Philosophy Gym, and I challenged him (by snail mail at the time) that this was a conceit on his part, because he obviously expected that people who read his book could think like him, which means they must be conscious. It was a good-natured jibe, even though I’m not sure he saw it that way at the time, but he was generous in his reply.
Descartes’ famous statement, ‘I think therefore I am’, has been pilloried over the centuries since he wrote it, but I would contend that ‘I think’ is a tautology, because ‘I’ is your thoughts and nothing else. This gets to the heart of Hofstadter’s thesis: that we, individually, are all ‘strange loops’. Hofstadter employs Gödel’s Theorem in an unusual, analogous way to make this contention. By strange loop, Hofstadter means that we can effectively look at all the levels of our thinking except the ground level, which is our neurons. In between we have symbols – language – which we can discuss and analyse in a dispassionate way, just as I’m doing now. I can talk about my own thoughts and ideas as if they weren’t mine at all. Consciousness, in Hofstadter’s model (for want of a better word), is the top level, and neurons are the hardware level. In between we have the software (symbols), which is effectively language.
I think language as software is a good metaphor but not necessarily a literal interpretation. Software means algorithms, which are effectively instructions. Whilst language obviously contains rules, I don’t see it as particularly algorithmic, though others, including Hofstadter, may disagree. On the other hand, I do see DNA as algorithmic in the way it creates organisms, and Hofstadter makes the same leap of interpretation.
The analogy with Gödel’s Theorem is this: in any consistent formal system rich enough to express arithmetic, there will always exist a true mathematical statement that the system can express but cannot prove, if I’ve got it right. In other words, there will always exist a ‘correct’ mathematical statement that is not provable within the original formal system, which is why it is called the Incompleteness Theorem – no such formal system can ever be complete, in the sense of proving all the true statements it can express. In this analogy, the self or ‘I’ is like a Gödelian entity that is a product of the system but not contained in it. Again, my interpretation may not be what Hofstadter intended, but it’s the best I can make of it. It exists at another level, I think is what Hofstadter would say.
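For reference, the standard modern statement of the first theorem (the Gödel–Rosser form – my gloss, not Hofstadter’s wording) can be put like this:

```latex
% First Incompleteness Theorem (Gödel–Rosser form):
% if T is a consistent, effectively axiomatised formal system
% capable of expressing elementary arithmetic, then there exists
% a sentence G_T in the language of T such that
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T ,
\]
% i.e. T can neither prove nor refute G_T, so T is incomplete.
% Under the standard interpretation of arithmetic, G_T is true:
% the 'correct but unprovable' statement referred to above.
```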
In another part of the book, Hofstadter makes a direct ‘mapping’, which he calls a ‘dogmap’ (a play on words for dogma), where he compares DOGMA I ‘Molecular Biology’ with DOGMA II ‘Mathematical Logic’, treating Gödel’s Theorem’s ‘self-referencing’ as directly comparable to DNA/RNA’s ‘self-reproduction’. He admits this is an analogy, but later acknowledges that the same mapping may be possible from Gödel’s Theorem to consciousness.
Even without this allusion by Hofstadter, and with no Gödelian analogy required, I see a direct comparison between the way DNA/RNA creates complex organisms and the way neurons create thoughts. In both cases there is a gulf of layers in between that makes one wonder how they could have evolved. Of course, this is grist to the mill for intelligent design (ID) advocates, and I’ve even come across a blogger (Sophie) who quotes Hofstadter to make this very point.
In one of my earliest posts on this blog (The Universe’s Interpreters, Sep. 07) I make the point that the universe consists of worlds within worlds, and that the reason we can comprehend it to the extent that we do is because we can conjure concepts within concepts ad infinitum. Hofstadter makes a similar point, though not in the same words, at least two decades before I thought of it.
DNA/RNA exists at a level far removed from the end result, which is a living complex organism, yet there is a direct causal relationship. Neurons are cells that exist at a level far removed from the end result, which is consciousness, yet there is a direct causal relationship.
These two cases – DNA to complex organisms, and neurons to consciousness – I think remain the two greatest mysteries of the natural world. To say that they can only be explained by invoking a ‘Designer’ (God) is to say we’ve already uncovered everything we can know about the universe at all of its levels of complexity, and only God can explain everything else. I would call this the defeatist position if it were to be taken seriously. But, in effect, the ID advocates are saying that whilst any mysteries remain in our comprehension of the universe, there will always be a role for God. Once we find an explanation for these mysteries, there will be other mysteries, perhaps at other levels, that we can still employ God to explain. So the argument will never stop.
As a caveat to the above argument, I've said elsewhere (Emergent phenomena, Oct. 08) that we may never understand consciousness as a direct mathematical relationship to neuron activity (although Penrose pins his hopes on quantum phenomena). And I'm unsure that we will ever be able to explain how it becomes an experience, and that's one of the reasons I'm sceptical that AI will ever have that experience. But this lack of understanding is not evidence of God; it's just evidence of our lack of understanding.
To quote Confucius: 'To realise that you know something when you do, and to realise that you do not know when you do not, this is knowing.' Or to quote his near contemporary, Socrates, who put it more succinctly: 'The height of wisdom is to know how thoroughly ignorant we are.'
My personal hypothesis, completely speculative with no scientific evidence at all, is that maybe there is a feedback mechanism, going from the top level to the bottom level, that we’ve yet to discover. Both relationships are mysteries that most people don’t contemplate, and it took Hofstadter’s book, written over three decades ago, to bring them fully home to me and to make me appreciate how analogous they are: the base level causally affects the top level, yet the complexity of one level seems independent of the complexity of the other – there is no obvious one-to-one correlation. (Examples: it can take a combination of genes to express a single trait; there is no specific ‘home’ in the brain for specific memories.)

I guess it’s this specific revelation that I personally take from Hofstadter’s book, but I really can’t do it justice. It is one of the best books I’ve read, even though I don’t agree with his overall thesis: that machines will eventually think like humans, and will therefore have consciousness.
In my one and only published novel, ELVENE, there is an AI entity, Alfa, who plays an important role in the story. I was very careful in my construction of Alfa to demonstrate that he didn’t think like humans (yes, I gave him a gender, and that’s explained) but that he was nevertheless extremely intelligent and able to converse with humans with cognitive ease. But I don’t believe Alfa was conscious, albeit he may have given that impression (this is fiction, remember). I agree with Searle that simulated intelligence at a very high level will be achievable, but that it will remain a simulation. AI uses algorithms and brains don’t – on this I agree with Penrose. On the other hand, Hofstadter argues that we do use rule-based software in the form of ‘symbols’, which we call language. I’m sure whoever reads this will have their own opinions.