Paul P. Mealing


Thursday, 22 November 2018

The search for ultimate truth is unattainable

Someone lent me a really good philosophy book called Ultimate Questions by Bryan Magee. To quote directly from the back flyleaf: “Bryan Magee has had an unusually multifaceted career as a professor of philosophy, music and theatre critic, BBC broadcaster and member of [British] Parliament.” It so happens I have another of his books, The Story of Philosophy, which is really a series of interviews with philosophers about philosophers, and I expect it’s a transcription of his radio broadcasts. Magee was over 80 when he wrote Ultimate Questions, which must have been prior to 2016, when the book was published.

This is a very thought-provoking book, which is what you'd expect from a philosopher. To a large extent, and to my surprise, Magee and I have come to similar positions on fundamental epistemological and ontological issues, albeit by different paths. However, there is also a difference, possibly a divide, which I’ll come to later.

Where to start? I’ll start at the end because it coincides with my beginning. It’s not a lengthy tome (120+ pages) and it comprises 7 chapters or topics, which are really discussions. In the last chapter, Our Predicament Summarized, he emphasises his view of an inner and an outer world, both of which elude full comprehension, a view he’s spent the best part of the book elaborating.

As I’ve discussed previously, the inner and outer worlds are effectively the starting point for my own world view. The major difference between Magee and myself is the path each of us has taken. Mine has been a scientific one, in particular the science of physics, encapsulating as it does the extremes of the physical universe, from the cosmos to the infinitesimal.

Magee’s path has been through the empiricist philosophers, from Locke to Hume to Kant to Schopenhauer, eventually arriving at Wittgenstein. His most salient and persistent point is that our belief that we can comprehend everything there is to comprehend about the ‘world’ is a delusion. He tells an anecdotal story of when he was a student of philosophy and was told that the word ‘World’ comprised not only what we know but everything we can know. He makes the point, which many people fail to grasp, that there could be concepts beyond our grasp in the same way that there are concepts we do understand that are nevertheless beyond the comprehension of the most intelligent of chimpanzees or dolphins or any creature other than humans. None of these creatures can appreciate the extent of the heavens the way we can, or even the way our ancient forebears could. Astronomy has a long history. Even indigenous cultures, without the benefit of script, have learned to navigate long distances with the aid of the stars. We have a comprehension of the world that no other creature has (on this planet), so it’s quite reasonable to assume that there are aspects of our world that we can’t imagine either.

Because my path to philosophy has been through science, I have a subtly different appreciation of this very salient point. I wrote a post based on Noson Yanofsky’s The Outer Limits of Reason, which addresses this very issue: there are limits in logic, mathematics and science, and there always will be. But I’m under the impression that Magee takes this point further. He expounds, better than anyone else I’ve read, the idea that there are actual limits to what our brains can not only perceive but conceptualise, which leads to the possibility, ignored by most of us, that there are things beyond our ken, completely and always.

As Magee himself states, this opens the door to religion, which he discusses at length, yet he gives this warning: “Anyone who sets off in honest and serious pursuit of truth needs to know that in doing that he is leaving religion behind.” It’s a bit unfair to provide this quote out of context, as it comes at the end of a lengthy discussion; nevertheless, it’s the word ‘truth’ that gives his statement cogency. My own view is that religion is not an epistemology; it’s an experience. What’s more, it’s an experience (including the experience of God) that is unique to the person who has it and can’t be shared with anyone else. This puts individual religious experience at odds with institutionalised religions, and as someone pointed out (Yuval Harari, from memory), it means that the people who have religious experiences are all iconoclasts.

I’m getting off the point, but it’s relevant inasmuch as arguments involving science and religion have no common ground. I find them ridiculous because they usually involve pitting an ancient text (of so-called prophecy) against modern scientific knowledge and all the technology it has propagated, which we all rely upon for our day-to-day existence. If religion ever had an epistemological role, it has long been usurped.

On the other hand, if religion is an experience, it is part of the unfathomable which lies outside our rational senses, and is not captured by words. Magee contends that the best one can say about an afterlife or the existence of God is that ‘we don’t know’. He calls himself an agnostic, not just in the narrow sense relating to a Deity, but in the much broader sense of acknowledging our ignorance. He discusses these issues in much more depth than my succinct paraphrasing implies. He gives the example of music as something we experience that can’t be expressed in words. Many people have used music as an analogy for religious experience, but, as Magee points out, music has a material basis in instruments and a score and sound waves, whereas religion does not.

Coincidentally, someone today showed me a text on Socrates, from a much larger volume on classical Greece. Socrates famously proclaimed his ignorance as the foundation of his wisdom. In regard to science, he said: “Each mystery, when solved, reveals a deeper mystery.” This statement is so prophetic; it captures the essence of science as we know it today, some 2500 years after Socrates. It’s also the reason I agree with Magee.

John Wheeler conceived a metaphor that I envisaged independently of him. (Further evidence that I’ve never had an original idea.)

We live on an island of knowledge surrounded by a sea of ignorance.
As our island of knowledge grows, so does the shore of our ignorance.


I contend that the island is science and the shoreline is philosophy, which implies that philosophy feeds science, but also that they are inseparable. By philosophy, in this context, I mean epistemology.

To give an example that confirms both Socrates and Wheeler, the discovery and extensive research into DNA provides both evidence and a mechanism for biological evolution from the earliest life forms to the most complex; yet the emergence of DNA as providing ‘instructions’ for the teleological development of an organism is no less a mystery looking for a solution than evolution itself.

The salient point of Wheeler's metaphor is that the sea of ignorance is infinite, and so the island grows but is never complete. In his last chapter, Magee makes the point that truth (even in science) is something we progress towards without attaining. “So rationality requires us to renounce the pursuit of proof in favour of the pursuit of progress.” (My emphasis.) However, ‘the pursuit of proof’ is something we’ve done successfully in mathematics ever since Euclid. It is on this point that I feel Magee and I part company.

Like many philosophers, when discussing epistemology, Magee hardly mentions mathematics. Only once, as far as I can tell, towards the very end (in the context of the quote I referenced above about ‘proof’), does he include it in the same sentence as science, logic and philosophy as inherited from Descartes, where he has this to say: “It is extraordinary to get people, including oneself, to give up this long-established pursuit of the unattainable.” He is right in as much as there will always be truths, including mathematical truths, that we can never know (refer to my recent post on Gödel, Turing and Chaitin). But there are also innumerable (mathematical) truths that we have discovered and will continue to discover into the future (part of the island of knowledge). As Freeman Dyson points out, 'Mathematics is forever', whilst discussing the legacy of Srinivasa Ramanujan's genius. In other words, mathematical truths don't become obsolete in the way that science does.

I don’t know what Magee’s philosophical stance is on mathematics, but not giving it any special consideration tells me something already. I imagine, from his perspective, it serves no special epistemological role, except to give quantitative evidence for the validity of competing scientific theories.

In one of his earlier chapters, Magee talks about the ‘apparatus’ we have in the form of our senses and our brain that provide a limited means to perceive our external world. We have developed technological means to augment our senses; microscopes and telescopes being the most obvious. But we now have particle accelerators and radio telescopes that explore worlds we didn’t even know existed less than a century ago.

Mathematics, I would contend, is part of that extended apparatus. Riemann’s geometry allowed Einstein to perceive a universe that was ‘curved’ and Euler’s equation allowed Schrodinger to conceive a wave function. Both of these mathematically enhanced ‘discoveries’ revolutionised science at opposite ends of the epistemological spectrum: the cosmological and the subatomic.

Magee rightly points out our near-insignificance in both space and time as far as the Universe is concerned. We are figuratively like the blink of an eye on a grain of sand, yet reality has no meaning without our participation. In reference to the internal and external worlds that formulate this reality, Magee has this to say: “But then the most extraordinary thing is that the world of interaction between these two unintelligibles is rationally intelligible.” Einstein famously made a similar point: "The most incomprehensible thing about the Universe is that it’s comprehensible.”

One can’t contemplate that statement, especially in the context of Einstein’s iconic achievements, without considering the specific and necessary role of mathematics. Raymond Tallis, who writes a regular column in Philosophy Now, and for whom I have great respect, nevertheless downplays the role of mathematics. He once made the comment that mathematical Platonists (like me) 'make the error of confusing the map for the terrain.’ I wrote a response, saying: ‘the use of that metaphor infers the map is human-made, but what if the map preceded the terrain.’ (The response wasn’t published.) The Universe obeys laws that are mathematically in situ, as first intimated by Galileo, given credence by Kepler, Newton, Maxwell; then Einstein, Schrodinger, Heisenberg and Bohr.

I’d like to finish by quoting Paul Davies:

We have a closed circle of consistency here: the laws of physics produce complex systems, and these complex systems lead to consciousness, which then produces mathematics, which can then encode in a succinct and inspiring way the very underlying laws of physics that gave rise to it.

This, of course, is another way of formulating Roger Penrose’s 3 Worlds, and it’s the mathematical world that is, for me, the missing piece in Magee’s otherwise thought-provoking discourse.


Last word: I’ve long argued that mathematics determines the limits of our knowledge of the physical world. Science to date has demonstrated that Socrates was right: the resolution of one mystery invariably leads to another. And I agree with Magee that consciousness is a phenomenon that may elude us forever.

Addendum: I came across this discussion between Magee and Harvard philosopher, Hilary Putnam, from 1977 (so over 40 years ago), where Magee exhibits a more nuanced view on the philosophy of science and mathematics (the subject of their discussion) than I gave him credit for in my post. Both of these men take their philosophy of science from philosophers, like Kant, Descartes and Hume, whereas I take my philosophy of science from scientists: principally, Paul Davies, Roger Penrose and Richard Feynman, and to a lesser extent, John Wheeler and Freeman Dyson; I believe this is the main distinction between their views and mine. They even discuss this 'distinction' at one point, with the conclusion that scientists, and particularly physicists, are stuck in the past - they haven't caught up (my terminology, not theirs). They even talk about the scientific method as if it's obsolete or anachronistic, though again, they don't use those specific terms. But I'd point to the LHC (built decades after this discussion) as evidence that the scientific method is alive and well, and it works. (I intend to make this a subject of a separate post.)

Friday, 9 November 2018

Can AI be self-aware?

I recently got involved in a discussion on Facebook with a science fiction group on the subject of artificial intelligence. Basically, it started with a video claiming that a robot had demonstrated self-awareness, which is purportedly an early criterion for consciousness. But if you analyse what was actually achieved, it’s clever sleight-of-hand at best and pseudo self-awareness at worst. The sleight-of-hand is to turn fairly basic machine logic into an emotive gesture that fools humans (like you and me) into thinking the robot looks and acts like a human, which I’ll describe in detail below.

And it’s pseudo self-awareness in that it’s make-believe, in the same way that pseudo science is make-believe science, meaning pretend science, like creationism. We have a toy that we pretend exhibits self-awareness. So it is we who do the make-believe and pretending, not the toy.

If you watch the video you’ll see that they have 3 robots, which are given a ‘dumbing pill’ (meaning a switch is pressed) so they can’t talk. But one of them is not dumbed, and they are asked: “Which pill did you receive?” One of them dramatically stands up and says: “I don’t know”. But then it waves its arm and says, “I’m sorry, I know now. I was able to prove I was not given the dumbing pill.”

Obviously, the entire routine could have been programmed, but let’s assume it wasn’t. It’s a simple TRUE/FALSE logic test. The so-called self-awareness is a consequence of the test being self-referential – whether the robot can talk or not. It verifies that the statement is false because it hears its own voice. Notice the human-laden words like ‘hears’ and ‘voice’ (my anthropomorphic phrasing). Basically, the robot has a sensor to detect the sound that it makes itself, which logically determines whether the statement that it’s ‘dumb’ is true or false. It says, ‘I was not given a dumbing pill’, which means its sound was not switched off. Very simple logic.
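To make the point concrete, the whole test can be sketched in a few lines of Python. This is a toy reconstruction of my own – the class and method names are assumptions, not the actual robot’s code – but it shows how little machinery the ‘self-awareness’ requires:

```python
class Robot:
    """Toy robot: the 'dumbing pill' is just a muted speaker."""
    def __init__(self, dumbed):
        self.dumbed = dumbed
        self.last_sound = None

    def say(self, phrase):
        # A dumbed robot's speaker is off, so nothing is emitted.
        self.last_sound = None if self.dumbed else phrase

    def detected_own_sound(self):
        # The 'self-referential' sensor: did I just hear myself?
        return self.last_sound is not None

def dumbing_pill_test(robot):
    """TRUE/FALSE test: the robot speaks, then checks its own microphone."""
    robot.say("I don't know")
    heard_self = robot.detected_own_sound()
    if heard_self:
        # The statement "I was given the dumbing pill" is false.
        robot.say("Sorry, I know now. I was not given the dumbing pill.")
    return heard_self
```

The entire ‘awareness’ reduces to reading one boolean from a sound sensor.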

I found an on-line article by Steven Schkolne (PhD in Computer Science at Caltech), so someone with far more expertise in this area than me, yet I found his arguments for so-called computer self-awareness a bit misleading, to say the least. He talks about 2 different types of self-awareness (specifically for computers) – external and internal. An example of external self-awareness is an iPhone knowing where it is, thanks to GPS. An example of internal self-awareness is a computer responding to someone touching the keyboard. He argues that “machines, unlike humans, have a complete and total self-awareness of their internal state”. For example, a computer can find every file on its hard drive and even tell you its date of origin, which is something no human can do.
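Schkolne’s file example is worth making concrete, because it shows how mundane the underlying operation is. A minimal sketch using Python’s standard library (my own illustration; it assumes nothing about his implementation):

```python
import os
import time

def enumerate_files(root):
    """List every file under 'root' with a timestamp -- Schkolne's
    'complete internal self-awareness', which is really just a
    mechanical traversal of stored data."""
    listing = []
    for dirpath, _subdirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # getctime: creation time on Windows, metadata-change time
            # on Unix -- a rough stand-in for 'date of origin'
            listing.append((path, time.ctime(os.path.getctime(path))))
    return listing
```

Nothing in this traversal resembles awareness; it is data access, exhaustive only because the data is digital.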

From my perspective, this is a bit like the argument that a thermostat can ‘think’. ‘It thinks it’s too hot or it thinks it’s too cold, or it thinks the temperature is just right.’ I don’t know who originally said that, but I’ve seen it quoted by Daniel Dennett, and I’m still not sure if he was joking or serious.

Computers use data in a way that humans can’t and never will, which is why their memory recall is superhuman compared to ours. Anyone who can even remotely recall data like a computer is called a savant, like the famous ‘Rain Man’. The point is that machines don’t ‘think’ like humans at all. I’ll elaborate on this point later. Schkolne’s description of self-awareness for a machine has no cognitive relationship to our experience of self-awareness. As Schkolne says himself: “It is a mistake if, in looking for machine self-awareness, we look for direct analogues to human experience.” Which leads me to argue that what he calls self-awareness in a machine is not self-awareness at all.

A machine accesses data, like GPS data, which it can turn into a graphic of a map or just numbers representing co-ordinates. Does the machine actually ‘know’ where it is? You can ask Siri (as Schkolne suggests) and she will tell you, but he acknowledges that it’s not Siri’s technology of voice recognition and voice replication that makes your iPhone self-aware. No, the machine creates a map, so you know where ‘You’ are. Logically, a machine, like an aeroplane or a ship, could navigate over large distances with GPS with no humans aboard, like drones do. That doesn’t make them self-aware; it makes their logic self-referential, like the toy robot in my introductory example. So what Schkolne calls self-awareness, I call self-referential machine logic. Self-awareness in humans is dependent on consciousness: something we experience, not something we deduce.

And this is the nub of the argument. The argument goes that if self-awareness amongst humans and other species is a consequence of consciousness, then machines exhibiting self-awareness must be the first sign of consciousness in machines. However, self-referential logic coded into software doesn’t require consciousness; it just requires suitably programmed machine logic. I’m saying that the argument is back-to-front. Consciousness can certainly imbue self-awareness, but coding self-referential logic into a machine does not reverse the process and imbue it with consciousness.

I can extend this argument more generally to contend that computers will never be conscious for as long as they are based on logic. What I’d call problem-solving logic came late, evolutionarily, in the animal kingdom. Animals are largely driven by emotions and feelings, which I argue came first in evolutionary terms. But as intelligence increased so did social skills, planning and co-operation.

Now, insect colonies might seem to give the lie to this. They are arguably closer to how computers work, based on algorithms that are possibly chemically driven (I actually don’t know). The point is that we don’t think of ants and bees as having human-like intelligence. A termite nest is an organic marvel, yet we don’t think the termites actually plan its construction the way a group of humans would. In fact, some would probably argue that insects don’t even have consciousness. Actually, I think they do. But to give another well-known example, I think the dance that bees do is ‘programmed’ into their DNA, whereas humans would have to ‘learn’ it from their predecessors.

There is a way in which humans are like computers, which I think muddies the waters, and leads people into believing that the way we think and the way machines ‘think’ is similar if not synonymous.

Humans are unique within the animal kingdom in that we use language like software; we effectively ‘download’ it from generation to generation, and it limits what we can conceive and think about, as Wittgenstein pointed out. In fact, without this facility, culture and civilization would not have evolved. We are the only species (that we are aware of) that develops concepts and then manipulates them mentally, because we learn a symbol-based language that gives us that unique facility. But we invented this with our brains, just as we invent symbols for mathematics and symbols for computer software. Computer software is, in effect, a language; the parallel is more than an analogy.

We may be the only species that uses symbolic language, but so do computers. Note that computers are like us in this respect, rather than us being like computers. With us, consciousness is required first, whereas with AI, people seem to think the process can be reversed: if you create a machine logic language with enough intelligence, then you will achieve consciousness. It’s back-to-front, just as self-referential logic creating self-aware consciousness is back-to-front.

I don't think AI will ever be conscious or sentient. There seems to be an assumption that if you make a computer more intelligent it will eventually become sentient. But I contend that consciousness is not an attribute of intelligence. I don't believe that more intelligent species are more conscious or more sentient. In other words, I don't think the two attributes are concordant, even though there is an obvious dependency between consciousness and intelligence in animals. But it’s a one-way dependency; if consciousness were dependent on intelligence, then computers would already be conscious.


Addendum: The so-called Turing test is really a test for humans, not robots, as this 'interview' with 'Sophia' illustrates.