Paul P. Mealing


Saturday, 5 January 2019

What makes humans unique

Pretty well everyone agrees that there is no single thing that makes humans unique in the animal kingdom, but most would agree that our cognitive abilities leave even the most intelligent and social of species in our wake. I say 'most' because there are some, possibly many, who argue that humans are not as special as we like to think, and that there is really nothing we can do that other species can't. They point out that other species, if not all advanced species, have language; that many produce art to attract a mate; that some build structures (like ants and beavers); and that some even use tools (like apes and crows).

However, I find it hard to imagine that other species can think and conceptualise in a language the way we do, or even communicate complex thoughts and intentions using oral utterances alone. To give other examples, I know of no other species that tells stories, keeps track of days by inventing a calendar based on heavenly constellations (like the Mayans), or even thinks about thinking. And as far as I know, we are the only species that literally invents a complex language that we teach our children (it's not inherited) so that we can extend memories across generations. Even cultures without written scripts do this, using songs and dances and art. As someone said (John Hands in Cosmosapiens), we are the only species 'who know that we know'. Or, as I said above, we are the only species that 'thinks about thinking'.

Someone once pointed out to me that the only thing that separates us from all other species is the accumulation of knowledge, resulting in what we call civilization. He contended that over hundreds, even thousands, of years this had resulted in a huge gap between us and every other sentient creature on the planet. I pointed out to him that this only happened because we had invented the written word, based on languages, which allowed us to transfer memories across generations. Other species can teach their young certain skills that may not be genetically inherited, but none can accumulate knowledge over hundreds of generations like we can. His very point demonstrated the difference he was trying to deny.

In a not-so-recent post, I delineated my philosophical ruminations into 23 succinct paragraphs, covering everything from science and mathematics to language, morality and religion.  My 16th point said:



Humans have the unique ability to nest concepts within concepts ad-infinitum, which mirror the physical world.

In another post from 2012, in answer to a Question of the Month in Philosophy Now (How does language work?), I made the same point. (This is the only submission to Philosophy Now, out of 8 thus far, that didn't get published.)

I attributed the above 'philosophical point' to Douglas Hofstadter, because he says something similar in his Pulitzer Prize-winning book, Godel, Escher, Bach, but in reality I had reached this conclusion before reading it.

It’s my contention that it is this ability that separates us from other species and that has allowed all the intellectual endeavours we associate with humanity, including stories, music, art, architecture, mathematics, science and engineering.

I will illustrate with an example that we are all familiar with, yet many of us struggle to pursue at an advanced level. I’m talking about mathematics, and I choose it because I believe it also explains why many of us fail to achieve the degree of proficiency we might prefer.

With mathematics we learn modules which we then use as a subroutine in a larger calculation. To give a very esoteric example, Einstein’s general theory of relativity requires at least 4 modules: calculus, vectors, matrices and the Lorentz transformation. These all combine in a metric tensor that becomes the basis of his field equations. The thing is, if you don’t know how to deal with any one of these, you obviously can’t derive his field equations. But the point is that the human brain can turn all these ‘modules’ into black boxes and then the black boxes can be manipulated at another level.
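
To make the 'black box' idea concrete, here is a minimal sketch in Python (purely my own illustration, nothing to do with Einstein's actual derivation), where a Lorentz-factor 'module' is written once and then reused, unexamined, inside higher-level calculations:

    import math

    def lorentz_factor(v, c=299_792_458.0):
        # One 'module': gamma = 1 / sqrt(1 - v^2/c^2)
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    def time_dilation(proper_time, v):
        # A higher-level calculation that treats lorentz_factor as a black box
        return proper_time * lorentz_factor(v)

    def muon_lab_lifetime(v):
        # Another level up again: how long a muon 'lives' in the lab frame
        muon_rest_lifetime = 2.2e-6  # seconds (approximate rest-frame value)
        return time_dilation(muon_rest_lifetime, v)

    print(muon_lab_lifetime(0.98 * 299_792_458.0))  # roughly 1.1e-5 seconds

The point is not the physics but the nesting: each function can be used without re-opening the one below it.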

It’s not hard to see that we do this with everything, including writing an essay like I’m doing now. I raise a number of ideas and then try to combine them into a coherent thesis. The ‘atoms’ are individual words but no one tries to comprehend it at that level. Instead they think in terms of the ideas that I’ve expressed in words.

We do the same with a story, which becomes like a surrogate life for the time that we are under its spell. I’ve pointed out in other posts that we only learn something new when we integrate it into what we already know. And, with a story, we are continually integrating new information into existing information. Without this unique cognitive skill, stories wouldn’t work.

But more relevant to the current topic, the medium for a story is not words but the reader’s imagination. In a movie, we short-circuit the process, which is why they are so popular.

Because a story works at the level of imagination, it’s like a dream in that it evokes images and emotions that can feel real. One could imagine that a dog or a cat could experience emotions if we gave them a virtual reality experience, but a human story has the same level of complexity that we find in everyday life and which we express in a language. The simple fact that we can use language alone to conjure up a world with characters, along with a plot that can be followed, gives some indication of how powerful language is for the human species.

In a post I wrote on storytelling back in 2012, I referenced a book by Kiwi academic, Brian Boyd, who points out that pretend play, which we all do as children (though I suspect it’s now more likely done using a videogame console) gives us cognitive skills and is the precursor to both telling and experiencing stories. The success of streaming services indicates how stories are an essential part of the human experience.

While it’s self-evident that both mathematics and storytelling are two human endeavours that no other species can do (even at a rudimentary level) it’s hard to see how they are related.

People who are involved in computer programming, or writing code, are aware of the value, even necessity, of subroutines. Our own brain does this when we learn to do something without having to think about it, like walking. But we can do the same thing with more complex tasks, like driving a car or playing a musical instrument. The key point here is that these are all 'motor tasks', and we call the result 'muscle memory', as distinct from cognitive tasks. However, I expect it relates to cognitive tasks as well. For example, every time you say something, it's as if the sentence has been pre-formed in your brain. We use particular phrases all the time, which are analogous to 'subroutines'.

I should point out that this doesn’t mean that computers ‘think’, which is a whole other topic. I’m just relating how the brain delegates tasks so it can ‘think’ about more important things. If we had to concentrate every time we took a step, we would lose the train of thought of whatever it was we were engaged in at the time; a conversation being the most obvious example.

The mathematics example I gave is not dissimilar to the idea of a 'subroutine'. In fact, one can embed mathematical 'modules' in software, so it's more than an analogy. So with mathematics we've effectively achieved cognitively what the brain achieves with motor skills at the subconscious level. And look where it has got us: Einstein's general theory of relativity, which is the basis of all current theories of the Universe.

We can also think of a story in terms of modules. They are the individual scenes, which join together to form an episode, which in turn combine to create an overarching narrative that we can follow even when it's interrupted.

What mathematics and storytelling have in common is that they are both examples where the whole appears to be greater than the sum of its parts. Yet we know that in both cases, the whole is made up of the parts, because we ‘process’ the parts to get the whole. My point is that only humans are capable of this.

In both cases, we mentally build a structure that seems to have no limits. The same cognitive skill that allows us to follow a story in serial form also allows us to develop scientific theories. The brain breaks things down into components and then joins them back together to form a complex cognitive structure. Of course, we do this with physical objects as well, like when we manufacture a car or construct a building, or even a spacecraft. It’s called engineering.

Saturday, 22 December 2018

When real life overtakes fiction

I occasionally write science fiction, a genre I chose out of fundamental laziness. I knew I could write in that medium without having to do any research to speak of. I liked the idea of creating the entire edifice - world, story and characters - from my imagination, with no constraints except the bounds of logic.

There are many subgenres of sci-fi: extraterrestrial exploration, alien encounters, time travel, robots & cyborgs, inter-galactic warfare, genetically engineered life-forms; but most SF stories, including mine, are a combination of some of these. Most sci-fi can be divided into 2 broad categories – space opera and speculative fiction, sometimes called hardcore SF. Space operas, exemplified by the Star Wars franchise, Star Trek and Dr Who, generally take more liberties with the science part of science fiction.

I would call my own fictional adventures science-fantasy, in the mould of Frank Herbert’s Dune series or Ursula K Le Guin’s fiction; though it has to be said, I don’t compete with them on any level.

I make no attempt to predict the future, even though the medium seems to demand it. Science fiction is a landscape that I use to explore ideas in the guise of a character-oriented story. I discovered, truly by accident, that I write stories about relationships. Not just relationships between lovers, but between mother and daughter, daughter and father(s), protagonist and nemesis, protagonist and machine.

One of the problems with writing science fiction is that the technology available today seems to overtake what one imagines. In my fiction no one uses a mobile phone. I can see a future where people can just talk to someone in the ether, because they can connect in their home or in their car, without a device per se. People can connect via a holographic form of Skype, which means they can have a meeting with someone in another location. We are already doing this, of course, and variations on this theme have been used in Star Wars and other space operas. But most of the interactions I describe are very old fashioned face-to-face, because that's still the best way to tell a story.

If you watch (or read) crime fiction you’ll generally find it’s very suspenseful with violence not too far away. But if you analyze it, you’ll find it’s a long series of conversations, with occasional action and most of the violence occurring off-screen (or off-the-page). In other words, it’s more about personal interactions than you realise, and that’s what generally attracts you, probably without you even knowing it.

This is a longwinded introduction to explain why I am really no better qualified to predict future societies than anyone else. I subscribe to New Scientist and The New Yorker, both of which give insights into the future by examining the present. In particular, I recently read an article in The New Yorker (Dec. 17, 2018) by David Owen, called Here's Looking At You, about facial recognition, which is already being used by police forces in America to target arrests without any transparency. Mozilla (in a podcast last year) described how a man had been misidentified twice, was arrested and subsequently lost his job and his career. I also read in last week's New Scientist (15 Dec. 2018) how databases are being developed to know everything about a person, even what TV shows they watch and their internet use. It's well known that China has a social credit system that determines what buildings you can access and what jobs you can apply for. China has the most surveillance cameras anywhere in the world, and they intend to combine them with the latest facial recognition software.

Yuval Harari, in Homo Deus, talks about how algorithms are going to take over our lives, but I think he missed the mark. We are slowly becoming more Orwellian, with social media already influencing election results. In the same issue of New Scientist, journalist Chelsea Whyte asks: Is it time to unfriend the social network? with specific reference to Facebook's recently exposed track record. According to her: "Facebook's motto was once 'move fast and break things.' Now everything is broken." Quoting from the same article:

Now, the UK parliament has published internal Facebook emails that expose the mindset inside the company. They reveal discussions among staff over whether to collect users’ phone call logs and SMS texts through its Android app. “This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.” (So said Product Manager Michael LeBeau in an email from 2015)

Even without Edward Snowden's whistle-blowing exposé, we know that governments the world over are collecting our data because the technological ability to do so is now available. We are approaching a period in our so-called civilised development where we all have an on-line life (if you are reading this) that can be accessed by governments and corporations alike. I've long known that anyone can learn everything they need to know about me from my computer, and increasingly they don't even need the computer.

In one of my fictional stories, I created a dystopian world where everyone had a 'chip' that allowed all conversations to be recorded, so there was literally no privacy. We are fast approaching that scenario in some totalitarian societies. In Communist China under Mao, and in the Soviet Union under Stalin, people found the circle of people they could trust got smaller and smaller. Now, with AI capabilities and internet-wide databases, privacy is becoming illusory. With constant surveillance, all subversion can be tracked and subsequently prosecuted. Someone once said that only societies that are open to new ideas progress. If you live in a society where new ideas are censored, you get stagnation.

In my latest fiction I've created another autocratic world, where everyone is tracked, because everywhere they go they interact with very realistic androids who act as servants, butlers and concierges but, in reality, keep track of what everyone's doing. The only 'futuristic' aspects of this are the androids and the fact that I've set it on some alien world. (My worlds aren't terra-formed; people live in bubbles that create a human-friendly environment.)

After reading these very recent articles in New Scientist and TNY, I’ve concluded that our world is closer to the one I’ve created in my imagination than I thought.


Addendum 1: This is a podcast about so-called Surveillance Capitalism, from Mozilla. Obviously, I use Google and I'm also on Facebook, but I don't use Twitter. Am I part of the problem or part of the solution? The truth is I don't know. I try to make people think and share ideas. I have political leanings, obviously, but they're transparent. Foremost, I believe that if you can't put your name to something you shouldn't post it.

Thursday, 22 November 2018

The search for ultimate truth is unattainable

Someone lent me a really good philosophy book called Ultimate Questions by Bryan Magee. To quote directly from the back flyleaf: "Bryan Magee has had an unusually multifaceted career as a professor of philosophy, music and theatre critic, BBC broadcaster and member of [British] Parliament." It so happens I have another of his books, The Story of Philosophy, which is really a series of interviews with philosophers about philosophers, and I expect it's a transcription of radio broadcasts. Magee was over 80 when he wrote Ultimate Questions, which would have been prior to 2016, when the book was published.

This is a very thought-provoking book, which is what you'd expect from a philosopher. To a large extent, and to my surprise, Magee and I have come to similar positions on fundamental epistemological and ontological issues, albeit by different paths. However, there is also a difference, possibly a divide, which I’ll come to later.

Where to start? I'll start at the end, because it coincides with my beginning. It's not a lengthy tome (120+ pages) and it comprises 7 chapters or topics, which are really discussions. In the last chapter, Our Predicament Summarized, he emphasises his view of an inner and an outer world, both of which elude full comprehension - a view he's spent the best part of the book elaborating on.

As I've discussed previously, the inner and outer world is effectively the starting point for my own world view. The major difference between Magee and myself lies in the paths we've taken. My path has been a scientific one, in particular the science of physics, encapsulating as it does the extremes of the physical universe, from the cosmos to the infinitesimal.

Magee's path has been through the empirical philosophers, from Locke to Hume to Kant to Schopenhauer, eventually arriving at Wittgenstein. His most salient and persistent point is that our belief that we can comprehend everything there is to comprehend about the 'world' is a delusion. He tells an anecdotal story of when he was a student of philosophy and was told that the word 'World' comprised not only what we know but everything we can know. He makes the point, which many people fail to grasp, that there could be concepts beyond our grasp in the same way that there are concepts we understand that are nevertheless beyond the comprehension of the most intelligent chimpanzee or dolphin, or any creature other than a human. None of these creatures can appreciate the extent of the heavens the way we can, or even the way our ancient forebears could. Astronomy has a long history: even indigenous cultures, without the benefit of a written script, have learned to navigate long distances with the aid of the stars. We have a comprehension of the world that no other creature (on this planet) has, so it's quite reasonable to assume that there are aspects of our world that we can't imagine either.

Because my path to philosophy has been through science, I have a subtly different appreciation of this very salient point. I wrote a post based on Noson Yanofsky's The Outer Limits of Reason, which addresses this very issue: there are limits in logic, mathematics and science, and there always will be. But I'm under the impression that Magee takes this point further. He expounds, better than anyone else I've read, the idea that there are actual limits to what our brains can not only perceive but conceptualise, which opens up the possibility, one that most of us ignore, that there are things completely and forever beyond our ken.

As Magee himself states, this opens the door to religion, which he discusses at length, yet he gives this warning: “Anyone who sets off in honest and serious pursuit of truth needs to know that in doing that he is leaving religion behind.” It’s a bit unfair to provide this quote out of context, as it comes at the end of a lengthy discussion, nevertheless, it’s the word ‘truth’ that gives his statement cogency. My own view is that religion is not an epistemology, it’s an experience. What’s more it’s an experience (including the experience of God) that is unique to the person who has it and can’t be shared with anyone else. This puts individual religious experience at odds with institutionalised religions, and as someone pointed out (Yuval Harari, from memory) this means that the people who have religious experiences are all iconoclasts.

I’m getting off the point, but it’s relevant in as much that arguments involving science and religion have no common ground. I find them ridiculous because they usually involve pitting an ancient text (of so-called prophecy) against modern scientific knowledge and all the technology it has propagated, which we all rely upon for our day-to-day existence. If religion ever had an epistemological role it has long been usurped.

On the other hand, if religion is an experience, it is part of the unfathomable which lies outside our rational senses, and is not captured by words. Magee contends that the best one can say about an afterlife or the existence of a God, is that ‘we don’t know’. He calls himself an agnostic but not just in the narrow sense relating to a Deity, but in the much broader sense of acknowledging our ignorance. He discusses these issues in much more depth than my succinct paraphrasing implies. He gives the example of music as something we experience that can’t be expressed in words. Many people have used music as an analogy for religious experience, but, as Magee points out, music has a material basis in instruments and a score and sound waves, whereas religion does not.

Coincidentally, someone today showed me a text on Socrates, from a much larger volume on classical Greece. Socrates famously proclaimed his ignorance as the foundation of his wisdom. In regard to science, he said: “Each mystery, when solved, reveals a deeper mystery.” This statement is so prophetic; it captures the essence of science as we know it today, some 2500 years after Socrates. It’s also the reason I agree with Magee.

John Wheeler conceived a metaphor that I envisaged independently of him. (Further evidence that I've never had an original idea.)

We live on an island of knowledge surrounded by a sea of ignorance.
As our island of knowledge grows, so does the shore of our ignorance.


I contend that the island is science and the shoreline is philosophy, which implies that philosophy feeds science, but also that they are inseparable. By philosophy, in this context, I mean epistemology.

To give an example that confirms both Socrates and Wheeler, the discovery and extensive research into DNA provides both evidence and a mechanism for biological evolution from the earliest life forms to the most complex; yet the emergence of DNA as providing ‘instructions’ for the teleological development of an organism is no less a mystery looking for a solution than evolution itself.

The salient point of Wheeler's metaphor is that the sea of ignorance is infinite and so the island grows but is never complete. In his last chapter, Magee makes the point that truth (even in science) is something we progress towards without attaining. “So rationality requires us to renounce the pursuit of proof in favour of the pursuit of progress.” (My emphasis.) However, 'the pursuit of proof’ is something we’ve done successfully in mathematics ever since Euclid. It is on this point that I feel Magee and I part company.

Like many philosophers, when discussing epistemology, Magee hardly mentions mathematics. Only once, as far as I can tell, towards the very end (in the context of the quote I referenced above about 'proof'), does he include it in the same sentence as science, logic and philosophy as inherited from Descartes, where he has this to say: "It is extraordinary to get people, including oneself, to give up this long-established pursuit of the unattainable." He is right in as much as there will always be truths, including mathematical truths, that we can never know (refer to my recent post on Godel, Turing and Chaitin). But there are also innumerable (mathematical) truths that we have discovered and will continue to discover into the future (part of the island of knowledge). As Freeman Dyson points out while discussing the legacy of Srinivasa Ramanujan's genius, 'Mathematics is forever'. In other words, mathematical truths don't become obsolete in the same way that science does.

I don’t know what Magee’s philosophical stance is on mathematics, but not giving it any special consideration tells me something already. I imagine, from his perspective, it serves no special epistemological role, except to give quantitative evidence for the validity of competing scientific theories.

In one of his earlier chapters, Magee talks about the ‘apparatus’ we have in the form of our senses and our brain that provide a limited means to perceive our external world. We have developed technological means to augment our senses; microscopes and telescopes being the most obvious. But we now have particle accelerators and radio telescopes that explore worlds we didn’t even know existed less than a century ago.

Mathematics, I would contend, is part of that extended apparatus. Riemann’s geometry allowed Einstein to perceive a universe that was ‘curved’ and Euler’s equation allowed Schrodinger to conceive a wave function. Both of these mathematically enhanced ‘discoveries’ revolutionised science at opposite ends of the epistemological spectrum: the cosmological and the subatomic.

Magee rightly points out our almost insignificance in both space and time as far as the Universe is concerned. We are figuratively like the blink of an eye on a grain of sand, yet reality has no meaning without our participation. In reference to the internal and external worlds that formulate this reality, Magee has this to say: “But then the most extraordinary thing is that the world of interaction between these two unintelligibles is rationally intelligible.” Einstein famously made a similar point: "The most incomprehensible thing about the Universe is that it’s comprehensible.”

One can't contemplate that statement, especially in the context of Einstein's iconic achievements, without considering the specific and necessary role of mathematics. Raymond Tallis, who writes a regular column in Philosophy Now, and for whom I have great respect, nevertheless downplays the role of mathematics. He once commented that mathematical Platonists (like me) 'make the error of confusing the map for the terrain.' I wrote a response, saying that the use of that metaphor implies the map is human-made, but what if the map preceded the terrain? (The response wasn't published.) The Universe obeys laws that are mathematically in situ, as first intimated by Galileo, given credence by Kepler, Newton and Maxwell; then Einstein, Schrodinger, Heisenberg and Bohr.

I’d like to finish by quoting Paul Davies:

We have a closed circle of consistency here: the laws of physics produce complex systems, and these complex systems lead to consciousness, which then produces mathematics, which can then encode in a succinct and inspiring way the very underlying laws of physics that gave rise to it.

This, of course, is another way of formulating Roger Penrose’s 3 Worlds, and it’s the mathematical world that is, for me, the missing piece in Magee’s otherwise thought-provoking discourse.


Last word: I’ve long argued that mathematics determines the limits of our knowledge of the physical world. Science to date has demonstrated that Socrates was right: the resolution of one mystery invariably leads to another. And I agree with Magee that consciousness is a phenomenon that may elude us forever.

Addendum: I came across this discussion between Magee and Harvard philosopher, Hilary Putnam, from 1977 (so over 40 years ago), where Magee exhibits a more nuanced view on the philosophy of science and mathematics (the subject of their discussion) than I gave him credit for in my post. Both of these men take their philosophy of science from philosophers, like Kant, Descartes and Hume, whereas I take my philosophy of science from scientists: principally, Paul Davies, Roger Penrose and Richard Feynman, and to a lesser extent, John Wheeler and Freeman Dyson; I believe this is the main distinction between their views and mine. They even discuss this 'distinction' at one point, with the conclusion that scientists, and particularly physicists, are stuck in the past - they haven't caught up (my terminology, not theirs). They even talk about the scientific method as if it's obsolete or anachronistic, though again, they don't use those specific terms. But I'd point to the LHC (built decades after this discussion) as evidence that the scientific method is alive and well, and it works. (I intend to make this a subject of a separate post.)

Friday, 9 November 2018

Can AI be self-aware?

I recently got involved in a discussion on Facebook with a science fiction group on the subject of artificial intelligence. Basically, it started with a video claiming that a robot had demonstrated self-awareness, which is purportedly an early criterion for consciousness. But if you analyse what they actually achieved, it's clever sleight-of-hand at best and pseudo self-awareness at worst. The sleight-of-hand is to turn fairly basic machine logic into an emotive gesture that fools humans (like you and me) into thinking it looks and acts like a human, which I'll describe in detail below.

And it’s pseudo self-awareness in that it’s make-believe, in the same way that pseudo science is make-believe science, meaning pretend science, like creationism. We have a toy that we pretend exhibits self-awareness. So it is we who do the make-believe and pretending, not the toy.

If you watch the video you'll see that they have 3 robots, and they give them a 'dumbing pill' (meaning a switch was pressed) so they can't talk. But one of them is not dumb, and they are asked: "Which pill did you receive?" One of them dramatically stands up and says: "I don't know." But then it waves its arm and says, "I'm sorry, I know now. I was able to prove I was not given the dumbing pill."

Obviously, the entire routine could have been programmed, but let’s assume it’s not. It’s a simple TRUE/FALSE logic test. The so-called self-awareness is a consequence of the T/F test being self-referential – whether it can talk or not. It verifies that it’s False because it hears its own voice. Notice the human-laden words like ‘hears’ and ‘voice’ (my anthropomorphic phrasing). Basically, it has a sensor to detect sound that it makes itself, which logically determines whether the statement, it’s ‘dumb’, is true or false. It says, ‘I was not given a dumbing pill’, which means its sound was not switched off. Very simple logic.
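
As a rough sketch of how little is going on under the hood, the robot's 'reasoning' could be reduced to something like the Python below (a hypothetical reconstruction for illustration, not the code actually used in the demonstration):

    def dumbing_pill_test(sound_sensor_detects_own_voice):
        # Self-referential TRUE/FALSE test: attempt to speak, then check
        # whether the robot's own sound was detected by its sensor.
        statement_i_am_dumb = not sound_sensor_detects_own_voice
        if statement_i_am_dumb:
            return None  # can't report anything aloud anyway
        return "I'm sorry, I know now. I was not given the dumbing pill."

    # The robot says "I don't know", hears itself, and updates its answer.
    print(dumbing_pill_test(sound_sensor_detects_own_voice=True))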

I found an on-line article by Steven Schkolne (who has a PhD in Computer Science from Caltech), so someone with far more expertise in this area than me, yet I found his arguments for so-called computer self-awareness a bit misleading, to say the least. He talks about 2 different types of self-awareness (specifically for computers) – external and internal. An example of external self-awareness is an iPhone knowing where it is, thanks to GPS. An example of internal self-awareness is a computer responding to someone touching the keyboard. He argues that "machines, unlike humans, have a complete and total self-awareness of their internal state". For example, a computer can find every file on its hard drive and even tell you its date of origin, which is something no human can do.
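
Schkolne's file example amounts to the kind of exhaustive self-inventory any operating system performs routinely. A minimal sketch of my own (not his code) might be:

    import os
    import datetime

    def inventory(folder="."):
        # Enumerate every file with its last-modified date: complete 'knowledge'
        # of internal state, but nothing resembling conscious self-awareness.
        for entry in os.scandir(folder):
            if entry.is_file():
                modified = datetime.datetime.fromtimestamp(entry.stat().st_mtime)
                print(f"{entry.name}\t{modified:%Y-%m-%d}")

    inventory()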

From my perspective, this is a bit like the argument that a thermostat can ‘think’. ‘It thinks it’s too hot or it thinks it’s too cold, or it thinks the temperature is just right.’ I don’t know who originally said that, but I’ve seen it quoted by Daniel Dennett, and I’m still not sure if he was joking or serious.

Computers use data in a way that humans can't and never will, which is why their memory recall is superhuman compared to ours. Anyone who can even remotely recall data like a computer is called a savant, like the famous 'Rain Man'. The point is that machines don't 'think' like humans at all. I'll elaborate on this point later. Schkolne's description of self-awareness for a machine has no cognitive relationship to our experience of self-awareness. As Schkolne says himself: "It is a mistake if, in looking for machine self-awareness, we look for direct analogues to human experience." Which leads me to argue that what he calls self-awareness in a machine is not self-awareness at all.

A machine accesses data, like GPS data, which it can turn into a graphic of a map or just numbers representing co-ordinates. Does the machine actually 'know' where it is? You can ask Siri (as Schkolne suggests) and she will tell you, but he acknowledges that it's not Siri's technology of voice recognition and voice replication that makes your iPhone self-aware. No, the machine creates a map so that you know where 'You' are. Logically, a machine, like an aeroplane or a ship, could navigate over large distances with GPS with no humans aboard, as drones do. That doesn't make them self-aware; it makes their logic self-referential, like the toy robot in my introductory example. So what Schkolne calls self-awareness, I call self-referential machine logic. Self-awareness in humans is dependent on consciousness: something we experience, not something we deduce.

And this is the nub of the argument. The argument goes that if self-awareness amongst humans and other species is a consequence of consciousness, then machines exhibiting self-awareness must be the first sign of consciousness in machines. However, self-referential logic, coded into software doesn’t require consciousness, it just requires machine logic suitably programmed. I’m saying that the argument is back-to-front. Consciousness can definitely imbue self-awareness, but a self-referential logic coded machine does not reverse the process and imbue consciousness.

I can extend this argument more generally to contend that computers will never be conscious for as long as they are based on logic. What I’d call problem-solving logic came late, evolutionarily, in the animal kingdom. Animals are largely driven by emotions and feelings, which I argue came first in evolutionary terms. But as intelligence increased so did social skills, planning and co-operation.

Now, insect colonies seem to give the lie to this. They are arguably closer to how computers work, based on algorithms that are possibly chemically driven (I actually don't know). The point is that we don't think of ants and bees as having human-like intelligence. A termite nest is an organic marvel, yet we don't think the termites actually plan its construction the way a group of humans would. In fact, some would probably argue that insects don't even have consciousness. Actually, I think they do. But to give another well-known example, I think the dance that bees do is 'programmed' into their DNA, whereas humans would have to 'learn' it from their predecessors.

There is a way in which humans are like computers, which I think muddies the waters, and leads people into believing that the way we think and the way machines ‘think’ is similar if not synonymous.

Humans are unique within the animal kingdom in that we use language like software; we effectively ‘download’ it from generation to generation and it limits what we can conceive and think about, as Wittgenstein pointed out. In fact, without this facility, culture and civilization would not have evolved. We are the only species (that we are aware of) that develops concepts and then manipulates them mentally because we learn a symbol-based language that gives us that unique facility. But we invented this with our brains; just as we invent symbols for mathematics and symbols for computer software. Computer software is, in effect, a language and it’s more than an analogy.

We may be the only species that uses symbolic language, but so do computers. Note that computers are like us in this respect, rather than us being like computers. With us, consciousness is required first, whereas with AI, people seem to think the process can be reversed: if you create a machine logic language with enough intelligence, then you will achieve consciousness. It’s back-to-front, just as self-referential logic creating self-aware consciousness is back-to-front.

I don't think AI will ever be conscious or sentient. There seems to be an assumption that if you make a computer more intelligent it will eventually become sentient. But I contend that consciousness is not an attribute of intelligence. I don't believe that more intelligent species are more conscious or more sentient. In other words, I don't think the two attributes are concordant, even though there is an obvious dependency between consciousness and intelligence in animals. But it’s a one way dependency; if consciousness was dependent on intelligence then computers would already be conscious.


Addendum: The so-called Turing test is really a test for humans, not robots, as this 'interview' with 'Sophia' illustrates.

Thursday, 11 October 2018

My philosophy in 24 dot points

A friend (Erroll Treslan) posted on Facebook a link to a matrix that attempts to encapsulate the history of (Western) philosophy by listing the most influential people and linking their ideas, either conflicting or in agreement.

I decided to attempt the same for myself and have included those people, whom I believe influenced me, which is not to say they agree with me. In the case of some of my psychological points I haven’t cited anyone as I’ve forgotten where my beliefs came from (in those cases).

  • There are 3 worlds: physical, mental and mathematical. (Penrose)
  • Consciousness exists in a constant present; classical physics describes the past and quantum mechanics describes the future. (Schrodinger, Bragg, Dyson)
  • Reality requires both consciousness and a physical universe. You can have a universe without consciousness, which was the case in the past, but it has no meaning and no purpose. (Barrow, Davies)
  • Purpose has evolved but the Universe is not teleological in that it is not determinable. (Davies)
  • There is a cosmic anthropic principle; without sentient beings there might as well be nothing. (Carter, Barrow, Davies)
  • Mathematics exists independently from humans and the Universe. (Barrow, Penrose, Pythagoras, Plato)
  • There will always be mathematical truths we don’t know. (Godel, Turing, Chaitin)
  • Mathematics is not a language per se. It starts with the prime numbers, called the 'atoms of mathematics', yet extends to infinity and the transcendental. (Euclid, Euler, Riemann)
  • The Universe created the means to understand itself, with mathematics the medium and humans the only known agents. (Einstein, Wigner)
  • The Universe obeys laws dependent on fine-tuned mathematical parameters. (Hoyle, Barrow, Davies)
  • The Universe is not a computer; chaos rules and is not predictable. (Stewart, Gleick)
  • The brain does not run on algorithms; there is no software. (Penrose, Searle)
  • Human language is analogous to software because we ‘download’ it from generation to generation and it ‘mutates’; if I can mix my metaphors. (Dawkins, Hofstadter)
  • We think and conceptualise in a language. Axiomatically, this limits what we can conceive and think about. (Wittgenstein)
  • We only learn something new when we integrate it into what we already know. (Wittgenstein)
  • Humans have the unique ability to nest concepts within concepts ad-infinitum, which mirror the physical world. (Hofstadter)
  • Morality is largely subjective, dependent on cultural norms but malleable by milieu, conditioning and cognitive dissonance. (Mill, Zimbardo)
  • It is inherently human to form groups with an ingroup-outgroup mentality.
  • Evil requires the denial of humanity in others.
  • Empathy is the key to social reciprocity at all levels of society. (Confucius, Jesus)
  • Quality of life is dependent on our interaction with others from birth to death. (Aristotle, Buddha)
  • Wisdom comes from adversity. The premise of every story ever told is about a protagonist dealing with adversity – it’s a universal theme (Frankl, I Ching).
  • God is an experience that is internal, yet is perceived as external. (Feuerbach)
  • Religion is the mind’s quest to find meaning for its own existence.

Addendum: I’ve changed it from 23 points to 24 by adding point 22. It’s actually a belief I’ve held for some time. They are all ‘beliefs’ except point 7, which arises from a theorem.

Monday, 3 September 2018

Is the world continuous or discrete?

There is an excellent series on YouTube called 'Closer to Truth', where the host, Robert Lawrence Kuhn, interviews some of the cleverest people on the planet (about existential and epistemological issues) in such a way that ordinary people, like you and me, can follow. I understand from Wikipedia that it's really a television series, started in 2000 on America's PBS.

In an interview with Gregory Chaitin, he asks the above question, which made me go back and re-read Chaitin's book, Thinking about Godel and Turing, which I originally bought and read over a decade ago, and then posted about on this blog (not long after I created it). It's really a collection of talks and abridged papers given by Chaitin from 1970 to 2007, so there's a lot of repetition, but also an evolution in his narrative and ideas. Reading it for the second time (from cover to cover) over a decade later has the benefit of the filter of all the accumulated knowledge I've acquired in the interim.

More than one person (Umberto Eco and Jeremy Lent, for example) has wondered whether the discreteness we find in the world, and which we logically apply to mathematics, is a consequence of a human projection rather than an objective reality. In other words, is it an epistemological bias rather than an ontological condition? I'll return to this point later.

Back to Chaitin's opus: he effectively takes us through the same logical and historical evolution over and over again, which ultimately leads to the same conclusions. I'll summarise briefly. In 1931, Kurt Godel proved a theorem that effectively tells us that, within a formal axiom-based mathematical system, there will always be mathematical truths that can't be proved. Then in 1936, Alan Turing proved, with a thought experiment that presaged the modern computer, that there will always be machine calculations that may never stop, and that we can't predict in advance whether they will or not. For example, Riemann's hypothesis can be checked by an algorithm to whatever limit you like (and is being checked somewhere right now, probably), but you can never know in advance whether the program will ever stop (by finding a false result). As Chaitin points out, this is an extension of Godel's theorem, and Godel's theorem can be deduced from Turing's.
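
To make Turing's point concrete, here is a toy sketch of a program whose halting is equivalent to the truth of a conjecture, using Goldbach's conjecture as a simpler stand-in for Riemann's hypothesis (my own illustration, chosen only because it needs a few lines of code): it halts only if it finds a counterexample, and no one can tell you in advance whether it ever will.

    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def has_goldbach_sum(n):
        # Can the even number n be written as the sum of two primes?
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    def search_for_counterexample():
        # Halts only if Goldbach's conjecture is false; whether it ever halts
        # is precisely the kind of question Turing showed we can't decide.
        n = 4
        while True:
            if not has_goldbach_sum(n):
                return n
            n += 2

    # search_for_counterexample()  # uncomment to run (possibly forever)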

Then Chaitin himself proved, by inventing (or discovering) a mathematical device called Omega (Ω), that there are innumerable numbers that can never be completely calculated (Omega gives the probability of a Turing program halting). In fact, there are more incomputable numbers than computable numbers, even though both are infinite in extent: the computable Reals (which include the rationals) are countably infinite, while the incomputable Reals are uncountably infinite. I've mentioned this previously when discussing Noson Yanofsky's excellent book, The Outer Limits of Reason: What Science, Mathematics, and Logic CANNOT Tell Us. Chaitin claims that this proves that Godel's Incompleteness Theorem is not some aberration, but is part of the foundation of mathematics – there are infinitely more numbers that can't be calculated than those that can.

So that’s the gist of Chaitin’s book, but he draws some interesting conclusions on the side, so-to-speak. For a start, he argues that maths should be done more like physics and maybe we should accept some unproved theorems (like Riemann’s) as new axioms, as one would in physics. In fact, this is happening almost by default in as much as there already exists new theorems that are dependent on Riemann’s conjecture being true. In other words, Riemann’s hypothesis has effectively morphed into a mathematical caveat so people can explore its consequences.

The other area of discussion that Chaitin gets into, which is relevant to this discussion is whether the Universe is like a computer. He cites Stephen Wolfram (who invented Mathematica) and Edward Fredkin.

According to Pythagoras everything is number, and God is a mathematician… However, now a neo-Pythagorean doctrine is emerging, according to which everything is 0/1 bits, and the world is built entirely out of digital information. In other words, now everything is software, God is a computer programmer, not a mathematician, and the world is a giant information-processing system, a giant computer [Fredkin, 2004, Wolfram, 2002, Chaitin, 2005].

Carlo Rovelli also argues that the Universe is discrete, but for different reasons. It's discrete because quantum mechanics (QM) has a Planck limit for both time and space, which would suggest that even space-time is discrete. Therefore it would seem to lend itself to being made up of 'bits'. This fits in with the current paradigm that QM, and therefore reality, is really about 'information', and information, as we know, comes in 'bits'.

Chaitin, at one point, goes so far as to suggest that the Universe calculates its future state from the current state. This is very similar to Newton’s clockwork universe, whereby Laplace famously claimed that given the position of every particle in the Universe and all the relevant forces, one could, in principle, ‘read the future just as readily as the past’. These days we know that’s not correct, because we’ve since discovered QM, but people are arguing that a QM computer could do the same thing. David Deutsch is one who argues that (in principle).

There is a fundamental issue with all this that everyone seems to have either forgotten or ignored. Prior to the last century, a man called Henri Poincare discovered some mathematical gremlins that seemed of little relevance to reality, but eventually led to a physics discipline which became known as chaos theory.

So after re-reading Chaitin’s book, I decided to re-read Ian Stewart’s erudite and deeply informative book, Does God Play Dice? The New Mathematics of Chaos.

Not quite a third of the way through, Stewart introduces Chaitin’s theorem (of incomputable numbers) to demonstrate why the initial conditions in chaos theory can never be computed, which I thought was a very nice and tidy way to bring the 2 philosophically opposed ideas together. Chaos theory effectively tells us that a computer can never predict the future evolvement of the Universe, and it’s Chaitin’s own theorem which provides the key.
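
The textbook illustration of why initial conditions defeat computation is the logistic map. The sketch below (my own illustration, not Stewart's) starts two orbits that differ only in the ninth decimal place; within a few dozen iterations they have nothing to do with each other, so any truncation of the initial condition eventually ruins the prediction.

    def logistic_orbit(x0, r=4.0, steps=50):
        # Iterate the logistic map x -> r*x*(1-x), a standard chaotic system
        x, orbit = x0, []
        for _ in range(steps):
            x = r * x * (1.0 - x)
            orbit.append(x)
        return orbit

    a = logistic_orbit(0.400000000)
    b = logistic_orbit(0.400000001)  # differs by 0.000000001

    for step in (10, 25, 50):
        print(step, abs(a[step - 1] - b[step - 1]))  # the tiny gap explodes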

At another point, Stewart quips that God uses an analogue computer. He’s referring to the fact that most differential equations (used by scientists and engineers) are linear whilst nature is clearly nonlinear.

Today’s science shows that nature is relentlessly nonlinear. So whatever God deals with… God’s got an analogue computer as versatile as the entire universe to play with – in fact, it is the entire universe. (Emphasis in the original.)

As all scientists know (and Yanofsky points out in his book) we mostly use statistical methods to understand nature’s dynamics, not the motion of individual particles, which would be impossible. Erwin Schrodinger made a similar point in his excellent tome, What is Life? To give just one example that most people are aware of: radioactive decay (an example Schrodinger used). Statistically, we know the half-lives of radioactive decay, which follow a precise exponential rule, but no one can predict the radioactive decay of an individual isotope.
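
A quick simulation makes Schrodinger's point nicely: each individual 'atom' below decays at an unpredictable moment, yet the population as a whole follows the exponential half-life law with great precision (a toy sketch of my own, not anything from What is Life?).

    import random

    def simulate_decay(n_atoms=100_000, decay_prob=0.05, steps=60):
        # Each atom has the same fixed chance of decaying in each time step.
        # No individual decay is predictable, but the totals trace out an
        # exponential curve whose half-life is set by decay_prob.
        remaining, counts = n_atoms, [n_atoms]
        for _ in range(steps):
            remaining = sum(1 for _ in range(remaining) if random.random() > decay_prob)
            counts.append(remaining)
        return counts

    counts = simulate_decay()
    half_life = next(i for i, c in enumerate(counts) if c <= counts[0] / 2)
    print("observed half-life:", half_life, "steps")  # about 13 or 14 steps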

Whilst on the subject of Schrodinger, his eponymous equation is both linear and deterministic, which seems to contradict the discrete and probabilistic effects of QM. Perhaps that is why Carlo Rovelli contends that Schrodinger's wavefunction has misled our attempts to understand QM in reality.

Roger Penrose explicates QM in phases: U, R and C (he always displays them bold), depicting the wave function phase; the measurement phase; and the classical physics phase. Logically, Schrodinger’s wave function only exists in the U phase, prior to measurement or observation. If it wasn’t linear you couldn’t add the waves together (of all possible paths) which is essential for determining the probabilities and is also fundamental to QED (which is the latest iteration of QM). The fact that it’s deterministic means that it can calculate symmetrically forward and backward in time.

My own take on this is that QM and classical physics obey different rules, and the rules for classical physics are chaos, which are neither predictable nor linear. Both lead to unpredictability but for different reasons and using different mathematics. Stewart has argued that just maybe you could describe QM using chaos theory and David Deutsch has argued the opposite: that you could use the multi-world interpretation of QM to explain chaos theory. I think they’re both wrong-headed, but I’m the first to admit that all these people know far more than me. Freeman Dyson (one of the smartest physicists not to win a Nobel Prize) is the only other person I know who believes that maybe QM and classical physics are distinct. He’s pointed out that classical physics describes events in the past and QM provides future probabilities. It’s not a great leap from there to suggest that the wavefunction exists in the future.

You may have noticed that I’ve wandered away from my original question, so maybe I should wonder my way back. In my introduction, I mentioned the epistemological point, considered by some, that maybe our employment of mathematics, which is based on integers, has made us project discreteness onto the world.

Chaitin's theorem demonstrates that most of mathematics is not discrete at all. In fact, he cites his hero, Gottfried Leibniz, to the effect that most of mathematics is 'transcendental', which means it's beyond our intellectual grasp. This turns the general perception that mathematics is a totally logical construct on its head. We access mathematics using logic, but if there is an uncountable infinity of Reals that are not computable, then, logically, they are not accessible to logic, including computer logic. This is a consequence of Chaitin's own theorem, yet he argues that this is precisely why that part of mathematics isn't part of reality.

In fact, Chaitin would argue that it's because of that inaccessibility that a discrete universe makes sense. In other words, a discrete universe would be computable. However, chaos theory suggests God would have to keep resetting his parameters. (There is such a thing as 'chaotic control', called 'Proportional Perturbation Feedback', PPF, which is how pacemakers work.)

Ian Stewart has something to say on this, albeit while talking about something else. He makes the valid point that there is a limit to how many decimals you can use in a computer, which has practical limitations:

The philosophical point is that the discrete computer model we end up with is not the same as the discrete model given by atomic physics.

Continuity uses calculus, as in the case of Schrodinger’s equation (referenced above) but also Einstein’s field equations, and calculus uses infinitesimals to maintain continuity mathematically. A computer doing calculus ‘cheats’ (as Stewart points out) by adding differences quite literally.
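
That 'cheating' is just numerical integration: instead of taking a genuine limit, the computer literally adds up small differences, as in this minimal Euler-method sketch (my own illustration of the general point, not an example from Stewart):

    def euler_integrate(dydt, y0, t0, t1, n_steps):
        # Approximate dy/dt = f(t, y) by adding finite differences:
        # y_{k+1} = y_k + f(t_k, y_k) * dt
        dt = (t1 - t0) / n_steps
        t, y = t0, y0
        for _ in range(n_steps):
            y += dydt(t, y) * dt
            t += dt
        return y

    # dy/dt = y has the exact (continuous) solution e^t; the finite-difference
    # answer is close but never exact, however small we make the steps.
    print(euler_integrate(lambda t, y: y, y0=1.0, t0=0.0, t1=1.0, n_steps=1000))
    print(2.718281828459045)  # the 'continuous' answer, e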

This leads Stewart to make the following observation:

Computers can work with a small number of particles. Continuum mechanics can work with infinitely many. Zero or infinity. Mother Nature slips neatly into the gap between the two.

Wolfram argues that the Universe is pseudo-random, which would allow it to run on algorithms. But there are 2 levels of randomness, one caused by QM and one caused by chaos. (Chaos can create stability as well, which I've discussed elsewhere.) The point is that initial conditions have to be calculated to infinity to determine chaotic phenomena (like weather), and this applies to virtually everything in nature. Even the orbits of the planets are chaotic, but over millions, even billions, of years. So at some level the Universe may be discrete, even at the Planck scale, but when it comes to evolutionary phenomena, chaos rules, and it's neither computably determinable (long term) nor computably discrete.

There is one aspect of this that I've never seen discussed, and that is the relationship between chaos theory and time. Carlo Rovelli, in his recent book, The Order of Time, argues that 'time's arrow' can only be explained by entropy, but another physicist, Richard A. Muller, in his book, Now: The Physics of Time, argues the converse. Muller provides a lengthy and compelling argument on why entropy doesn't explain the arrow of time.

This may sound simplistic, but entropy is really about probabilities. As time progresses, a dynamic system, if left to its own devices, progresses to states of higher probability. For example, perfume released from a bottle in one corner of a room soon dissipates throughout the room, because there is a much higher probability of that than of it accumulating in one spot. A broken egg has an infinitesimally low probability of coming back together again. The caveat, 'left to its own devices', simply means that the system is in equilibrium, with no external source of energy to keep it out of equilibrium.
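
The arithmetic behind this is straightforward. The probability of all N molecules happening to be in, say, the left half of the room is (1/2)^N, which becomes absurdly small for anything macroscopic; a back-of-the-envelope sketch:

    from math import log10

    def log10_prob_all_in_one_half(n_molecules):
        # log10 of the probability that every one of n independent molecules
        # is found in the same half of the room: (1/2)**n
        return n_molecules * log10(0.5)

    for n in (10, 100, 10**23):
        print(n, log10_prob_all_in_one_half(n))
    # Ten molecules: about 1 chance in a thousand. A real perfume cloud
    # (~10^23 molecules): a probability so small that 'never' is the only
    # sensible description.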

What has this to do with chaos theory? Well, chaotic phenomena are time asymmetrical (rerun them and you get a different outcome). Take weather. If weather were time-reversal symmetric, forecasts would be easy. And weather is not in a state of equilibrium, so entropy is not the dominant factor. Take another example: biological evolution. It's not driven by entropy, because it increases in complexity, but it's definitely time asymmetrical and it's chaotic. In fact, speciation appears to be fractal, which is a signature of chaos.

Now, I pointed out that the U phase of Penrose's explication of QM is time symmetrical, but I would contend that the overall U, R, C sequence is not. I contend that there is a sequence from QM to classical physics that is time asymmetrical. This implies, of course, that QM and classical physics are distinct.


Addendum 1: This is slightly off-topic, but relevant to my own philosophical viewpoint. Freeman Dyson delivers a lecture on QM, and, in the 22.15 to 24min time period, he argues that the wavefunction and QM can only tell us about the future and not the past.

Addendum 2 (Conclusion): Someone told me that this was difficult to follow, so I've written a summary based on a comment I gave below.

Chaitin's theorem arises from his derivation of omega (Ω), which is the 'halting probability', an extension of Turing's famous halting theorem. You can read about it here, including its significance to incomputability.

I agree with Chaitin 'mathematically' in that I think there are infinitely more incomputable Reals than computable Reals. Transcendental numbers like π and e already hint at this: they can never be written out in full, though strictly speaking they are computable, since there are algorithms that will deliver them to whatever resolution you like. Chaitin's Ω is worse; it can only be approximated, and you can never know how close the approximation is, let alone compute it to infinity.

I disagree with him 'philosophically' in that I don't think the Universe is necessarily discrete and can be reduced to 0s and 1s (bits). In other words, I don't think the Universe is like a computer.

Curiously and ironically, Chaitin has proved that the foundation of mathematics consists mostly of incomputable Reals, yet he believes the Universe is computable. I agree with him on the first part but not the second.

Addendum 3: I discussed the idea of the Universe being like a computer in an earlier post, with no reference to Chaitin or Stewart.

Addendum 4: I recently read Jim Al-Khalili's chapter on 'Laplace's demon' in his book, Paradox; The Nine Greatest Enigmas in Physics, which is specifically a discussion on 'chaos theory'. Al-Khalili contends that 'the Universe is almost certainly deterministic', but I think his definition of 'deterministic' might be subtly different to mine. He rightly points out that chaos is deterministic but unpredictable. What this means is that everything in the past and everything in the future has a cause and effect. So there is a causal path from any current event to as far into the past as you want to go. And there will also be a causal path from that same event into the future; it's just that you won't be able to predict it because it's uncomputable. In that sense the future is deterministic but not determinable. However (as Ian Stewart points out in Does God Play Dice?) if you re-run a chaotic experiment you will get a different result, which is almost a definition of chaos; tossing a coin is the most obvious example (cited by Stewart). My point is that if the Universe is chaotic then it follows that if you were to rerun the Universe you'd get a different result. So it may be 'deterministic' but it's not 'determined'. I might elaborate on this in a separate post.