Paul P. Mealing


Monday, 28 October 2024

Do we make reality?

 I’ve read 2 articles, one in New Scientist (12 Oct 2024) and one in Philosophy Now (Issue 164, Oct/Nov 2024), which, on the surface, seem unrelated, yet both deal with human exceptionalism (my term) in the context of evolution and the cosmos at large.
 
Starting with New Scientist, there is an interview with theoretical physicist, Daniele Oriti, under the heading, “We have to embrace the fact that we make reality” (quotation marks in the original). In some respects, this continues themes I raised in my last post, but with different emphases.
 
This helps to explain the title of the post, but, even if it’s true, there are degrees of possibilities – it’s not all or nothing. Having said that, Donald Hoffman would argue that it is all or nothing, because, according to him, even ‘space and time don’t exist unperceived’. On the other hand, Oriti’s argument is closer to Paul Davies’ ‘participatory universe’ that I referenced in my last post.
 
Where Oriti and I possibly depart, philosophically speaking, is that he calls the idea of a reality independent of us ‘observers’ “naïve realism”. He acknowledges that this is ‘provocative’, but like many provocative ideas it provides food for thought. Firstly, I will delineate how his position differs from Hoffman’s; Oriti never mentions Hoffman, but I think the comparison is important.
 
Both Oriti and Hoffman argue that there seems to be something even more fundamental than space and time. There is even a recent YouTube video where Hoffman claims that he’s shown mathematically that consciousness produces the mathematical components that give rise to spacetime; he has published a paper on this (which I haven’t read). But in both cases (Hoffman’s and Oriti’s), the something ‘more fundamental’ is mathematical, and one needs to be careful about reifying mathematical expressions, which I once discussed with physicist Mark John Fernee (University of Queensland).
 
The main issue I have with Hoffman’s approach is that space-time is dependent on conscious agents creating it, whereas, from my perspective and that of most scientists (although I’m not a scientist), space and time exist external to the mind. There is an exception, of course, and that is when we dream.
 
If I were to meet Hoffman, I would ask him if he’s heard of proprioception, which I’m sure he has. I describe it as the 6th sense we are mostly unaware of, but which we couldn’t live without. Actually, we could, but with great difficulty. Proprioception is the sense that tells us where our body extremities are in space, independently of sight and touch. Why would we need it, if space is created by us? On the other hand, Hoffman talks about a ‘H sapiens interface’, which he likens to ‘desktop icons on a computer screen’. So, somehow, our proprioception relates to a ‘spacetime interface’ (his term) that doesn’t exist outside the mind.
 
A detour, but relevant, because space is something we inhabit, along with the rest of the Universe, and so is time. In relativity theory there is absolute space-time, as opposed to absolute space and time separately. It’s called the fabric of the universe, which is more than a metaphor. As Viktor Toth points out, even QFT seems to work ‘just fine’ with spacetime as its background.
 
We can do quantum field theory just fine on the curved spacetime background of general relativity.

 
[However] what we have so far been unable to do in a convincing manner is turn gravity itself into a quantum field theory.
 
And this is where Oriti argues we need to find something deeper. To quote:
 
Modern approaches to quantum gravity say that space-time emerges from something deeper – and this could offer a new foundation for physical laws.
 
He elaborates: I work with quantum gravity models in which you don’t start with a space-time geometry, but from more abstract “atomic” objects described in purely mathematical language. (Quotation marks in the original.)
 
And this is the nub of the argument: all our theories are mathematical models and none of them is complete, inasmuch as they all have limitations. If one looks at the history of physics, we have uncovered new ‘laws’ and new ‘models’ when we’ve looked beyond the limitations of an existing theory. And some mathematical models even turned out to be incorrect, despite giving answers to what was ‘known’ at the time – the best example being Ptolemy’s Earth-centric model of the solar system. Whether string theory falls into the same category, only future historians will know.
 
In addition, different models work at different scales. As someone pointed out (Mile Gu at the University of Queensland), mathematical models of phenomena at one scale are different to mathematical models at an underlying scale. He gave the example of magnetism, demonstrating that mathematical modelling of the magnetic forces in iron could not predict the pattern of atoms in a 3D lattice as one might expect. In other words, there should be a causal link between individual atoms and the overall effect, but it could not be determined mathematically. To quote Gu: “We were able to find a number of properties that were simply decoupled from the fundamental interactions.” Furthermore, “This result shows that some of the models scientists use to simulate physical systems have properties that cannot be linked to the behaviour of their parts.”
 
This makes me sceptical that we will find an overriding mathematical model that will entail the Universe at all scales, which is what theories of quantum gravity attempt to do. One of the issues that some people raise is that a feature of QM is superposition, and the superposition of a gravitational field seems inherently problematic.
 
Personally, I think superposition only makes sense if it’s describing something that is yet to happen, which is why I agree with Freeman Dyson that QM can only describe the future – and why it only gives us probabilities.
 
Also, in quantum cosmology, time disappears (according to Paul Davies, among others) and this makes sense (to me), if it’s attempting to describe the entire universe into the future. John Barrow once made a similar point, albeit more eruditely.
 
Getting off track, but one of the points that Oriti makes is whether the laws and the mathematics that describe them are epistemic or ontic. In other words, are they reality, or just descriptions of reality? I think it gets blurred because, while they are epistemic by design, there is still an ontology that exists without them, which Oriti calls ‘naïve realism’; he contends that reality doesn’t exist independently of us. This is where I always cite Kant: that we may never know the ‘thing-in-itself’, but only our perception of it. Where I diverge from Kant is that the mathematical models are part of our perception. Where I depart from Oriti is that I argue there is a reality independent of us.
 
Both QM and relativity theory are observer-dependent, which means they could both be describing an underlying reality that continually eludes us. Whereas Oriti argues that ‘reality is made by our models, not just described by them’, which would make it subjective.
 
As I pointed out in my last post, there is an epistemological loop, whereby the Universe created the means to understand itself, through us. Whether there is also an ontological loop as both Davies and Oriti infer, is another matter: do we determine reality through our quantum mechanical observations? I will park that while I elaborate on the epistemic loop.
 
And this finally brings me to the article in Philosophy Now by James Miles titled, We’re as Smart as the Universe gets. He argues that, from an evolutionary perspective, there was only a one-in-one-billion chance that a species with our cognitive abilities would arise by natural selection, and that there is no reason, from an evolutionary standpoint, why we would evolve further. I have touched on this before, where I pointed out that our cultural evolution has overtaken our biological evolution, and that the same would happen to any other species in the Universe that developed cognitive abilities to our level. Dawkins coined the term ‘meme’ to describe cultural traits that have ‘survived’, which now, of course, has currency on social media way beyond its original intention. Basically, Dawkins saw memes as analogous to genes, which get selected; not by a natural process but by a cultural process.
 
I’ve argued elsewhere that mathematical theorems and scientific theories are not inherently memetic. This is because they are chosen because they are successful, whereas memes are successful because they are chosen. Nevertheless, such theorems and theories only exist because a culture has developed over millennia which explores them and builds on them.
 
Miles talks about ‘the high intelligence paradox’, which he associates with Darwin’s ‘highest and most interesting problem’. He then discusses the inherent selection advantage of co-operation, not to mention specialisation. He talks about the role that language has played, which is arguably what really separates us from other species. I’ve argued that it’s our inherent ability to nest concepts within concepts ad infinitum (most obvious in our facility for language, as I’m doing now) that allows us not only to tell stories, compose symphonies and explore an abstract mathematical landscape, but to build motor cars and aeroplanes, and fly men to the Moon. Are we the only species in the Universe with this super-power? I don’t know, but it’s possible.
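As a toy illustration of that nesting (my sketch, not Miles’s), a single recursive rule is all it takes to generate sentences embedded within sentences without limit – the formal property linguists call recursion:

    import random

    # A toy recursive grammar: a sentence can embed another whole sentence as a clause.
    NOUNS = ['the dog', 'the man', 'the story']
    VERBS = ['chased', 'saw', 'told']

    def sentence(depth):
        s = f"{random.choice(NOUNS)} {random.choice(VERBS)} {random.choice(NOUNS)}"
        if depth > 0:
            # Nest a complete sentence inside this one, as natural language lets us do ad infinitum.
            s += f" that {sentence(depth - 1)}"
        return s

    print(sentence(3))  # e.g. 'the man saw the story that the dog chased the man that ...'

Each level of ‘that …’ is a concept nested inside another concept; only working memory, not the grammar, limits the depth.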
 
There are 2 quotes I keep returning to:
 
The most incomprehensible thing about the Universe is that it’s comprehensible. (Einstein)
 
The Universe gave rise to consciousness and consciousness gives meaning to the Universe.
(Wheeler)
 
I haven’t elaborated, but Miles makes the point, while referencing historical antecedents, that there appears to be no evolutionary ‘reason’ that a species should make this ‘one-in-one-billion transition’ (his nomenclature). Yet, without this transition, the Universe would have no meaning that could be comprehended. As I say, that’s the epistemic loop.
 
As for an ontic loop, that is harder to argue. Photons exist in zero time, which is why I contend they are always in the future of whatever they interact with, even if they were generated in the CMBR some 13.8 billion years ago. So how do we resolve that paradox? I don’t know, but maybe that’s the link that Davies and Oriti are talking about, though neither of them mentions it. But here’s the thing: when you do detect such a photon (for which time is zero), you instantaneously ‘see’ back to 380,000 years after the Universe’s birth.
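For readers who want the formal basis of the ‘zero time’ claim: in special relativity the proper time Δτ along a path is given by the invariant interval,

    (cΔτ)² = (cΔt)² − (Δx)²

and for a photon Δx = cΔt, so Δτ = 0. No proper time elapses between emission and absorption, whether the journey takes a nanosecond or 13.8 billion years in our frame.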





Sunday, 2 June 2024

Radical ideas

It’s hard to think of anyone I admire in physics and philosophy who doesn’t have at least one radical idea – even Richard Feynman, who avoided hyperbole and embraced doubt as part of his credo: "I’d rather have doubt and be uncertain, than be certain and wrong."
 
But then you have this quote from his good friend and collaborator, Freeman Dyson:

Thirty-one years ago, Dick Feynman told me about his ‘sum over histories’ version of quantum mechanics. ‘The electron does anything it likes’, he said. ‘It goes in any direction at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function.’ I said, ‘You’re crazy.’ But he wasn’t.
 
In fact, his crazy idea led him to a Nobel Prize. That exception aside, most radical ideas are either stillborn or yet to bear fruit, and that includes mine. No, I don’t compare myself to Feynman – I’m not even a physicist – and the truth is I’m unsure if I even have an original idea to begin with, radical or otherwise. I just read a lot of books by people much smarter than me, and cobble together a philosophical approach that I hope is consistent, even if sometimes unconventional. My only consolation is that I’m not alone. Most, if not all, of the people smarter than me also hold unconventional ideas.
 
Recently, I re-read Robert M. Pirsig’s iconoclastic book, Zen and the Art of Motorcycle Maintenance, which I originally read in the late 70s or early 80s, so within a decade of its publication (1974). It wasn’t how I remembered it, not that I remembered much at all, except it had a huge impact on a lot of people who would never normally read a book that was mostly about philosophy, albeit disguised as a road-trip. I think it keyed into a zeitgeist at the time, where people were questioning everything. You might say that was more the 60s than the 70s, but it was nearly all written in the late 60s, so yes, the same zeitgeist, for those of us who lived through it.
 
Its relevance to this post is that Pirsig had some radical ideas of his own – at least, radical to me and to virtually anyone with a science background. I’ll give you a flavour with some selective quotes. But first some context: the story’s protagonist, who we assume is Pirsig himself, telling the story in first person, is having a discussion with his fellow travellers, a husband and wife, who have their own motorcycle (Pirsig is travelling with his teenage son as pillion), so there are 2 motorcycles and 4 companions for at least part of the journey.
 
Pirsig refers to a time (in Western culture) when ghosts were considered a normal part of life. But then introduces his iconoclastic idea that we have our own ghosts.
 
Modern man has his own ghosts and spirits too, you know.
The laws of physics and logic… the number system… the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.

 
Then he specifically cites the law of gravity, saying provocatively:
 
The law of gravity and gravity itself did not exist before Isaac Newton. No other conclusion makes sense.
And what that means, is that the law of gravity exists nowhere except in people’s heads! It’s a ghost! We are all of us very arrogant and conceited about running down other people’s ghosts but just as ignorant and barbaric and superstitious about our own.
Why does everybody believe in the law of gravity then?
Mass hypnosis. In a very orthodox form known as “education”.

 
He then goes from the specific to the general:
 
Laws of nature are human inventions, like ghosts. Laws of logic, of mathematics are also human inventions, like ghosts. The whole blessed thing is a human invention, including the idea it isn’t a human invention. (His emphasis)
 
And this is philosophy in action: someone challenges one of your deeply held beliefs, which forces you to defend it. Of course, I’ve argued the exact opposite, claiming that ‘in the beginning there was logic’. And it occurred to me right then, that this in itself, is a radical idea, and possibly one that no one else holds. So, one person’s radical idea can be the antithesis of someone else’s radical idea.
 
Then there is this, which I believe holds the key to our disparate points of view:
 
We believe the disembodied 'words' of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words. They were always there, even when they applied to nothing. Gradually the world came into being and then they applied to it. In fact, those words themselves were what formed the world. (again, his emphasis)
 
Note his emphasis on 'words', as if they alone make some phenomenon physically manifest.
 
My response: don’t confuse or conflate the language one uses to describe some physical entity, phenomenon or manifestation with what it describes. The natural laws, including gravity, are mathematical in nature, obeying sometimes obtuse and esoteric mathematical relationships, which we have uncovered over eons of time – which doesn’t mean they only came into existence when we discovered them and created the language to describe them. It’s true that mathematical notation only exists in the mind, including the number system we adopt, but the mathematical relationships that the notation describes exist independently of mind, in the same way that nature’s laws do.
 
John Barrow, cosmologist and Fellow of the Royal Society, made the following point about the mathematical ‘laws’ we formulated to describe the first moments of the Universe’s genesis (Pi in the Sky, 1992).
 
Specifically, he says our mathematical theories describing the first three minutes of the Universe predict specific ratios of the earliest ‘heavier’ elements: deuterium, 2 isotopes of helium, and lithium, whose abundances are 1/1,000, 1/1,000, 22% and 1/100,000,000 respectively, with the remainder (roughly 78%) being hydrogen. And this has been confirmed by astronomical observations. He then makes the following salient point:



It confirms that the mathematical notions that we employ here and now apply to the state of the Universe during the first three minutes of its expansion history at which time there existed no mathematicians… This offers strong support for the belief that the mathematical properties that are necessary to arrive at a detailed understanding of events during those first few minutes of the early Universe exist independently of the presence of minds to appreciate them.
 
As you can see this effectively repudiates Pirsig’s argument; but to be fair to Pirsig, Barrow wrote this almost 2 decades after Pirsig’s book.
 
In the same vein, Pirsig then goes on to discuss Poincare’s Foundations of Science (which I haven’t read), specifically talking about Euclid’s famous fifth postulate concerning parallel lines never meeting, and how it created problems because it couldn’t be derived from the other axioms, yet didn’t seem as self-evident as them. Euclid himself was apparently uneasy about it, avoiding its use for as long as he could (his first 28 propositions don’t depend on it).
 
It was only in the 19th Century, with the advent of Riemannian and other non-Euclidean geometries of curved surfaces, that this was resolved. According to Pirsig, it led Poincare to question the very nature of axioms.
 
Are they synthetic a priori judgements, as Kant said? That is, do they exist as a fixed part of man’s consciousness, independently of experience and uncreated by experience? Poincare thought not…
Should we therefore conclude that the axioms of geometry are experimental verities? Poincare didn’t think that was so either…
Poincare concluded that the axioms of geometry are conventions, our choice among all possible conventions is guided by experimental facts, but it remains free and is limited only by the necessity of avoiding all contradiction.

 
I have my own view on this, but it’s worth seeing where Pirsig goes with it:
 
Then, having identified the nature of geometric axioms, [Poincare] turned to the question, Is Euclidean geometry true or is Riemann geometry true?
He answered, The question has no meaning.
[One might] as well ask whether the metric system is true and the avoirdupois system is false; whether Cartesian coordinates are true and polar coordinates are false. One geometry can not be more true than another; it can only be more convenient. Geometry is not true, it is advantageous.
 
I think this is a false analogy, because the adoption of a system of measurement (i.e. units) and even the adoption of which base arithmetic one uses (decimal, binary, hexadecimal being the most common) are all conventions.
 
So why wouldn’t I say the same about axioms? Pirsig and Poincare are right inasmuch as both Euclidean and Riemannian geometry are true, because each depends on the curvature of the space one is describing. They are both used to describe physical phenomena. In fact, in a twist that Pirsig probably wasn’t aware of, Einstein used Riemannian geometry to describe gravity in a way that Newton could never have envisaged, because Newton only had Euclidean geometry at his disposal. Einstein formulated a mathematical expression of gravity that is dependent on the geometry of spacetime, and it has been empirically verified to explain phenomena that Newton’s theory couldn’t. Of course, there are also limits to what Einstein’s equations can explain, so there are more mathematical laws still to uncover.
 
But where Pirsig states that we adopt the axiom that is convenient, I contend that we adopt the axiom that is necessary, because axioms inherently expand the area of mathematics we are investigating. This is a consequence of Godel’s Incompleteness Theorem, which states that there are limits to what any axiom-based, consistent, formal system of mathematics can prove to be true. Godel himself pointed out that the resolution lies in expanding the system by adopting further axioms. The expansion of Euclidean to non-Euclidean geometry is a case in point. The example I like to give is the adoption of √-1 = i, which gave us complex algebra and the means to mathematically describe quantum mechanics. In both cases, the axioms allowed us to solve problems that had hitherto been impossible to solve. So it’s not just a convenience but a necessity.
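To make that example concrete (my illustration, not Pirsig’s or Poincare’s): the single axiom i² = −1 extends the real numbers to the complex numbers, and from it follows Euler’s formula, without which the wave functions of quantum mechanics couldn’t even be written down.

    i² = −1 (equivalently, i = √−1)
    e^(iθ) = cos θ + i sin θ (Euler’s formula)
    ψ(x,t) = A e^(i(kx − ωt)) (a free-particle wave function in QM)

A problem with no solution over the reals, like x² + 1 = 0, becomes solvable, and a whole new landscape (complex analysis) opens up; that’s expansion by axiom, not mere convenience.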
 
I know I’ve belaboured a point, but both of these – non-Euclidean geometry and complex algebra – were at one time radical ideas in the mathematical world, and they ultimately led to radical ideas in the scientific world: general relativity and quantum mechanics. Are they ghosts? Perhaps ghost is an apt metaphor, given that they appear timeless and have outlived their discoverers, not to mention the rest of us. Most physicists and mathematicians tacitly believe that they not only continue to exist beyond us, but existed prior to us, and possibly the Universe itself.
 
I will briefly mention another radical idea, which I borrowed from Schrodinger, though I drew conclusions that he didn’t formulate: that consciousness exists in a constant present, and hence creates the psychological experience of the flow of time, because everything else becomes the past as soon as it happens. I contend that only consciousness provides a reference point for the past, present and future that we all take for granted.

Sunday, 15 October 2023

What is your philosophy of life and why?

This was a question I answered on Quora, and, without specifically intending to, I brought together 2 apparently unrelated topics. The reason I discuss language is because it’s so intrinsic to our identity, not only as a species, but as an individual within our species. I’ve written an earlier post on language (in response to a Philosophy Now question-of-the-month), which has a different focus, and I deliberately avoided referencing that.
 
A ‘philosophy of life’ can be represented in many ways, but my perspective is within the context of relationships, in all their variety and manifestations. It also includes a recurring theme of mine.



First of all, what does one mean by ‘philosophy of life’? For some people, it means a religious or cultural way-of-life. For others it might mean a category of philosophy, like post-modernism or existentialism or logical positivism.
 
For me, it means a philosophy on how I should live, and on how I both look at and interact with the world. This is not only dependent on my intrinsic beliefs that I might have grown up with, but also on how I conduct myself professionally and socially. So it’s something that has evolved over time.
 
I think that almost all aspects of our lives are dependent on our interactions with others, which start right from when we are born and really only end when we die. And the thing is that everything we do, including all our failures and successes, occurs in this context.
 
Just to underline the significance of this dependence, we all think in a language, and we all gain our language from our milieu at an age before we can rationally and critically think, especially compared to when we mature. In fact, language is analogous to software that gets downloaded from generation to generation, so that knowledge can also be passed on and accumulated over ages, which has given rise to civilizations and disciplines like science, mathematics and art.
 
This all sounds off-topic, but it’s core to who we are and it’s what distinguishes us from other creatures. Language is also key to our relationships with others, both socially and professionally. But I take it further, because I’m a storyteller and language is the medium I use to create a world inside your head, populated by characters who feel like real people and who interact in ways we find believable. More than any other activity, this illustrates how powerful language is.
 
But it’s the necessity of relationships in all their manifestations that determines how one lives one’s life. As a consequence, my philosophy of life centres around one core value and that is trust. Without trust, I believe I am of no value. But more than that, trust is the foundational value upon which a society either flourishes or devolves into a state of oppression with its antithesis, rebellion.

 

Saturday, 14 January 2023

Why do we read?

This is almost the same title as a book I bought recently (Why We Read), containing 70 short essays on the subject, featuring scholars of all stripes: historians, philosophers and, of course, authors. It even includes scientists: Paul Davies, Richard Dawkins and Carlo Rovelli being 3 I’m familiar with.
 
One really can’t overstate the importance of the written word, because, oral histories aside, it allows us to extend memories across generations and accumulate knowledge over centuries that has led to civilisations and technologies that we all take for granted. By ‘we’, I mean anyone reading this post.
 
Many of the essayists write from their personal experiences and I’ll do the same. The book, edited by Josephine Greywoode and published by Penguin, specifically says on the cover in small print: 70 Writers on Non-Fiction; yet many couldn’t help but discuss fiction as well.
 
And books are generally divided between fiction and non-fiction, and I believe we read them for different reasons, and I wouldn’t necessarily consider one less important than the other. I also write fiction and non-fiction, so I have a particular view on this. Basically, I read non-fiction in order to learn and I read fiction for escapism. Both started early for me and I believe the motivation hasn’t changed.
 
I started reading extra-curricular books from about the age of 7 or 8, involving creatures mostly, and I even asked for an encyclopaedia for Christmas at around that time, which I read enthusiastically. I devoured non-fiction books, especially if they dealt with the natural world. But at the same time, I read comics, remembering that we didn’t have TV at that time, which was only just beginning to emerge.
 
I think one of the reasons that boys read less fiction than girls these days is that comics have effectively disappeared, being replaced by video games. And the modern comics that I have seen don’t even contain a complete narrative. Nevertheless, there are graphic novels that I consider brilliant: Neil Gaiman’s Sandman series and Hayao Miyazaki’s Nausicaa of the Valley of the Wind being standouts. Watchmen by Alan Moore also deserves a mention.
 
So the escapism also started early for me, in the world of superhero comics, and I started writing my own scripts and drawing my own characters pre-high school.
 
One of the essayists in the collection, Niall Ferguson (author of Doom) starts off by challenging a modern paradigm (or is it a meme?) that we live in a ‘simulation’, citing Oxford philosopher, Nick Bostrom, writing in the Philosophical Quarterly in 2003. Ferguson makes the point that reading fiction is akin to immersing the mind in a simulation (my phrasing, not his).
 
In fact, a dream is very much like a simulation, and, as I’ve often said, the language of stories is the language of dreams. But here’s the thing: the motivation for writing fiction, for me, is the same as the motivation for reading it: escapism. Whether reading or writing, you enter a world that only exists inside your head. The ultimate solipsism.

And this surely is a miracle of written language: that we can conjure a world with characters who feel real and elicit emotional responses, while we follow their exploits, failures, love life and dilemmas. It takes empathy to read a novel, and tests have shown that people’s empathy increases after they read fiction. You engage with the character and put yourself in their shoes. It’s one of the reasons we read.
 
 
Addendum: I would recommend the book, by the way, which contains better essays than mine, all with disparate, insightful perspectives.
 

Wednesday, 7 September 2022

Ontology and epistemology; the twin pillars of philosophy

 I remember in my introduction to formal philosophy that there were 5 branches: ontology, epistemology, logic, aesthetics and ethics. Logic is arguably subsumed under mathematics, which has a connection with ontology and epistemology through physics, and ethics is part of all our lives, from politics to education to social and work-related relations to how one should individually live. Aesthetics is like an orphan in this company, yet art is imbued in all cultures in so many ways, it is unavoidable.
 
However, if you read about Western philosophy, the focus is often on epistemology and its close relation, if not utter dependence, on ontology. Why dependence? Because you can’t have knowledge of something without inferring its existence, even if the existence is purely abstract.
 
There are so many facets to this, that it’s difficult to know where to start, but I will start with Kant because he argued that we can never know ‘the-thing-in-itself’, only a perception of it, which, in a nutshell, is the difference between ontology and epistemology.
 
We need some definitions, and ontology is dictionary defined as the ‘nature of being’, while epistemology is ‘theory of knowledge’, and with these definitions, one can see straightaway the relationship, and Kant’s distillation of it.
 
Of course, one can also see how science becomes involved, because science, at its core, is an epistemological endeavour. In reading and researching this topic, I’ve come to the conclusion that, though science and philosophy have common origins in Western scholarship, going back to Plato, they’ve gone down different paths.
 
If one looks at the last century, which included the ‘golden age of physics’, in parallel with the dominant philosophical paradigm, heavily influenced, if not initiated, by Wittgenstein, we see that the difference can be definitively understood in terms of language. Wittgenstein effectively redefined epistemology as how we frame the world with language, while science, and physics in particular, frames the world in mathematics. I’ll return to this fundamental distinction later.
 
In my last post, I went to some lengths to argue that a fundamental assumption among scientists is that there is an ‘objective reality’. By this, I mean that they generally don’t believe in ‘idealism’ (like Donald Hoffman) which is the belief that objects don’t exist when you don’t perceive them (Hoffman describes it as the same experience as using virtual-reality goggles). As I’ve pointed out before, this is what we all experience when we dream, which I contend is different to the experience of our collective waking lives. It’s the word, ‘collective’, that is the key to understanding the difference – we share waking experiences in a way that is impossible to corroborate in a dream.
 
However, I’ve been reading a lot of posts on Quora by physicists, Viktor T Toth and Mark John Fernee (both of whom I’ve cited before and both of whom I have a lot of respect for). And they both point out that much of what we call reality is observer dependent, which makes me think of Kant.
 
Fernee, when discussing quantum mechanics (QM), keeps coming back to the ‘measurement problem’ and the role of the observer, and how it’s hard to avoid. He discusses the famous ‘Wigner’s friend’ thought experiment, which is an extension of the famous Schrodinger’s cat thought experiment, in which the cat is in a superposition of 2 states: dead and alive. Eugene Wigner developed a thought experiment whereby 2 experimenters could get contradictory results. Its relevance to this topic is that the ontology is completely dependent on the observer. My understanding of the scenario is that it subverts the distinction between QM and classical physics.
 
I’ve made the point before that a photon travelling across the Universe from some place and time closer to its beginning (like the CMBR) is always in the future of whatever it interacts with, like, for example, an ‘observer’ on Earth. The point I’d make is that billions of years of cosmological time have passed, so in another sense, the photon comes from the observer’s past, who became classical a long time ago. For the photon, time is always zero, but it links the past to the present across almost the entire lifetime of the observable universe.
 
Quantum mechanics, more than any other field, demonstrates the difference between ontology and epistemology, and this was discussed in another post by Fernee. Epistemologically, QM is described mathematically, and is so successful that we can ignore what it means ontologically. This has led to diverse interpretations, from the many worlds interpretation (MWI) to so-called ‘hidden variables’ to the well known ‘Copenhagen interpretation’.
 
Fernee, in particular, discusses MWI, not that he’s an advocate, but because it represents an ontology that no one can actually observe. Both Toth and Fernee point out that the wave function, which arguably lies at the heart of QM, is never observed, and neither is its ‘decoherence’ (which is the measurement problem by another name), which leads many to contend that it’s a mathematical fiction. I argue that it exists in the future, and that only classical physics is actually observed. QM deals with probabilities, which is purely epistemological. After the ‘observation’, Schrodinger’s equation, which describes the wave function, ceases to have any meaning. One is in the future and the observation becomes the past as soon as it happens.
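To spell out the distinction (the standard pair of formulas, which the Hossenfelder video linked in the addendum below also covers): between observations the wave function evolves deterministically under Schrodinger’s equation, but all we can ever extract from it are probabilities, via the Born rule.

    iħ ∂ψ/∂t = Ĥψ (Schrodinger’s equation: deterministic evolution of the wave function ψ)
    P = |ψ|² (Born rule: the probability of observing a given outcome)

The first is the epistemically pristine mathematics; the second is the only point of contact with what we actually observe.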
 
I don’t know enough about it, but I think entanglement is the key to its ontology. Fernee points out in another post that entanglement is to do with conservation, whether it be the conservation of momentum or, more usually, the conservation of spin. It leads to what is called non-locality, according to Bell’s Theorem, which means it appears to break with relativistic physics. I say ‘appears’, because it’s well known that it can’t be used to send information faster than light; so, in reality, it doesn’t break relativity. Nevertheless, it led to Einstein’s famous quote about ‘spooky action at a distance’ (which is what non-locality means in layperson’s terms).
 
But entanglement is tied to the wave function decoherence, because that’s when it becomes manifest. It’s crucial to appreciate that entangled particles are described by the same wave function and that’s the inherent connection. It led Schrodinger to claim that entanglement is THE defining feature of QM; in effect, it’s what separates QM from classical physics.
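The textbook illustration (mine, not necessarily Fernee’s) is the spin singlet: 2 particles with total spin zero described by one shared wave function,

    |ψ⟩ = (|↑↓⟩ − |↓↑⟩)/√2

Measure one particle’s spin as ‘up’ and the other is instantly ‘down’, conserving total spin, however far apart they are; yet, as noted above, no usable information travels between them.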
 
I think QM is the best demonstration of Kant’s prescient claim that we can never know the-thing-in-itself, but only our perception of it. QM is a purely epistemological theory – the ontology it describes still eludes us.
 
But relativity theory also suggests that reality is observer dependent. Toth points out that even the number of particles that are detected in some scenarios are dependent on the frame of reference of the observer. This has led at least one physicist (on Quora) to argue that the word ‘particle’ should be banned from all physics text books – there are only fields. (Toth is an expert on QFT, quantum field theory, and argues that particles are a manifestation of QFT.) I won’t elaborate as I don’t really know enough, but what’s relevant to this topic is that time and space are observer dependent in relativity, or appear to be.
 
In a not-so-recent post, I described how different ‘observers’ could hypothetically ‘see’ the same event happening hundreds of years apart, just because they are walking across a street in opposite directions. I use quotation marks, because it’s all postulated mathematically, and, in fact, relativity theory prevents them from observing anything outside their past and future light cones. I actually discussed this with Fernee, and he pointed out that it’s to do with causality. Where there is no causal relation between events, we can’t determine an objective sequence let alone one relevant to a time frame independent of us (like a cosmic time frame). And this is where I personally have an issue, because, even though we can’t observe it or determine it, I argue that there is still an objective reality independently of us.
 
In relativity there is something called proper time (τ), which is the time in the frame of reference of the observer. Since the spacetime interval is invariant, it would logically follow that where you have proper time you should have an analogous ‘proper space’, and indeed there is: proper length, the length of an object in its own rest frame, though it gets far less attention. I also think there is a ‘true simultaneity’, but no one else does, so maybe I’m wrong.
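For reference, the invariant quantity is the spacetime interval, from which both proper time and proper length derive:

    s² = (cΔt)² − (Δx)² (the invariant spacetime interval)
    Δτ = Δt/γ (proper time: time measured in the clock’s own rest frame)
    L = L₀/γ (length contraction, where L₀ is the proper length in the object’s rest frame)
    γ = 1/√(1 − v²/c²)

Both τ and L₀ are frame-independent; it’s the coordinate quantities Δt and L that are observer dependent.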
 
There is, however, something called the Planck length, and someone asked Toth if this changed relativistically with the Lorentz transformation, like all other ‘rulers’ in relativity physics. He said that a version of relativity was formulated that made the Planck length invariant, but it created problems and didn’t agree with experimental data. What I find interesting about this is that Planck’s constant, h, literally determines the size of atoms, and one doesn’t expect atoms to change size relativistically (but maybe they do). The point I’d make is that these changes are observer dependent, and I’d argue that there is a Planck length that is observer independent, which is the case when there is no observer.
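The sense in which h determines atomic size: the Bohr radius, which sets the scale of the hydrogen atom, is built from ħ = h/2π,

    a₀ = 4πε₀ħ²/(mₑe²) ≈ 5.29 × 10⁻¹¹ m

so doubling ħ would make atoms 4 times larger. Planck’s constant really is woven into the size of everything around us.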
 
This has become a longwinded way of explaining how 20th Century science has effectively taken this discussion away from philosophy, but it’s rarely acknowledged by philosophers, who take refuge in Wittgenstein’s conclusion that language effectively determines what we can understand of the world, because we think in a language and that limits what we can conceptualise. And he’s right, until we come up with new concepts requiring new language. Everything I’ve just discussed was completely unknown more than 120 years ago, for which we had no language, let alone concepts.
 
Some years ago, I reviewed a book by Don Cupitt titled, Above Us Only Sky, which was really about religion in a secular world. But, in it, Cupitt repeatedly argued that things only have meaning when they are ‘language-wrapped’ (his term) and I now realise that he was echoing Wittgenstein. However, there is a context in which language is magical, and that is when it creates a world inside your head, called a story.
 
I’ve been reading Bryan Magee’s The Great Philosophers, based on a 1987 series of televised dialogues with various academics, which started with Plato and ended with Wittgenstein. He discussed Plato with Myles Burnyeat, Professor of Ancient Philosophy at Cambridge. Naturally, they discussed Socrates, the famous dialogues and the more famous Republic, but towards the end they turned to the Timaeus, which was a work on ‘mathematical science’, according to Burnyeat, that influenced Aristotle and Ptolemy.
 
It's worth quoting their last exchange verbatim:
 
Magee: For us in the twentieth century there is something peculiarly contemporary about the fact that, in the programme it puts forward for acquiring an understanding of the world, Plato’s philosophy gives a central role to mathematical physics.
 
Burnyeat: Yes. What Plato aspired to do, modern science has actually done. And so there is a sort of innate sympathy between the two which does not hold for Aristotle’s philosophy. (My emphasis)


Addendum: This is a very good exposition on the 'measurement problem' by Sabine Hossenfelder, which also provides a very good synopsis of the wave function (ψ), Schrodinger's equation and the Born rule.

Sunday, 22 May 2022

We are metaphysical animals

 I’m reading a book called Metaphysical Animals (How Four Women Brought Philosophy Back To Life). The four women were Mary Midgley, Iris Murdoch, Philippa Foot and Elizabeth Anscombe. The first two I’m acquainted with and the last two, not. They were all at Oxford during the War (WW2) at a time when women were barely tolerated in academia and had to be ‘chaperoned’ to attend lectures. Also a time when some women students ended up marrying their tutors. 

The book is authored by Clare Mac Cumhaill and Rachael Wiseman, both philosophy lecturers who became friends with Mary Midgley in her final years (Mary died in 2018, aged 99). The book is part biographical of all 4 women and part discussion of the philosophical ideas they explored.

 

Bringing ‘philosophy back to life’ is an allusion to the response (backlash is too strong a word) to the empiricism, logical positivism and general rejection of metaphysics that had taken hold of English philosophy, also known as analytical philosophy. Iris spent time in postwar Paris where she was heavily influenced by existentialism and Jean-Paul Sartre, in particular, whom she met and conversed with. 

 

If I was to categorise myself, I’m a combination of analytical philosopher and existentialist, which I suspect many would see as a contradiction. But this isn’t deliberate on my part – more a consequence of pursuing my interests, which are science on one hand (with a liberal dose of mathematical Platonism) and how-to-live a ‘good life’ (to paraphrase Aristotle) on the other.

 

Iris was intellectually seduced by Sartre’s exhortation: “Man is nothing else but that which he makes of himself”. But as her own love life fell apart, along with all its inherent dreams and promises, she found Sartre’s implicit doctrine, of standing solitarily and independently of one’s milieu, difficult to put into practice. I’m not sure if Iris was already a budding novelist at this stage of her life, but anyone who writes fiction knows that this is what it’s all about: the protagonist sailing their lone ship on a sea full of icebergs and other vessels, all of which are outside their control. Your moral compass, in particular, is often tested. Existentialism can be seen as an attempt to rise above this, but most of us don’t.

 

Not surprisingly, Wittgenstein looms large in many of the pages, and at least one of the women, Elizabeth Anscombe, had significant interaction with him. With Wittgenstein comes an emphasis on language, which has arguably determined the path of philosophy since. I’m not a scholar of Wittgenstein by any stretch of the imagination, but one thing he taught, or that people took from him, was that the meaning we give to words is a consequence of how they are used in ordinary discourse. Language requires a widespread consensus to actually work. It’s something we rarely think about but we all take for granted, otherwise there would be no social discourse or interaction at all. There is an assumption that when I write these words, they have the same meaning for you as they do for me, otherwise I am wasting my time.

 

But there is a way in which language is truly powerful, and I have done this myself. I can write a passage that creates a scene inside your mind complete with characters who interact and can cause you to laugh or cry, or pretty much any other emotion, as if you were present; as if you were in a dream.

 

There are a couple of specific examples in the book which illustrate Wittgenstein’s influence on Elizabeth and how she used them in debate. They are both topics I have discussed myself without knowing of these previous discourses.

 

In 1947, so just after the war, Elizabeth presented a paper to the Cambridge Moral Sciences Club, which she began with the following disclosure:

 

Everywhere in this paper I have imitated Dr Wittgenstein’s ideas and methods of discussion. The best that I have written is a weak copy of some features of the original, and its value depends only on my capacity to understand and use Dr Wittgenstein’s work.

 

The subject of her talk was whether one can truly talk about the past, which goes back to the pre-Socratic philosopher, Parmenides. In her own words, paraphrasing Parmenides, ‘to speak of something past’ would then be to ‘point our thought’ at ‘something there’, but out of reach. Bringing Wittgenstein into the discussion, she claimed that Parmenides’ specific paradox about the past arose ‘from the way that thought and language connect to the world’.

 

We apply language to objects by naming them, but, in the case of the past, the objects no longer exist. She attempts to resolve this epistemological dilemma by discussing the nature of time as we experience it, which is like a series of pictures that move on a timeline while we stay in the present. This is analogous to my analysis that everything we observe becomes the past as soon as it happens, which is exemplified every time someone takes a photo, but we remain in the present – the time for us is always ‘now’.

 

She explains that the past is a collective recollection, preserved in documents and photos, so it’s dependent on a shared memory. I would say that this is what separates our recollection of a real event from a dream, which is solipsistic and not shared with anyone else. But it doesn’t explain why the past appears fixed and the future unknown, which she also attempted to address. But I don’t think this can be addressed without discussing physics.

 

Most physicists will tell you that the asymmetry between the past and future can only be explained by the second law of thermodynamics, but I disagree. I think it is described, if not explained, by quantum mechanics (QM) where the future is probabilistic with an infinitude of possible paths and classical physics is a probability of ONE because it’s already happened and been ‘observed’. In QM, the wave function that gives the probabilities and superpositional states is NEVER observed. The alternative is that all the futures are realised in alternative universes. Of course, Elizabeth Anscombe would know nothing of these conjectures.

 

But I would make the point that language alone does not resolve this. Language can only describe these paradoxes and dilemmas but not explain them.

 

Of course, there is a psychological perspective to this, which many people claim, including physicists, gives the only sense of time passing. According to them, it’s fixed: past, present and future; and our minds create this distinction. I think our minds create the distinction because only consciousness creates a reference point for the present. Everything non-sentient is in a causal relationship that doesn’t sense time. Photons of light, for example, exist in zero time, yet they determine causality. Only light separates everything in time as well as space. I’ve gone off-topic.

 

Elizabeth touched on the psychological aspect, possibly unintentionally (I’ve never read her paper, so I could be wrong) that our memories of the past are actually imagined. We use the same part of the brain to imagine the past as we do to imagine the future, but again, Elizabeth wouldn’t have known this. Nevertheless, she understood that our (only) knowledge of the past is a thought that we turn into language in order to describe it.

 

The other point I wish to discuss is a famous debate she had with C.S. Lewis. This is quite something, because back then, C.S. Lewis was a formidable intellectual figure. Elizabeth’s challenge was all the more remarkable because Lewis’s argument appeared on the surface to be very sound. Lewis argued that the ‘naturalist’ position was self-refuting if it was dependent on ‘reason’, because reason by definition (not his terminology) is based on the premise of cause and effect and human reason has no cause. That’s a simplification, nevertheless it’s the gist of it. Elizabeth’s retort:

 

What I shall discuss is this argument’s central claim that a belief in the validity of reason is inconsistent with the idea that human thought can be fully explained as the product of non-rational causes.

 

In effect, she argued that reason is what humans do perfectly naturally, even if the underlying ‘cause’ is unknown. Not knowing the cause does not make the reasoning irrational nor unnatural. Elizabeth specifically cited the language that Lewis used. She accused him of confusing the concepts of “reason”, “cause” and “explanation”.

 

My argument would be subtly different. For a start, I would contend that by ‘reason’, he meant ‘logic’, because drawing conclusions based on cause and effect is logic, even if the causal relations (under consideration) are assumed or implied rather than observed. And here I contend that logic is not a ‘thing’ – it’s not an entity; it’s an action – something we do. In the modern age, machines perform logic; sometimes better than we do.

 

Secondly, I would ask Lewis, does he think reason only happens in humans and not other animals? I would contend that animals also use logic, though without language. I imagine they’d visualise their logic rather than express it in vocal calls. The difference with humans is that we can perform logic at a whole different level, but the underpinnings in our brains are surely the same. Elizabeth was right: not knowing its physical origins does not make it irrational; they are separate issues.

 

Elizabeth had a strong connection to Wittgenstein right up to his death. She worked with him on a translation and edit of Philosophical Investigations, and he bequeathed her a third of his estate and a third of his copyright.

 

It’s apparent from Iris’s diaries and other sources that Elizabeth and Iris fell in love at one point in their friendship, which caused them both a lot of angst and guilt because of their Catholicism. Despite marrying, Iris later had an affair with Pip (Philippa).

 

Despite my discussion of just 2 of Elizabeth’s arguments, I don’t have the level of erudition necessary to address most of the topics that these 4 philosophers published in. Just reading the 4-page Afterword, it’s clear that I haven’t even brushed the surface of what they achieved. Nevertheless, I have a philosophical perspective that I think finds some resonance with their mutual ideas.

 

I’ve consistently contended that the starting point for my philosophy is that for each of us individually, there is an inner and outer world. It even dictates the way I approach fiction. 

 

In the latest issue of Philosophy Now (Issue 149, April/May 2022), Richard Oxenberg, who teaches philosophy at Endicott College in Beverly, Massachusetts, wrote an article titled, What Is Truth? wherein he describes an interaction between 2 people, but only from a purely biological and mechanical perspective, and asks, ‘What is missing?’ Well, even though he doesn’t spell it out, what is missing is the emotional aspect. Our inner world is dominated by emotional content and one suspects that this is not unique to humans. I’m pretty sure that other creatures feel emotions like fear, affection and attachment. What’s more I contend that this is what separates, not just us, but the majority of the animal kingdom, from artificial intelligence.

 

But humans are unique, even among other creatures, in our ability to create an inner world every bit as rich as the one we inhabit. And this creates a dichotomy that is reflected in our division of arts and science. There is a passage on page 230 (where the authors discuss R.G. Collingwood’s influence on Mary) that provides an unexpected definition.

 

Poetry, art, religion, history, literature and comedy are all metaphysical tools. They are how metaphysical animals explore, discover and describe what is real (and beautiful and good). (My emphasis.)

 

I thought this summed up what they mean with their coinage, metaphysical animals, which titles the book, and arguably describes humanity’s most unique quality. Descriptions of metaphysics vary and elude precise definition but the word, ‘transcendent’, comes to mind. By which I mean it’s knowledge or experience that transcends the physical world and is most evident in art, music and storytelling, but also includes mathematics in my Platonic worldview.


 

Footnote: I should point out that certain chapters in the book give considerable emphasis to moral philosophy, which I haven’t even touched on, so another reader might well discuss other perspectives.


Monday, 18 May 2020

An android of the seminal android storyteller

I just read a very interesting true story about an android built in the early 2000s based on the renowned sci-fi author, Philip K Dick, both in personality and physical appearance. It was displayed in public at a few prominent events where it interacted with the public in 2005, then was lost on a flight between Dallas and Las Vegas in 2006, and has never been seen since. The book is called Lost In Transit; The Strange Story of the Philip K Dick Android by David F Duffy.

You have to read the back cover to know it’s non-fiction published by Melbourne University Press in 2011, so surprisingly a local publication. I bought it from my local bookstore at a 30% discount price as they were closing down for good. They were planning to close by Good Friday but the COVID-19 pandemic forced them to close a good 2 weeks earlier and I acquired it at the 11th hour, looking for anything I might find interesting.

To quote the back cover:

David F Duffy was a postdoctoral fellow at the University of Memphis at the time the android was being developed... David completed a psychology degree with honours at the University of Newcastle [Australia] and a PhD in psychology at Macquarie University, before his fellowship at the University of Memphis, Tennessee. He returned to Australia in 2007 and lives in Canberra with his wife and son.

The book is written chronologically and is based on extensive interviews with the team of scientists involved, as well as Duffy’s own personal interaction with the android. He had an insider’s perspective as a cognitive psychologist who had access to members of the team while the project was active. Like everyone else involved, he is a bit of a sci-fi nerd with a particular affinity and knowledge of the works of Philip K Dick.

My specific interest is in the technical development of the android and how its creators attempted to simulate human intelligence. As a cognitive psychologist, with professionally respected access to the team, Duffy is well placed to provide some esoteric knowledge to an interested bystander like myself.

There were effectively 2 people responsible (or 2 team leaders), David Hanson and Andrew Olney, who were brought together by Professor Art Graesser, head of the Institute of Intelligent Systems, a research lab in the psychology building at the University of Memphis (hence the connection with the author).

Hanson is actually an artist, and his specialty was building ‘heads’ with humanlike features and humanlike abilities to express facial emotions. His heads included mini-motors that pulled on a ‘skin’, which could mimic a range of facial movements, including talking.

Olney developed the ‘brains’ of the android that actually resided on a laptop and was connected by wires going into the back of the android’s head. Hanson’s objective was to make an android head that was so humanlike that people would interact with it on an emotional and intellectual level. For him, the goal was to achieve ‘empathy’. He had made at least 2 heads before the Philip K Dick project.

Even though the project got the ‘blessing’ of Dick’s daughters, Laura and Isa, and access to an inordinate amount of material, including transcripts of extensive interviews, they had mixed feelings about the end result, and, tellingly, they were ‘relieved’ when the head disappeared. It suggests that it’s not the way they wanted him to be remembered.

In a chapter called Life Inside a Laptop, Duffy gives a potted history of AI, specifically in relation to the Turing test, which challenges someone to distinguish an AI from a human. He also explains the 3 levels of processing that were used to create the android’s ‘brain’. The first level was what Olney called ‘canned’ answers, which were pre-recorded answers to obvious questions and interactions, like ‘Hi’, ‘What’s your name?’, ‘What are you?’ and so on. Another level was ‘Latent Semantic Analysis’ (LSA), which was originally developed in a lab in Colorado, with close ties to Graesser’s lab in Memphis, and was the basis of Graesser’s pet project, ‘AutoTutor’, with Olney as its ‘chief programmer’. AutoTutor was an AI designed to answer technical questions as a ‘tutor’ for students in subjects like physics.

To create the Philip K Dick database, Olney downloaded the whole of Dick’s opus, plus a vast collection of transcribed interviews from later in his life. The author conjectures that ‘There is probably more dialogue in print of interviews with Philip K Dick than any other person, alive or dead.’

The third layer ‘broke the input (the interlocutor’s side of the dialogue) into sections and looked for fragments in the dialogue database that seemed relevant’ (to paraphrase Duffy). Duffy gives a cursory explanation of how LSA works – a mathematical matrix manipulated with vector algebra – which is probably a little too esoteric for this post, though I’ve sketched the gist of it below.
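
For the technically curious, here’s a toy illustration of the core LSA trick, emphatically not the Memphis team’s actual code (the corpus and query are invented): represent each stored fragment as a word-count vector, compress the term-by-fragment matrix with a truncated singular value decomposition, then retrieve the fragment whose compressed vector lies closest to the query’s.

```python
# A toy sketch of LSA-style retrieval: count matrix -> truncated SVD
# -> nearest-neighbour lookup in the compressed 'semantic' space.
import numpy as np

# Invented stand-ins for the Dick dialogue database.
fragments = [
    "reality is that which does not go away when you stop believing in it",
    "the empathy box connects every user to every other user",
    "a time traveller remembers futures that never happened",
]

# Term-by-fragment count matrix: one row per word, one column per fragment.
vocab = sorted({w for f in fragments for w in f.split()})
A = np.array([[f.split().count(w) for f in fragments] for w in vocab], float)

# Truncated SVD projects words and fragments into a shared low-rank space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                # keep the 2 strongest dimensions
frag_vecs = Vt[:k].T                 # one k-dimensional vector per fragment

def most_relevant(query: str) -> str:
    """Fold the query into the reduced space; return the nearest fragment."""
    q = np.array([query.split().count(w) for w in vocab], float)
    q_k = np.diag(1.0 / s[:k]) @ U[:, :k].T @ q
    sims = frag_vecs @ q_k / (
        np.linalg.norm(frag_vecs, axis=1) * np.linalg.norm(q_k) + 1e-12
    )
    return fragments[int(np.argmax(sims))]

print(most_relevant("what is reality"))
```

The real system worked over a vastly larger corpus, but the retrieval principle – nearest neighbour in a compressed vector space – is, as I understand it, the one Duffy is gesturing at.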

In practice, this search-and-synthesise approach could create a self-referencing loop, where the android would endlessly riff on a subject, going off on tangents that sounded cogent but never stopped. To overcome this, Olney developed a ‘kill switch’ that cleared the ‘buffer’ he could see building up on his laptop. At one display at Comic-Con (July 2005), as part of the promotion for A Scanner Darkly (a rotoscope movie by Richard Linklater, starring Keanu Reeves), Hanson had to present the android without Olney and couldn’t get the kill switch to work, so he cut the audio while the mouth kept moving and asked for the next question. The android simply continued with its monolithic monologue, which had no relevance to any question at all. I think it was its last public appearance before it was lost. Dick’s daughters, Laura and Isa, were in the audience and they were not impressed.

It’s a very informative and insightful book, presented like a documentary without video, capturing a very quirky, unique and intellectually curious project. There is a lot of discussion about whether we can produce an AI that can truly mimic human intelligence. For me, the pertinent word in that phrase is ‘mimic’, because I believe that’s the best we can do, as opposed to having an AI that actually ‘thinks’ like a human. 

In many parts of the book, Duffy compares what Graesser’s team was trying to do with LSA to how we learn language as children, creating a memory store of words, phrases and stock responses based on our interactions with others and the world at large. It’s a personal prejudice of mine, but I think that words and phrases have a ‘meaning’ to us that an AI can never capture.

I’ve contended before that language for humans is like ‘software’, in that it is ‘downloaded’ from generation to generation. I believe this is unique to the human species, and it goes further than communication, which is its obvious genesis. It’s what we literally think in. The human brain can connect and manipulate concepts in all sorts of contexts that go far beyond the simple need to tell someone what we want them to do in a given situation, or to ask what they did with their time the day before or last year or whenever. We can relate concepts that have a spiritual connection, or are mathematical, or are stories. In other words, we can converse on topics that relate not just to physical objects, but are products of pure imagination.

Any android follows a set of algorithms designed to respond to human-generated dialogue, but, despite appearances, the android has no idea what it’s talking about. Some of the sample dialogue that Duffy presented in his book drifted into gibberish as far as I could tell, and that didn’t surprise me.

I’ve explored the idea of a very advanced AI in my own fiction, where ‘he’ became a prominent character in the narrative. But he (yes, I gave him a gender) was often restrained by rules. He can converse on virtually any topic because he has a Google-like database, and he makes logical sense of someone’s vocalisations; if they are not logical, he’s quick to point it out. I play cognitive games with him and his main interlocutor, because they have a symbiotic relationship: they spend so much time together that they develop a psychological interdependence that’s central to the narrative. It’s fiction, but even in my fiction I see a subtle difference: he thinks and talks so well he almost passes for human, but he is a piece of software that can make logical deductions based on inputs and past experiences. Of course, we do that as well, and we do it so well it separates us from other species. But we also have empathy, not only with other humans but with other species. Even in my fiction, the AI doesn’t display empathy, though he’s been programmed to be ‘loyal’.

Duffy also talks about the ‘uncanny valley’, which I’ve discussed before. Apparently, Hanson believed it was a ‘myth’ with no scientific data to support it, and Duffy appears to agree. But according to a New Scientist article I read in Jan 2013 (by Joe Kloc, a New York correspondent), MRI studies tell another story. Neuroscientists believe the effect is real and is caused by a cognitive dissonance between 3 types of empathy: cognitive, motor and emotional. Apparently, it’s emotional empathy that breaks the spell of suspended disbelief.

Hanson claims that he never saw evidence of the ‘uncanny valley’ with any of his androids. On YouTube you can watch a celebrity android called Sophia, and I didn’t see any evidence of the phenomenon with her either. But I think the reason is that none of these androids appears human enough to evoke the response. The uncanny valley is a sense of unease and disassociation we would feel because it’s unnatural; similar to seeing a ghost – human in all respects except actually being flesh and blood.

I expect that, as androids like the Philip K Dick simulation and Sophia become more commonplace, the sense of ‘unnaturalness’ would dissipate – a natural consequence of habituation. Androids in movies don’t have this effect, but then a story is already a medium of suspended disbelief.

Saturday, 5 January 2019

What makes humans unique

Now everyone pretty well agrees that there is not one single thing that makes humans unique in the animal kingdom, but most people would agree that our cognitive abilities leave the most intelligent and social of species in our wake. I say ‘most’ because there are some, possibly many, who argue that humans are not as special as we like to think, and that there is really nothing we can do that other species can’t. They would point out that other species, if not all advanced species, have language; that many produce art to attract a mate; that some build structures (like ants and beavers); and that some even use tools (like apes and crows).

However, I find it hard to imagine that other species can think and conceptualise in a language the way we do, or even communicate complex thoughts and intentions using oral utterances alone. To give other examples, I know of no other species that tells stories, keeps track of days by inventing a calendar based on heavenly constellations (like the Mayans), or even thinks about thinking. And as far as I know, we are the only species that literally invents a complex language that we teach our children (it’s not inherited) so that we can extend memories across generations. Even cultures without written scripts can do this using songs and dances and art. As someone said (John Hands in Cosmosapiens), we are the only species ‘who know that we know’. Or, as I said above, we are the only species that ‘thinks about thinking’.

Someone once pointed out to me that the only thing that separates us from all other species is the accumulation of knowledge, resulting in what we call civilization. He contended that over hundreds, even thousands, of years, this had resulted in a huge gap between us and every other sentient creature on the planet. I pointed out to him that this only happened because we had invented the written word, based on languages, which allowed us to transfer memories across generations. Other species can teach their young certain skills that may not be genetically inherited, but none can accumulate knowledge over hundreds of generations like we can. His very point demonstrated the difference he was trying to deny.

In a not-so-recent post, I delineated my philosophical ruminations into 23 succinct paragraphs, covering everything from science and mathematics to language, morality and religion. My 16th point said:



Humans have the unique ability to nest concepts within concepts ad-infinitum, which mirror the physical world.

In another post from 2012, in answer to a Question of the Month in Philosophy Now (How does language work?), I made the same point. (This is the only submission to Philosophy Now, out of 8 thus far, that didn’t get published.)

I attributed the above ‘philosophical point’ to Douglas Hofstadter, because he says something similar in his Pulitzer Prize-winning book, Gödel, Escher, Bach, but in reality I had reached this conclusion before reading it.

It’s my contention that it is this ability that separates us from other species and that has allowed all the intellectual endeavours we associate with humanity, including stories, music, art, architecture, mathematics, science and engineering.

I will illustrate with an example that we are all familiar with, yet many of us struggle to pursue at an advanced level. I’m talking about mathematics, and I choose it because I believe it also explains why many of us fail to achieve the degree of proficiency we might prefer.

With mathematics we learn modules which we then use as a subroutine in a larger calculation. To give a very esoteric example, Einstein’s general theory of relativity requires at least 4 modules: calculus, vectors, matrices and the Lorentz transformation. These all combine in a metric tensor that becomes the basis of his field equations. The thing is, if you don’t know how to deal with any one of these, you obviously can’t derive his field equations. But the point is that the human brain can turn all these ‘modules’ into black boxes and then the black boxes can be manipulated at another level.
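
For the curious, here’s what that ‘black box’ compression looks like on the page (standard textbook notation, not a derivation): all the machinery of calculus, vectors, matrices and the Lorentz transformation gets packed into the metric tensor, and the field equations can then be written in a single line.

```latex
% The metric tensor packages the geometry of spacetime:
\[ ds^2 = g_{\mu\nu}\, dx^{\mu} dx^{\nu} \]
% Einstein's field equations then relate that geometry to matter-energy:
\[ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} \]
```

Every symbol on the left-hand side is itself a black box built from the lower-level modules; the compactness of the notation is precisely the ‘nesting’ at work.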

It’s not hard to see that we do this with everything, including writing an essay like I’m doing now. I raise a number of ideas and then try to combine them into a coherent thesis. The ‘atoms’ are individual words, but no one tries to comprehend it at that level. Instead, they think in terms of the ideas that I’ve expressed in words.

We do the same with a story, which becomes like a surrogate life for the time that we are under its spell. I’ve pointed out in other posts that we only learn something new when we integrate it into what we already know. And, with a story, we are continually integrating new information into existing information. Without this unique cognitive skill, stories wouldn’t work.

But more relevant to the current topic, the medium for a story is not words but the reader’s imagination. In a movie, we short-circuit the process, which is why they are so popular.

Because a story works at the level of imagination, it’s like a dream in that it evokes images and emotions that can feel real. One could imagine that a dog or a cat could experience emotions if we gave them a virtual reality experience, but a human story has the same level of complexity that we find in everyday life and which we express in a language. The simple fact that we can use language alone to conjure up a world with characters, along with a plot that can be followed, gives some indication of how powerful language is for the human species.

In a post I wrote on storytelling back in 2012, I referenced a book by Kiwi academic, Brian Boyd, who points out that pretend play, which we all do as children (though I suspect it’s now more likely done using a videogame console), gives us cognitive skills and is the precursor to both telling and experiencing stories. The success of streaming services indicates how essential stories are to the human experience.

While it’s self-evident that mathematics and storytelling are two human endeavours that no other species can manage (even at a rudimentary level), it’s hard to see how they are related.

People involved in computer programming, or writing code, are aware of the value, even the necessity, of subroutines. Our own brain does this when we learn to do something without having to think about it, like walking. But we can do the same thing with more complex tasks, like driving a car or playing a musical instrument. The key point here is that these are all ‘motor tasks’, and we call the result ‘muscle memory’, as distinct from cognitive tasks. However, I expect it relates to cognitive tasks as well. For example, every time you say something, it’s as if the sentence has been pre-formed in your brain. We use particular phrases all the time, which are analogous to ‘subroutines’.
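
To make the analogy concrete, here’s a toy illustration of my own (the task and the function names are invented): once a practised skill is wrapped in its own routine, the caller operates at the level of the task rather than the mechanics – the programming analogue of muscle memory.

```python
# A toy illustration of delegating practised skills to 'subroutines'.
# The functions are invented stand-ins for skills we execute without
# conscious attention.

def check_mirrors() -> None:
    print("mirrors checked")          # a practised micro-skill

def change_gear(n: int) -> None:
    print(f"shifted to gear {n}")     # another; no 'attention' required

def overtake() -> None:
    # The 'conscious' level composes routines it no longer thinks about,
    # leaving attention free for the conversation in the passenger seat.
    check_mirrors()
    change_gear(4)
    check_mirrors()

overtake()
```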

I should point out that this doesn’t mean that computers ‘think’, which is a whole other topic. I’m just relating how the brain delegates tasks so it can ‘think’ about more important things. If we had to concentrate every time we took a step, we would lose the train of thought of whatever it was we were engaged in at the time; a conversation being the most obvious example.

The mathematics example I gave is not dissimilar to the idea of a ‘subroutine’. In fact, one can embed mathematical ‘modules’ in software, so it’s more than an analogy. So with mathematics we’ve effectively achieved cognitively what the brain achieves with motor skills at the subconscious level. And look where it has got us: Einstein’s general theory of relativity, which is the basis of all current theories of the Universe.

We can also think of a story in terms of modules: the individual scenes, which join together to form an episode, which in turn combine to create an overarching narrative that we can follow even when it’s interrupted.

What mathematics and storytelling have in common is that they are both examples where the whole appears to be greater than the sum of its parts. Yet we know that in both cases, the whole is made up of the parts, because we ‘process’ the parts to get the whole. My point is that only humans are capable of this.

In both cases, we mentally build a structure that seems to have no limits. The same cognitive skill that allows us to follow a story in serial form also allows us to develop scientific theories. The brain breaks things down into components and then joins them back together to form a complex cognitive structure. Of course, we do this with physical objects as well, like when we manufacture a car or construct a building, or even a spacecraft. It’s called engineering.

Sunday, 19 March 2017

The importance of purpose

A short while ago, New Scientist (Issue: 28 January 2017) had on its cover the headline, The Meaning of Life. On reading the article, titled Why am I here? (by Teal Burrell, pp. 30-33), it turned out to be about the importance to health of finding purpose in one’s life. I believe this is so essential that I despair when I see hope and opportunity deliberately curtailed, as we do with our treatment of refugees. It’s criminal – I really believe that – because it’s so fundamental to both psychological and physical health. As someone who often struggled to find purpose, this is a subject close to my heart.

As the article points out, for many people, religion provides a ‘higher purpose’, which is really a separate topic, but not an unrelated one. The author also references Viktor Frankl’s famous book, Man’s Search for Meaning (very early in the piece), which I’ve sometimes argued is the only book I’ve read that should be compulsory reading. The book is based on Frankl’s experience as a holocaust survivor, but ultimately led to a philosophy and a psychological method (for want of a better term) that he practiced as a psychologist.

I’ve also read another book of his, The Unconscious God, where he argues that there are 3 basic ways in which we find purpose or meaning in our lives: one, through a relationship; two, through a project; and three, through dealing with adversity. This last seems paradoxical, even oxymoronic, yet it is the premise of virtually every work of narrative fiction that all of us (who watch cinema or TV) imbibe with addictive enthusiasm. I’ve long argued that wisdom doesn’t come from achievements or education but from dealing with adversity in our lives, which is impossible to avoid no matter who you are. It makes one think of Socrates’ (attributed) famous aphorism: the unexamined life is not worth living. If we think about it, we only examine our lives when we fail, so a life without failure is not really much of a life. The corollary to this is that risk is essential to success and to gaining maturity in all things.

Humans are the most socially complex creatures on the planet – take language. I’ve recently read a book, Cosmosapiens: Human Evolution from the Origin of the Universe, by John Hands. It’s as ambitious as its title suggests, and it took him 10 years to complete: very erudite and comprehensive, it challenges scientific orthodoxies without being anti-science. But his book is not the topic of this post, so I won’t distract you further. One of his many salient points is that humans are unique, not least because of our ability for self-reflection. He contends that we are not the only species with the ability to ‘know’, but we are the only species who ‘know that we know’ (his words), or think about thinking (my words). The point is that cognitively we are distinct from every other species on the planet, because we can consider and cogitate on our origins, our mortality and our place in the overall scheme of things in ways that other species can’t possibly think about.

And language is the key attribute, because, without it, we can’t even think in the way that we all take for granted; yet it's derived from our social environment (we all had to be taught). I understand that children isolated from adults can develop their own language, but, even under these extremely rare circumstances, it requires social interaction to develop. This is a lengthy introduction to the fact that all of us require social interaction (virtually from birth) to have a meaningful life in any way, shape or form. We spend a large part of our lives interacting with others and, to a very large extent, the quality of that interaction determines the quality of our lives.

And this is a convoluted way of reaching the first of Frankl’s ‘ways of finding meaning’: through a relationship. For most of us this implies a conjugal relationship with all that entails. For many of us, in our youth, there is a tendency to put all our eggs in that particular basket. But with age, our perspective changes with lust playing a lesser role, whilst more resilient traits like friendship, reliance and trust become more important, even necessary, in long term relationships, upon which we build something meaningful for ourselves and others. For many people, I think children provide a purpose, not that I’ve ever had any, but it’s something I’ve observed.

I know from personal experience that having a project can provide purpose, and for many people, myself included, it can seem necessary. We live in a society (in the West, anyway) where our work often defines us and gives us an identity. I think this has historical roots: men, in particular, were defined by what they did, often following a family tradition. This idea of a hereditary role (for life) is not as prevalent as it once was, but I suspect it snuffed out the light of aspiration for many. A couple of weeks ago I saw David Stratton: A Cinematic Life, followed by a Q&A with the man himself. David, who is about a decade older than me, came to Australia and made a career as a film critic, becoming one of the most respected, not only in Australia but in the world. However, the cost was the bitter disappointment expressed by his father for not taking over the family grocery business back in England. Women, on the other hand, were not allowed the luxury of finding their own independent identity until relatively recently in Western societies. It’s the word ‘independent’ that was their particular stumbling block, because, even in my postwar childhood, women were not meant to be independent of a man.

The movie, Up in the Air, starring George Clooney, which I reviewed back in 2010, does a fair job of addressing this issue in the guise of cinematic entertainment. To illustrate my point, I’ll quote from my own post:

The movie opens with a montage of people being sacked (fired) with a voice-over of Clooney explaining his job. This cuts to the core of the movie for me: what do we live for? For many people their job defines them – it is their identity, in their own eyes and the eyes of their society. So cutting off someone’s job is like cutting off their life – it’s humiliating at the very least, suicidally depressing at worst and life-changing at best.

So purpose is something most of us pursue, either through relationships within our family or through our work or both. But many of you will be asking: is there a higher purpose? I can’t answer that, but I’ll provide my own philosophical slant on it.

Socrates (again), who was forced to take his own life (as a consequence of a democratic process, it should be noted), supposedly said, in addition to the well-worn trope quoted above: ‘Whether death is a door to another world or an endless sleep, we don’t know’. And I would add: we are not meant to know. I’m agnostic about an afterlife, but, to be honest, I’m not expecting one, and I’ve provided my views elsewhere. But there is a point worth making, which is that people who believe their next life is more important than the one they’re currently living often have a perverse, not to say destructive, view of mortality. One only has to look at suicide bombers who believe that their death is a ticket to Paradise.

Having said all that, it’s well known that people with religious beliefs can benefit psychologically, in that they often live healthy and fulfilling lives (as the New Scientist article referenced in the introduction attests). Personally, I think that when we reach the end of our lives, we will judge them not by our achievements and successes but by the lives we have touched. Purpose can best be found when we help others, whether it be through work or family or sport or just normal everyday interactions with strangers.

Sunday, 27 November 2016

Arrival; a masterclass in storytelling

Four movie reviews in one year; maybe I should change the title of my blog – no, just kidding. Someone (either Jake Wilson or Paul Byrnes from The Age) gave it the ultimate accolade: ‘At last, a science fiction movie with a brain.’ They also gave it 3.5 stars but ended their review with: ‘[the leads: Amy Adams, Forest Whitaker and Jeremy Renner] have the chops to keep us watching even when the narrative starts to wobble.’ So they probably wouldn’t agree with me calling it a masterclass.

It’s certainly not perfect – I’m not sure I’ve seen the perfect movie yet – but it’s clever on more than one level. I’m always drawn to good writing in a movie, which is something most people are not even aware of. It was based on a book, whose author escaped me as a couple in front of me got up to leave just as the name came up on the screen. But I have Google, so I can tell you that the screenplay was written by Eric Heisserer, and Ted Chiang wrote the novella, “Story of Your Life”, upon which it is based. French-Canadian director, Denis Villeneuve, has also made Prisoners and Sicario, neither of which I’ve seen, but Sicario is highly acclaimed.

It would be remiss of me not to mention the music and soundscape, which really add another dimension to this movie. I noticed that the beginning and end scores were by Max Richter, whom I admire in the contemporary classical music scene, though the overall score is credited to Jóhann Jóhannsson. Some of the music reminded me of Tibetan music, with its almost subterranean tones. Australia also gets a bit of 'coverage', if that's the right word, though not always in a flattering manner: Forest Whitaker's character reminds us how we all but committed genocide against the Aboriginal people.

I haven’t read the book, but I’m willing to give credit to both writers for producing a ‘science fiction story with a brain’. Science fiction has a number of subgenres: the human diaspora into interstellar space; time travel; alien worlds; parallel universes; artificial intelligence; dystopian and utopian fiction; and the list goes on, with various combinations. The title alone tells us that this is an alien encounter on Earth, but the movie keeps us guessing as to whether it’s an invasion, or just a curious interloper, or something else altogether.

I’ve written elsewhere that narrative tension is one of the essential writing skills, and this story has it on many levels. To give one example without giving the plot away: there is a sequence of events where we think we know what’s going to happen, with the suspense ramping up while we wait for it, and then something completely unexpected happens – something totally within the bounds of possibility, and therefore believable. In some respects this sums up the whole movie, because all through it we are led to believe one thing, only to learn we are witnessing something else. It’s called a reversal, which I’m not always a fan of, but this one is more than just a clever twist for the sake of being clever. Maybe that’s what the reviewer meant by ‘…when the narrative starts to wobble’. I don’t know. I have to confess I wasn’t completely sold, yet it was essential to the story and it works within the context of the story, so it’s part of the masterclass.

One of the things that struck me right from the beginning is that we see the movie almost in first person – though not totally, as at least one cutaway scene requires the absence of the protagonist. I would not be surprised if Ted Chiang wrote his short story in the first person. I don’t know what nationality Ted Chiang is, but I assume he is of Chinese extraction, and the Chinese are major players in this movie.

Communication is at the core of this film, both plot and subplot, and Amy Adams’ character (Louise Banks) makes the pertinent point, in a bit of expositional dialogue relevant both to the story and to what makes us human: that language, to a large extent, determines how we think, because, by the very nature of our brains, we are limited in what we can think by the language that we think in. That’s not what she said, but that was the lesson I took from it.

I’ve made the point before, though possibly not on this blog, that science fiction invariably has something to say about the era in which it was written and this movie is no exception. Basically, we see how paranoia can be a dangerous contagion, as if we need reminding. We are also reminded how wars and conflicts bring out the best and worst in humanity with the worst often being the predominant player.