Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

30 August 2025

Gödel and Wittgenstein: same goal, different approach

 The current issue of Philosophy Now (Issue 169, Aug/Sep 2025) has as its theme The Sources of Knowledge Issue, with a clever graphic on the cover depicting bottles of ‘sauces’ of 4 famous philosophers in this area: Thomas Kuhn, Karl Popper, Kurt Gödel and Edmund Gettier. The last one is possibly not as famous as the other 3, and I’m surprised they didn’t include Ludwig Wittgenstein, though there is at least one article featuring him inside.
 
I’ve already written a letter to the Editor about one article, Challenging the Objectivity of Science by Sina Mirzaye Shirkoohi, who is a ‘PhD Candidate at the Faculty of Administrative Sciences of the University Laval in Quebec City’; I may feature the letter in a future post if it gets published.
 
But this post is based on an article titled Gödel, Wittgenstein & the Limits of Knowledge by Michael D McGranahan, who has a ‘BS in Geology from San Diego State and an MS in Geophysics from Stanford, with 10 years [experience] in oil and gas exploration before making a career change’, without specifying what that career change is. ‘He is a lifelong student of science, philosophy and history.’ So, on the face of it, we may have a bit in common, because I’ve also worked in oil and gas, though in a non-technical role, and I have no qualifications in anything. I’ve also had a lifelong interest in science and, more recently, philosophy, but I’m unsure I would call myself a student, except of the autodidactic kind, and certainly not of history. I’m probably best described as a dilettante.
 
That’s a long run-up, but I like to give people their due credentials, especially when I have them at hand. McGranahan, in his own words, ‘wants to explore the convergence of Gödel and Wittgenstein on the limits of knowledge’, whereas I prefer to point out the distinctions. I should say up front that I’m hardly a scholar on Wittgenstein, though I feel I’m familiar enough with his seminal ideas regarding the role of language in epistemology. It should also be pointed out that Wittgenstein was one of the most influential philosophers of the 20th Century, especially in academia.
 
I will start with a quote cited by McGranahan: “The limits of my language mean the limits of my world.”
 
I once wrote a rather pretentious list titled, My philosophy in 24 dot points, where I paraphrase Wittgenstein: We think and conceptualise in a language. Axiomatically, this limits what we can conceive and think about. This is not exactly the same as the quote given above, and it has a subtly different emphasis. In effect, I think Wittgenstein has it back-to-front. Based solely on this statement, obviously out of context, I might be misrepresenting him, but I think it’s the limits of our knowledge of the world that determine the limits of our language, rather than the other way round.
 
As I pointed out in my last post, we are continually creating new language to assimilate new knowledge. So, when I say, ‘this limits what we can conceive and think about’, it’s obvious that different cultures living in different environments will develop concepts that aren’t necessarily compatible with each other and this will be reflected in their respective languages. It’s one of the reasons all languages adopt new words from other languages when people from different cultures interact.
 
Humans are unique in that we think in a language. In fact, it’s not too much of a stretch to analogise it with software, remembering that software is a concept that didn’t come into common parlance until after Wittgenstein died in 1951 (though Turing died in 1954).
 
To extend that metaphor, language becomes our ‘operating language’ for ‘thinking’, and note that it happens early in one’s childhood, well before we develop an ability to comprehend complex and abstract concepts. Just on that, arguably our exposure to stories is our first encounter with abstract concepts, if by abstract we mean entities that only exist in one’s mind.
 
I have a particular view that, as far as I know, is not shared by anyone else, which is that we have a unique ability to nest concepts within concepts ad infinitum, which allows us to create mental ‘black boxes’ in our thinking. To give an example, all the sentences I’m currently writing are made of distinct words, yet each sentence has a meaning that transcends the meaning of the individual words. Then, of course, the accumulation of sentences hopefully provides a cogent argument that you can follow. The same happens in a story, which is arguably even more amazing, given a novel (like Elvene) contains close to 100k words and will take up 8 hours of your life, probably spread over 2 or 3 days. So we maintain mental continuity despite breaks and interruptions.
 
Wittgenstein once made the same point (regarding words and sentences), so that specific example is not original. Where my view differs is that I contend it also reflects our understanding of the physical world, which comprises entities within entities that have different physical representations at different levels. The example I like to give is a human body made up of individual cells, which themselves contain strands of DNA that provide the code for the construction and functioning of an individual. From memory, Douglas Hofstadter made a similar point in Gödel, Escher, Bach, so maybe not an original idea after all.
 
Time to talk about Gödel. I’m not a logician, but I don’t believe you need to be to appreciate the far-reaching consequences of his groundbreaking theorem. In fact, as McGranahan points out, there are 2 theorems: Gödel’s First Incompleteness Theorem and his Second Incompleteness Theorem. And it’s best to quote McGranahan directly:
 
Gödel’s First Incompleteness Theorem proves mathematically that any consistent formal mathematical system within which a certain amount of elementary arithmetic can be carried out, is incomplete – meaning, there are one or more true statements that can be made in the language of the system which can neither be proved nor disproved in the system.
 
He then states the logical conclusion of this proof:
 
This finding leads to two alternatives:
Alternative #1: If a set of axioms is consistent, then it is incomplete.
Alternative #2: In a consistent system, not every statement can be proved in the language of that system.
 
Gödel’s Second Incompleteness Theorem is simply this: No set of axioms can prove its own consistency.

 
It’s Alternative #2 that goes to the nub of the theorem: there are and always will be mathematical ‘truths’ that can’t be proved ‘true’ using the axioms of that system. Gödel said himself that such truths (true statements) might be proved by expanding the system with new axioms. In other words, you may need to discover new mathematics to uncover new proofs, and this is what we’ve found in practice, and why some conjectures take so long to prove – like hundreds of years. The implication is that our search for mathematical truths is never-ending, making mathematics itself a never-ending endeavour.
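 
For readers who like things schematic, the two theorems are often written like this (my notation, not McGranahan’s), where F is any consistent formal system containing enough arithmetic, the symbol \nvdash means ‘does not prove’, and Con(F) is the arithmetised statement that F is consistent:

\begin{align*}
\text{(G1)}\quad &\exists\, G_F:\; F \nvdash G_F \;\text{and}\; F \nvdash \neg G_F &&\text{(some sentence of } F \text{ is neither provable nor refutable in } F\text{)}\\
\text{(G2)}\quad &F \nvdash \operatorname{Con}(F) &&\text{(no such system can prove its own consistency)}
\end{align*}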
 
As McGranahan succinctly puts it: So knowing something is true, and proving it, are two different things.
 
This has led Roger Penrose to argue that Gödel’s Theorems demonstrate a distinction between the human mind and a computer, because a human mind can intuit a ‘truth’ that a computer can’t prove with logic. In a sense, he’s right, which is why we have conjectures like the ones I mentioned in my last post relating to prime numbers – the twin prime conjecture, the Goldbach conjecture and Riemann’s famous hypothesis. However, they also demonstrate the relationship between Gödel’s Theorem and Turing’s famous Halting Problem, which Gregory Chaitin argues are really 2 manifestations of the same problem.
 
With each of those conjectures, you can create an algorithm that searches for a counterexample on a computer, but you can’t run the computer to infinity, so unless it ‘stops’, you don’t know whether the conjecture is true. The irony is that, for each conjecture, if the program stops, the conjecture is false; and if the conjecture is true, the program never stops, so it remains unknown. I covered this in another post, where I argued that there is a relationship between infinity and the unknowable. The obvious connection here, that no one remarks on, is that Gödel’s theorems only work because mathematics is infinite. If it were finite, it would be ‘complete’. I came to an understanding of Gödel’s Theorem through Turing’s Halting Problem, because it was easier to understand. A machine is unable to determine if a mathematical ‘truth’ is true or not through logic alone.
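 
To make that concrete, here’s a minimal sketch in Python (my illustration, not anything from McGranahan’s article) of such a search for the Goldbach conjecture. It halts only if a counterexample exists, so if the conjecture is true it runs forever:

def is_prime(n):
    """Trial division; slow but correct."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_counterexample():
    """Return the first even number >= 4 that is NOT a sum of two primes,
    if one exists; otherwise loop forever."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n  # counterexample found: the conjecture is false
        n += 2  # if the conjecture is true, this line executes forever

Asking whether this program ever halts is exactly the kind of question Turing proved no algorithm can answer in general, which is where the two results meet.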
 
According to McGranahan, Wittgenstein said that “Tautology and contradiction are without sense.” He then said, “Tautology and contradiction are not, however, nonsensical.” This implies that ‘without sense’ and ‘nonsensical’ have different meanings, “which illustrates the very language problem of which we speak” (McGranahan using Wittgenstein’s own language style to make his point). According to McGranahan, Wittgenstein then concluded: “that mathematics (if tautology and contradiction will be allowed to stand for mathematics), is nonsense.” (Parentheses in the original)
 
According to McGranahan, “…because in his logic, mathematical formulae are not bipolar (true or false) and hence cannot form pictures and elements and objects [which is how Wittgenstein defines language], and thus cannot describe actual states of affairs, and therefore, cannot describe the world.”
 
I feel that McGranahan doesn’t really resolve this, except to say: “There would seem to be a conflict… Who is right?” I actually think that if anyone is wrong, it’s Wittgenstein, though I admit a personal prejudice, in as much as I don’t think language defines the world.
 
On the other hand, everything we’ve learned about the world since the scientific revolution has come to us through mathematics, not language, and that was just as true in Wittgenstein’s time as it is now; after all, he lived through the 2 great scientific revolutions of quantum mechanics and relativity theory, both dependent on mathematics only discovered after Newton’s revolution.
 
The limits of our knowledge of the physical world are determined by the limits of our knowledge of mathematics (known as physics). And our language, while it ‘axiomatically limits what we can conceive and think about’, can also be (and continually is) expanded to adopt new concepts.

18 August 2025

Reality, metaphysics, infinity

 This post arose from 3 articles I read in as many days: 2 on the same specific topic, and 1 on an apparently unrelated topic. I’ll start with the last one.
 
I’m a regular reader of Raymond Tallis’s column in Philosophy Now, called Tallis in Wonderland, and I even had correspondence with him on one occasion, where he was very generous and friendly, despite disagreements. In the latest issue of Philosophy Now (No 169, Aug/Sep 2025), the title of his 2-page essay is Pharmaco-Metaphysics? Beneath it, it’s stated that he ‘argues against acidic assertions, and doubts DMT assertions.’ Regarding the last point, it should be pointed out that Tallis’s background is in neuroscience.
 
By way of introduction, he points out that he’s never had firsthand experience of psychedelic drugs, but admits to his drug-of-choice being Pinot Grigio. He references a quote by William Blake in The Marriage of Heaven and Hell: “If the doors of perception were cleansed, then everything would appear to man as it is, Infinite.” I include this reference, albeit out-of-context, because it has an indirect connection to the other topic I alluded to earlier.
 
Just on the subject of drugs creating alternate realities, which Tallis goes into in more detail than I want to discuss here, he makes the point that the participant knows that there is a reality from which they’ve become adrift; as if they’re in a boat that has slipped its moorings, which has neither a rudder nor oars (my analogy, not Tallis’s). I immediately thought that this is exactly what happens when I dream, which is literally every night, and usually multiple times.
 
Tallis is very good at skewering arguments by extremely bright people by making a direct reference to an ordinary everyday activity that they, and the rest of us, would partake in. I will illustrate with examples, starting with the psychedelic ‘trip’ apparently creating a reality that is more ‘real’ than the one inhabited without the drug.
 
The trip takes place in an unchanged reality. Moreover, the drug has been synthesised, tested, quality-controlled, packaged, and transported in that world, and the facts about its properties have been discovered and broadcast by individuals in the grip of everyday life. It is ordinary people usually in ordinary states of mind in the ordinary world who experiment with the psychedelics that target 5HT2A receptors.
 

He’s pointing out an inherent inconsistency, if not outright contradiction (contradictoriness is the term he uses): that the production and delivery of the drug takes place in a world that the recipient’s mind wants to escape from.
 
And the point relevant to the topic of this essay: It does not seem justified, therefore, to blithely regard mind-altering drugs as opening metaphysical peepholes on to fundamental reality; as heuristic devices enabling us to discover the true nature of the world. (my emphasis)
 
To give another example of philosophical contradictoriness (I’m starting to like this term), he references Berkeley:
 
Think, for instance of those who, holding a seemingly solid copy of A Treatise Concerning the Principles of Human Knowledge (1710), accept George Berkeley’s claim [made in the book] that entities exist only insofar as they are perceived. They nevertheless expect the book to be still there when they enter a room where it is stored.
 
This, of course, is similar to Donald Hoffman’s thesis, but that’s too much of a detour.
 
My favourite example that he gives is based on a problem that I’ve had with Kant ever since I first encountered him.
 
[To hold] Immanuel Kant’s view that ‘material objects’ located in space and time in the way we perceive them to be, are in fact constructs of the mind – then travel by train to give a lecture on this topic at an agreed place and time. Or yet others who (to take a well-worn example) deny the reality of time, but are still confident that they had their breakfast before their lunch.
 
He then makes a point I’ve made myself, albeit in a different context.
 
More importantly, could you co-habit in the transformed reality with those to whom you are closest – those whom you accept without question as central to your everyday life, and who return the compliment of taking you for granted?

 
To me, all these examples differentiate a dreaming state from our real-life state, and his last point is the criterion I’ve always given that determines the difference. Even though we often meet people in our dreams with whom we have close relationships, those encounters are never shared.
 
Tallis makes a similar point:
 
Radically revisionary views, if they are to be embraced sincerely, have to be shared with others in something that goes deeper than a report from (someone else’s) experience or a philosophical text.

 
This is why I claim that God can only ever be a subjective experience that can’t be shared, because it too fits into this category.
 
I recently got involved in a discussion on Facebook in a philosophical group, about Wittgenstein’s claim that language determines the limits of what we can know, which I argue is back-to-front. We are forever creating new language for new experiences and discoveries, which is why experts develop their own lexicons, not because they want to isolate other people (though some may), but because they deal with subject-matter the rest of us don’t encounter.
 
I still haven’t mentioned the other 2 articles I read – one in New Scientist and one in Scientific American – and they both deal with infinity. Specifically, they deal with a ‘movement’ (for want of a better term) within the mathematical community to effectively get rid of infinity. I’ve discussed this before with specific reference to UNSW mathematician, Norman Wildberger. Wildberger recently gained attention by making an important breakthrough (jointly with Dean Rubine using Catalan numbers). However, for reasons given below, I have issues with his position on infinity.
 
The thing is that infinity doesn’t exist in the physical world, or if it does, it’s impossible for us to observe, virtually by definition. However, in mathematics, I’d contend that it’s impossible to avoid. Primes are called the atoms of arithmetic, and Euclid (c. 325–265 BC) proved that there are an infinite number of primes. Yet there are 3 outstanding conjectures involving primes: the Goldbach conjecture; the twin prime conjecture; and the Riemann Hypothesis (which is the most famous unsolved problem in mathematics at the time of writing). And they all involve infinities. If infinities are no longer ‘allowed’, does that mean that all these conjectures are ‘solved’, or does it mean they will ‘never be solved’?
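 
Euclid’s argument, in modern notation (a sketch of the standard proof, not his original wording): suppose p_1, …, p_n were all the primes there are, and consider

N = p_1 p_2 \cdots p_n + 1

Each p_i leaves remainder 1 when divided into N, so no p_i divides N; hence any prime factor of N is a prime missing from the list, contradicting the assumption that the list was complete.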
                                                                                                                    
One of the contentions raised (including by Wildberger) is that infinity has no place in computations – specifically, computations by computers. Wildberger effectively argues that mathematics that can’t be computed is not mathematics (which rules out a lot of mathematics). On the other hand, you have Gregory Chaitin, who points out that there are infinitely more incomputable Real numbers than computable Real numbers. I would have thought that this had been settled, since Cantor discovered that you can have countable infinities and uncountable infinities; the latter being infinitely larger than the former.
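 
Chaitin’s point can be put in Cantor’s notation (my summary of the standard argument, not Chaitin’s wording): there are only countably many computer programs, so only countably many computable Real numbers, while the Reals themselves are uncountable:

|\{\text{computable reals}\}| \;\le\; |\mathbb{N}| \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}|

So almost all Real numbers are incomputable, which is the sense in which I’d say Cantor settled it.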
 
Just today I watched a video by Curt Jaimungal interviewing Chiara Marletto on ‘Constructor Theory’, which, to my limited understanding based on this extract from a larger conversation, seems to be premised on the idea that everything in the Universe can be understood if it’s run on a quantum computer. As far as I can tell, she’s not saying it is a computer simulation, but she seems to emulate Stephen Wolfram’s philosophical position that it’s ‘computation all the way down’. Both of these people know a great deal more than me, but I wonder how they deal with chaos theory, which seems to drive the entire universe at multiple levels and can’t be computed because of its sensitivity to infinitesimal differences in initial conditions. It’s why the weather can’t be forecast accurately beyond 10 days (because it can’t be calculated, no matter how complex the computer modelling) and why every coin-toss is independent of its predecessor (unless you rig it).
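 
A toy demonstration of that sensitivity (my example; nothing to do with Marletto or Wolfram specifically): iterate the logistic map, a textbook chaotic system, from two starting values that differ by one part in a billion, and the trajectories decorrelate completely within about 40 steps:

def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x); r=4 is fully chaotic."""
    return r * x * (1 - x)

x, y = 0.3, 0.3 + 1e-9  # initial conditions differing by one part in a billion
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.1e}")

The gap roughly doubles every step, so predicting step 100 would require knowing the initial condition to about 30 decimal places; no finite-precision computer can outrun that.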
 
Note the use of the word, ‘infinitesimal’. I argue that chaos theory is the one phenomenon where infinity meets the real world. I agree with John Polkinghorne that it allows the perfect mechanism for God to intervene in the physical world, even though I don’t believe in an interventionist God (refer Marcus du Sautoy, What We Cannot Know).
 
I think the desire to get rid of infinity is rooted in an unstated philosophical position that the only things that can exist are the things we can know. This doesn’t mean that we currently know everything – I don’t think any mathematician or physicist believes that – but that everything is potentially knowable. I have long disagreed. And this is arguably the distinction between physics and metaphysics. I will take the definition attributed to Plato: ‘That which holds that what exists lies beyond experience.’ In modern science, if not modern philosophy, there is a tendency to discount metaphysics, because, by definition, it exists beyond what we experience in the real world. You can see an allusion here to my earlier discussion on Tallis’s essay, where he juxtaposes reality as we experience it with psychedelic experiences that purportedly provide a window into an alternate reality, where ‘everything would appear to man as it is, Infinite’. Where infinity represents everything we can’t know in the world we inhabit.
 
The thing is that I see mathematics as the only evidence of metaphysics; the only connection our minds have between a metaphysical world that transcends the Universe, and the physical universe we inhabit and share with innumerable other sentient creatures, albeit on a grain of sand on an endless beach, the horizon of which we’re yet to discern.
 
So I see this transcendental, metaphysical world of endless possible dimensions as the perfect home for infinity. And without mathematics, we would have no evidence, let alone a proof, that infinity even exists.

29 May 2025

The role of the arts. Why did it evolve? Will AI kill it?

 As I mentioned in an earlier post this month, I’m currently reading Brian Greene’s book, Until the End of Time: Mind, Matter, and Our Search for Meaning in an Evolving Universe, which covers just about everything from cosmology to evolution to consciousness, free will, mythology, religion and creativity. He spends a considerable amount of time on storytelling, compared to other art forms, partly because it allows an easy segue from language to mythology to religion.
 
One of his points of extended discussion was in trying to answer the question: why did our propensity for the arts evolve, when it has no obvious survival value? He cites people like Steven Pinker, Brian Boyd (whom I discuss at length in another post) and even Darwin, among others. I won’t elaborate on these, partly due to space, and partly because I want to put forward my own perspective, as someone who actually indulges in an artistic activity, and who could see clearly how I inherited artistic genes from one side of my family (my mother’s side). No one showed the slightest inclination towards artistic endeavour on my father’s side (including my sister). But they all excelled in sport (including my sister), and I was rubbish at sport. One can see how sporting prowess could be a side-benefit of physical survival skills like hunting, and of success in combat, which humans have had a propensity for going back to antiquity.
 
Yet our artistic skills are evident going back at least 30-40,000 years, in the form of cave-art, and one can imagine that other art forms like music and storytelling have been active for a similar period. My own view is that it’s sexual selection, which Greene discusses at length, citing Darwin among others, as well as detractors, like Pinker. The thing is that other species also show sexual selection, especially among birds, which I’ve discussed before a couple of times. The best known example is the peacock’s tail, but I suspect that birdsong also plays a role, not to mention the bowerbird and the lyrebird. The lyrebird is an interesting one, because the males too have an extravagant tail, which surely would be a hindrance to survival, and they perform a dance and are extraordinary mimics. And the only reason one can think that this might have evolutionary value at all is that the sole purpose of those specific attributes is to attract a mate.
 
And one can see how this is analogous to behaviour in humans, where it is the male who tends to attract females with his talents, in music in particular. As Greene points out, along with others, artistic attributes are a by-product of our formidable brains, but I think these talents would be useless if we hadn’t evolved, in tandem, a particular liking for the product of these endeavours (also discussed by Greene), which we see even in the modern world. I’m talking about the fact that music and stories both seem to be essential sources of entertainment, evident in the success of streaming services, not to mention a rich history in literature, theatre, ballet and, more recently, cinema.
 
I’ve written before that there are 2 distinct forms of cognitive ability: creative and analytical; and there is neurological evidence to support this. The point is that having an analytical brain is just as important as having a creative one, otherwise scientific theories and engineering feats, for which humans seem uniquely equipped, would never have happened, even going back to ancient artefacts like Stonehenge and both the Egyptian and Mayan pyramids. Note that these all happened on different continents.
 
But there are times when the analytical and creative seem to have a synergistic effect, and this is particularly evident when it comes to scientific breakthroughs – a point, unsurprisingly, not lost on Greene, who cites Einstein’s groundbreaking discoveries in relativity theory as a case-in-point.
 
One point that Greene doesn’t make is that there has been a cultural evolution that has effectively overtaken biological evolution in humans, and only in humans I would suggest. And this has been a direct consequence of our formidable brains and everything that goes along with that, but especially language.
 
I’ve made the point before that our special skill – our superpower, if you will – is the ability to nest concepts within concepts, which we do with everything, not just language, but it would have started with language, one would think. And this is significant because we all think in a language, including the ability to manipulate abstract concepts in our minds that don’t even exist in the real world. And nowhere is this more apparent than in the art of storytelling, where we create worlds that only exist in the imagination of someone’s mind.
 
But this cultural evolution has created civilisations and all that they entail, and survival of the fittest has nothing to do with eking out an existence in some hostile wilderness environment. These days, virtually everyone who is reading this has no idea where their food comes from. However, success is measured by different parameters than the ability to produce food, even though food production is essential. These days success is measured by one’s ability to earn money, and activities that require brain-power have a higher status and higher reward than so-called low-skilled jobs. In fact, in Australia, there is a shortage of trades because, for the last 2 generations at least, the emphasis, vocationally, has been on getting kids into university courses, when it’s not necessarily the best fit for the child. This is why the professional class (including myself) are often called ‘elitist’ in the culture wars, and being a tradie is sometimes seen as a stigma, even though our society is just as dependent on them as they are on professionals. I know, because I’ve spent a working lifetime in a specific environment where you need both: engineering/construction.
 
Like all my posts, I’ve gone off-track but it’s all relevant. Like Greene, I can’t be sure how or why evolution in humans was propelled, if not hi-jacked, by art, but art in all its forms is part of the human condition. A life without music, stories and visual art – often in combination – is unimaginable.
 
And this brings me to the last question in my heading. It so happens that while I was reading about this in Greene’s thought-provoking book, I was also listening to a weekly programme on ABC Classic (an Australian radio station) called Legends, where the presenter, Mairi Nicolson, talks about a legend in the classical music world for an hour, providing details about their life as well as broadcasting examples of their work. In this case, she had the legend in the studio (a rare occurrence), who was Anna Goldsworthy. To quote from Wikipedia: Anna Louise Goldsworthy is an Australian classical pianist, writer, academic, playwright, and librettist, known for her 2009 memoir Piano Lessons.

But the reason I bring this up is because Anna mentioned that she attended a panel discussion on the role of AI in the arts. Anna’s own position is that she sees a role for AI, but in doing the things that humans find boring, which is what we are already seeing in manufacturing. In fact, I’ve witnessed this first-hand. Someone on the panel made the point that AI would effectively democratise art (my term, based on what I gleaned from Anna’s recall) in the sense that anyone would be able to produce a work of art and it would cease to be seen as elitist as it is now. He obviously saw this as a good thing, but I suspect many in the audience, including Anna, would have been somewhat unimpressed if not alarmed. Apparently, someone on the panel challenged that perspective but Anna seemed to think the discussion had somehow veered into a particularly dissonant aberration of the culture wars.
 
I’m one of those who would be alarmed by such a development, because it’s the ultimate portrayal of art as a consumer product, similar to the way we now perceive food. And like food, it would mean that its consumption would be completely disconnected from its production.
 
What worries me is that the person on the panel making this announcement (remember, I’m reporting this second-hand) apparently had no appreciation of the creative process and its importance in a functioning human society going back tens of thousands of years.
 
I like to quote from one of the world’s most successful and best known artists, Paul McCartney, in a talk he gave to schoolchildren (I don’t know where):
 
“I don't know how to do this. You would think I do, but it's not one of these things you ever know how to do.” (my emphasis)
 
And that’s the thing: creative people can’t explain the creative process to people who have never experienced it. It feels like we have made contact with some ethereal realm. In another post, I cite Douglas Hofstadter (from his famous Pulitzer-prize winning tome, Gödel, Escher, Bach: An Eternal Golden Braid) quoting Escher:
 
"While drawing I sometimes feel as if I were a spiritualist medium, controlled by the creatures I am conjuring up."

 
Many people writing a story can identify with this, including myself. But one suspects that this also happens to people exploring the abstract world of mathematics. Humans have developed a sense that there is more to the world than what we see and feel and touch, which we attempt to reveal in all art forms, and this, in turn, has led to religion. Of course, Greene spends another entire chapter on that subject, and he also recognises the connection between mind, art and the seeking of meaning beyond a mortal existence.

29 April 2025

Writing and philosophy

 I’ve been watching a lot of YouTube videos of Alan Moore, who’s probably best known for his graphic novels, Watchmen and V for Vendetta, both of which were turned into movies. He also wrote a Batman graphic novel, The Killing Joke, which was turned into an R rated animated movie (due to Batman having sex with Batgirl) with Mark Hamill voicing the Joker. I’m unsure if it has any fidelity to Moore’s work, which was critically acclaimed, whereas the movie received mixed reviews. I haven’t read the graphic novel, so I can’t comment.
 
On the other hand, I read Watchmen and saw the movie, which I reviewed on this blog, and thought they were both very good. I also saw V for Vendetta, starring Natalie Portman and Hugo Weaving, without having read Moore’s original. Moore also wrote a novel, Jerusalem, which I haven’t read, but is referenced frequently by Robin Ince in a video I cite below.
 
All that aside, it’s hard to know where to start with Alan Moore’s philosophy on writing, but the 8 Alan Moore quotes video is as good a place as any if you want a quick overview. For a more elaborate dialogue, there is a 3-way interview, obviously done over a video link, between Moore and Brian Catling, hosted by Robin Ince, on the YouTube channel How to Academy. They start off talking about imagination, but get into philosophy when all 3 of them start questioning what reality is, or if there is an objective reality at all.
 
My views on this are well known, and it’s a side-issue in the context of writing or creating imaginary worlds. Nevertheless, had I been party to the discussion, I would have simply mentioned Kant, and how he distinguishes between the ‘thing-in-itself’ and our perception of it. Implicit in that concept is the belief that there is a reality independent of our internal model of it, which is mostly created by a visual representation, but other senses, like hearing, touch and smell, also play a role. This is actually important when one gets into a discussion on fiction, but I don’t want to get ahead of myself. I just wish to make the point that we know there is an external objective reality because it can kill you. Note that a dream can’t kill you, which is a fundamental distinction between reality and a dreamscape. I make this point because I think a story, which takes place in your imagination, is like a dreamscape; so that difference carries over into fiction.
 
And on the subject of life-and-death, Moore references something he’d read on how evolution selects for ‘survivability’ not ‘truth’, though he couldn’t remember the source or the authors. However, I can, because I wrote about that too. He’s obviously referring to the joint paper written by Donald Hoffman and Chetan Prakash called Objects of Consciousness (Frontiers in Psychology, 2014). This depends on what one means by ‘truth’. If you’re talking about mathematical truths then yes, it has little to do with survivability (our modern-day dependence on technical infrastructure notwithstanding). On the other hand, if you’re talking about the accuracy of the internal model in your mind matching the objective reality external to your body, then your survivability is very much dependent on it.
 
Speaking of mathematics, Ince mentions Bertrand Russell giving up on mathematics and embracing philosophy because he failed to find a foundation that ensured its truth (my wording, interpreting his interpretation). Basically, that’s correct, but it was Gödel who put the spanner in the works with his famous Incompleteness Theorem, which effectively tells us that there will always exist mathematical truths that can’t be proven true. In other words, he concretely demonstrated (proved, in fact) that there is a distinction between truth and proof in mathematics. Proofs rely on axioms, and all axioms have limitations in what they can prove, so you need to keep finding new axioms, and this implies that mathematics is a never-ending endeavour. So it’s not the end of mathematics as we know it, but the exact opposite.
 
All of this has nothing to do with writing per se, but since they raised these issues, I felt compelled to deal with them.
 
At the core of this part of their discussion is the unstated tenet that fiction and non-fiction are distinct, even if the boundary sometimes becomes blurred. A lot of fiction, if not all, contains factual elements. I like to cite Ian Fleming’s James Bond novels containing details like the gun Bond used (a Walther PPK) and the Bentley he drove, which had an Amherst Villiers supercharger. Bizarrely, I remember these trivial facts from a teenage obsession with all things Bond.
 
And this allows me to segue into something that Moore says towards the end of this 3-way discussion, when he talks specifically about fantasy. He says it needs to be rooted in some form of reality (my words), otherwise the reader won’t be able to imagine it at all. I’ve made this point myself, and give the example of my own novel, Elvene, which contains numerous fantasy elements, including both creatures that don’t exist on our world and technology that’s yet to be invented, if ever.
 
I’ve written about imagination before, because I argue it’s essential to free will, which is not limited to humans, though others may disagree. Imagination is a form of time travel, into the past, but more significantly, into the future. Episodic memories and imagination use the same part of the brain (so we are told); but only humans seem to have the facility to time travel into realms that don’t exist anywhere else other than the imagination. And this is why storytelling is a uniquely human activity.
 
I mentioned earlier how we create an internal world that’s effectively a simulation of the external world we interact with. In fact, my entire philosophy is rooted in the idea that each of us has an internal and an external world, which is how I can separate religion from science, because one is completely internal and the other is an epistemology of the physical universe from the cosmic scale to the infinitesimal. Mathematics is a medium that bridges them, and contributes to the Kantian notion that our perception may never completely match the objective reality. Mathematics provides models that increase our understanding while never quite completing it. Gödel’s incompleteness theorem (referenced earlier) effectively limits physics as well. Totally off-topic, but philosophically important.
 
Its relevance to storytelling is that it’s a visual medium even when there are no visuals presented, which is why I contend that if we didn’t dream, stories wouldn’t work. In response to a question, Moore pointed out that, because he worked on graphic novels, he had to think about the story visually. I’ve made the point before that the best thing I ever did for my own writing was to take some screenwriting courses, because one is forced to think visually and imagine the story being projected onto a screen. In a screenplay, you can only write down what is seen and heard. In other words, you can’t write what a character is thinking. On the other hand, you can write an entire novel from inside a character’s head, and usually more than one. But if you tell a story from a character’s POV (point-of-view) you axiomatically feel what they’re feeling and see what they’re witnessing. This is the whole secret to novel-writing. It’s intrinsically visual, because we automatically create images even if the writer doesn’t provide them. So my method is to provide cues, knowing that the reader will fill in the blanks. No one specifically mentions this in the video, so it’s my contribution.
 
Something else that Moore, Catling and Ince discuss is how writing something down effectively changes the way they think. This is something I can identify with, both in fiction and non-fiction, but fiction specifically. It’s hard to explain this if you haven’t experienced it, but they spend a lot of time on it, so it’s obviously significant to them. In fiction, there needs to be a spontaneity – I’ve often compared it to playing jazz, even though I’m not a musician. So most of the time, you don’t know what you’re going to write until it appears on the screen or on paper, depending which medium you’re using. Moore says it’s like it’s in your hands instead of your head, which is certainly not true. But the act of writing, as opposed to speaking, is a different process, at least for Moore, and also for me.
 
I remember many years ago (decades) when I told someone (a dentist, actually) that I was writing a book. He said he assumed that novelists must dictate it, because he couldn’t imagine someone writing down thousands upon thousands of words. At the time, I thought his suggestion just as weird as he thought mine to be. I suspect some writers do. Philip Adams (Australian broadcaster and columnist) once confessed that he dictated everything he wrote. In my professional life, I have written reports for lawyers in contractual disputes, both in Australia and the US, for which I’ve received the odd kudos. In one instance, someone I was working with was using a cassette-like dictaphone and insisted I do the same, believing it would save time. So I did, in spite of my better judgement, and it was just terrible. Based on that one example, you’d be forgiven for thinking that I had no talent or expertise in that role. Of course, I re-wrote the whole thing, and was never asked to do it again.
 
I originally became interested in Moore’s YouTube videos because he talked about how writing affects you as a person and can also affect the world. I think to be a good writer of fiction you need to know yourself very well, and I suspect that is what he meant without actually saying it. The paradox with this is that you are always creating characters who are not you. I’ve said many times that the best fiction you write is where you’re completely detached – in a Zen state – sometimes called ‘flow’. Virtuoso musicians and top sportspersons will often make the same admission.
 
I believe having an existential philosophical approach to life is an important aspect to my writing, because it requires an authenticity that’s hard to explain. To be true to your characters you need to leave yourself out of it. Virtually all writers, including Moore, talk about treating their characters like real people, and you need to extend that to your villains if you want them to be realistic and believable, not stereotypes. Moore talks about giving multiple dimensions to his characters, which I won’t go into. Not because I don’t agree, but because I don’t over-analyse it. Characters just come to me and reveal themselves as the story unfolds; the same as they do for the reader.
 
What I’ve learned from writing fiction (which I’d self-describe as sci-fi/fantasy) – as opposed to what I didn’t know – is that, at the end of the day (or story), it’s all about relationships. Not just intimate relationships, but relationships between family members, between colleagues, between protagonists and AI, and between protagonists and antagonists. This is the fundamental grist for all stories.
 
Philosophy is arguably more closely related to writing than any other artform: there is a crossover and interdependency, because fiction deals with issues relevant to living and being.

28 October 2024

Do we make reality?

 I’ve read 2 articles, one in New Scientist (12 Oct 2024) and one in Philosophy Now (Issue 164, Oct/Nov 2024), which, on the surface, seem unrelated, yet both deal with human exceptionalism (my term) in the context of evolution and the cosmos at large.
 
Starting with New Scientist, there is an interview with theoretical physicist, Daniele Oriti, under the heading, “We have to embrace the fact that we make reality” (quotation marks in the original). In some respects, this continues on with themes I raised in my last post, but with different emphases.
 
This helps to explain the title of the post, but, even if it’s true, there are degrees of possibilities – it’s not all or nothing. Having said that, Donald Hoffman would argue that it is all or nothing, because, according to him, even ‘space and time don’t exist unperceived’. On the other hand, Oriti’s argument is closer to Paul Davies’ ‘participatory universe’ that I referenced in my last post.
 
Where Oriti and I possibly depart, philosophically speaking, is that he calls the idea of a reality independent of us ‘observers’ “naïve realism”. He acknowledges that this is ‘provocative’, but like many provocative ideas it provides food-for-thought. Firstly, I will delineate how his position differs from Hoffman’s; he never mentions Hoffman, but I think the distinction is important.
 
Both Oriti and Hoffman argue that there seems to be something even more fundamental than space and time, and there is even a recent YouTube video where Hoffman claims that he’s shown mathematically that consciousness produces the mathematical components that give rise to spacetime; he has published a paper on this (which I haven’t read). But, in both cases (by Hoffman and Oriti), the something ‘more fundamental’ is mathematical, and one needs to be careful about reifying mathematical expressions, which I once discussed with physicist, Mark John Fernee (Qld University).
 
The main issue I have with Hoffman’s approach is that space-time is dependent on conscious agents creating it, whereas, from my perspective and that of most scientists (although I’m not a scientist), space and time exist external to the mind. There is an exception, of course, and that is when we dream.
 
If I was to meet Hoffman, I would ask him if he’s heard of proprioception, which I’m sure he has. I describe it as the 6th sense we are mostly unaware of, but which we couldn’t live without. Actually, we could, but with great difficulty. Proprioception is the sense that tells us where our body extremities are in space, independently of sight and touch. Why would we need it, if space is created by us? On the other hand, Hoffman talks about a ‘H sapiens interface’, which he likens to ‘desktop icons on a computer screen’. So, somehow our proprioception relates to a ‘spacetime interface’ (his term) that doesn’t exist outside the mind.
 
A detour, but relevant, because space is something we inhabit, along with the rest of the Universe, and so is time. In relativity theory there is absolute space-time, as opposed to absolute space and time separately. It’s called the fabric of the universe, which is more than a metaphor. As Viktor Toth points out, even QFT seems to work ‘just fine’ with spacetime as its background.
 
We can do quantum field theory just fine on the curved spacetime background of general relativity.

 
[However] what we have so far been unable to do in a convincing manner is turn gravity itself into a quantum field theory.
 
And this is where Oriti argues we need to find something deeper. To quote:
 
Modern approaches to quantum gravity say that space-time emerges from something deeper – and this could offer a new foundation for physical laws.
 
He elaborates: I work with quantum gravity models in which you don’t start with a space-time geometry, but from more abstract “atomic” objects described in purely mathematical language. (Quotation marks in the original.)
 
And this is the nub of the argument: all our theories are mathematical models and none of them are complete, inasmuch as they all have limitations. If one looks at the history of physics, we have uncovered new ‘laws’ and new ‘models’ when we’ve looked beyond the limitations of an existing theory. And some mathematical models even turned out to be incorrect, despite giving answers to what was ‘known’ at the time. The best example is Ptolemy’s Earth-centric model of the solar system. Whether string theory falls into the same category, only future historians will know.
 
In addition, different models work at different scales. As someone pointed out (Mile Gu at the University of Queensland), mathematical models of phenomena at one scale are different to mathematical models at an underlying scale. He gave the example of magnetism, demonstrating that mathematical modelling of the magnetic forces in iron could not predict the pattern of atoms in a 3D lattice as one might expect. In other words, there should be a causal link between individual atoms and the overall effect, but it could not be determined mathematically. To quote Gu: “We were able to find a number of properties that were simply decoupled from the fundamental interactions.” Furthermore, “This result shows that some of the models scientists use to simulate physical systems have properties that cannot be linked to the behaviour of their parts.”
 
This makes me sceptical that we will find an overriding mathematical model that will describe the Universe at all scales, which is what theories of quantum gravity attempt to do. One of the issues that some people raise is that a feature of QM is superposition, and the superposition of a gravitational field seems inherently problematic.
 
Personally, I think superposition only makes sense if it’s describing something that is yet to happen, which is why I agree with Freeman Dyson that QM can only describe the future, which is why it only gives us probabilities.
 
Also, in quantum cosmology, time disappears (according to Paul Davies, among others) and this makes sense (to me), if it’s attempting to describe the entire universe into the future. John Barrow once made a similar point, albeit more eruditely.
 
Getting off track, but one of the points that Oriti makes is whether the laws and the mathematics that describes them are epistemic or ontic. In other words, are they reality or just descriptions of reality? I think it gets blurred, because while they are epistemic by design, there is still an ontology that exists without them, whereas Oriti calls that ‘naïve realism’. He contends that reality doesn’t exist independently of us. This is where I always cite Kant: that we may never know the ‘thing-in-itself’, but only our perception of it. Where I diverge from Kant is that the mathematical models are part of our perception. Where I depart from Oriti is that I argue there is a reality independent of us.
 
Both QM and relativity theory are observer-dependent, which means they could both be describing an underlying reality that continually eludes us. Whereas Oriti argues that ‘reality is made by our models, not just described by them’, which would make it subjective.
 
As I pointed out in my last post, there is an epistemological loop, whereby the Universe created the means to understand itself, through us. Whether there is also an ontological loop as both Davies and Oriti infer, is another matter: do we determine reality through our quantum mechanical observations? I will park that while I elaborate on the epistemic loop.
 
And this finally brings me to the article in Philosophy Now by James Miles titled, We’re as Smart as the Universe gets. He argues that, from an evolutionary perspective, there is a one-in-one-billion possibility that a species with our cognitive abilities could arise by natural selection, and there is no logical reason why we would evolve further, from an evolutionary standpoint. I have touched on this before, where I pointed out that our cultural evolution has overtaken our biological evolution and that would also happen to any other potential species in the Universe who developed cognitive abilities to the same level. Dawkins coined the term, ‘meme’, to describe cultural traits that have ‘survived’, which now, of course, has currency on social media way beyond its original intention. Basically, Dawkins saw memes as analogous to genes, which get selected; not by a natural process but by a cultural process.
 
I’ve argued elsewhere that mathematical theorems and scientific theories are not inherently memetic. This is because they are chosen because they are successful, whereas memes are successful because they are chosen. Nevertheless, such theorems and theories only exist because a culture has developed over millennia which explores them and builds on them.
 
Miles talks about ‘the high intelligence paradox’, which he associates with Darwin’s ‘highest and most interesting problem’. He then discusses the inherent selection advantage of co-operation, not to mention specialisation. He talks about the role that language has played, which is arguably what really separates us from other species. I’ve argued that it’s our inherent ability to nest concepts within concepts ad infinitum (which is most obvious in our facility for language, like I’m doing now) that allows us not only to tell stories, compose symphonies and explore an abstract mathematical landscape, but also to build motor cars and aeroplanes and fly men to the moon. Are we the only species in the Universe with this super-power? I don’t know, but it’s possible.
 
There are 2 quotes I keep returning to:
 
The most incomprehensible thing about the Universe is that it’s comprehensible. (Einstein)
 
The Universe gave rise to consciousness and consciousness gives meaning to the Universe.
(Wheeler)
 
I haven’t elaborated, but Miles makes the point, while referencing historical antecedents, that there appears no evolutionary ‘reason’ that a species should make this ‘one-in-one-billion transition’ (his nomenclature). Yet, without this transition, the Universe would have no meaning that could be comprehended. As I say, that’s the epistemic loop.
 
As for an ontic loop, that is harder to argue. Photons exist in zero time, which is why I contend they are always in the future of whatever they interact with, even if they were generated in the CMBR some 13.8 billion years ago. So how do we resolve that paradox? I don’t know, but maybe that’s the link that Davies and Oriti are talking about, though neither of them mention it. But here’s the thing: when you do detect such a photon (for which time is zero) you instantaneously ‘see’ back to 380,000 years after the Universe’s birth.
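 
The special-relativistic basis for that claim (a standard textbook formula, not something Davies or Oriti cite) is that the proper time elapsed along a moving path shrinks with speed and vanishes for light:

\Delta\tau \;=\; \Delta t\,\sqrt{1 - \frac{v^2}{c^2}} \;\longrightarrow\; 0 \quad \text{as } v \to c

So from the photon’s ‘perspective’, emission at the surface of last scattering and absorption in our detector are the same instant, even though 13.8 billion years separate them for us.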

02 June 2024

Radical ideas

 It’s hard to think of anyone I admire in physics and philosophy who doesn’t have at least one radical idea. Even Richard Feynman qualifies, though he avoided hyperbole and embraced doubt as part of his credo: “I’d rather have doubt and be uncertain, than be certain and wrong.”
 
But then you have this quote from his good friend and collaborator, Freeman Dyson:

Thirty-one years ago, Dick Feynman told me about his ‘sum over histories’ version of quantum mechanics. ‘The electron does anything it likes’, he said. ‘It goes in any direction at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function.’ I said, ‘You’re crazy.’ But he wasn’t.
 
In fact, his crazy idea led him to a Nobel Prize. That exception aside, most radical ideas are either stillborn or yet to bear fruit, and that includes mine. No, I don’t compare myself to Feynman – I’m not even a physicist – and the truth is I’m unsure if I even have an original idea to begin with, radical or otherwise. I just read a lot of books by people much smarter than me, and cobble together a philosophical approach that I hope is consistent, even if sometimes unconventional. My only consolation is that I’m not alone. Most, if not all, of the people smarter than me also hold unconventional ideas.
 
Recently, I re-read Robert M. Pirsig’s iconoclastic book, Zen and the Art of Motorcycle Maintenance, which I originally read in the late 70s or early 80s, so within a decade of its publication (1974). It wasn’t how I remembered it, not that I remembered much at all, except it had a huge impact on a lot of people who would never normally read a book that was mostly about philosophy, albeit disguised as a road-trip. I think it keyed into a zeitgeist at the time, where people were questioning everything. You might say that was more the 60s than the 70s, but it was nearly all written in the late 60s, so yes, the same zeitgeist, for those of us who lived through it.
 
Its relevance to this post is that Pirsig had some radical ideas of his own – at least, radical to me and to virtually anyone with a science background. I’ll give you a flavour with some selective quotes. But first some context: the story’s protagonist, who we assume is Pirsig himself, telling the story in first-person, is having a discussion with his fellow travellers, a husband and wife, who have their own motorcycle (Pirsig is travelling with his teenage son as pillion), so there are 2 motorcycles and 4 companions for at least part of the journey.
 
Pirsig refers to a time (in Western culture) when ghosts were considered a normal part of life. But he then introduces his iconoclastic idea that we have our own ghosts.
 
Modern man has his own ghosts and spirits too, you know.
The laws of physics and logic… the number system… the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.

 
Then he specifically cites the law of gravity, saying provocatively:
 
The law of gravity and gravity itself did not exist before Isaac Newton. No other conclusion makes sense.
And what that means, is that the law of gravity exists nowhere except in people’s heads! It’s a ghost! We are all of us very arrogant and conceited about running down other people’s ghosts but just as ignorant and barbaric and superstitious about our own.
Why does everybody believe in the law of gravity then?
Mass hypnosis. In a very orthodox form known as “education”.

 
He then goes from the specific to the general:
 
Laws of nature are human inventions, like ghosts. Laws of logic, of mathematics are also human inventions, like ghosts. The whole blessed thing is a human invention, including the idea it isn’t a human invention. (His emphasis)
 
And this is philosophy in action: someone challenges one of your deeply held beliefs, which forces you to defend it. Of course, I’ve argued the exact opposite, claiming that ‘in the beginning there was logic’. And it occurred to me right then that this, in itself, is a radical idea, and possibly one that no one else holds. So, one person’s radical idea can be the antithesis of someone else’s radical idea.
 
Then there is this, which I believe holds the key to our disparate points of view:
 
We believe the disembodied 'words' of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words. They were always there, even when they applied to nothing. Gradually the world came into being and then they applied to it. In fact, those words themselves were what formed the world. (again, his emphasis)
 
Note his emphasis on 'words', as if they alone make some phenomenon physically manifest.
 
My response: don’t confuse or conflate the language one uses to describe some physical entity, phenomenon or manifestation with what it describes. The natural laws, including gravity, are mathematical in nature, obeying sometimes abstruse and esoteric mathematical relationships that we have uncovered over eons of time – but that doesn’t mean they only came into existence when we discovered them and created the language to describe them. Mathematical notation only exists in the mind, correct, including the number system we adopt, but the mathematical relationships that the notation describes exist independently of mind, in the same way that nature’s laws do.
 
John Barrow, cosmologist and Fellow of the Royal Society, made the following point about the mathematical ‘laws’ we formulated to describe the first moments of the Universe’s genesis (Pi in the Sky, 1992).
 
Specifically, he says our mathematical theories describing the first three minutes of the Universe predict specific ratios of the earliest ‘heavier’ elements: deuterium, 2 isotopes of helium, and lithium – in relative abundances of roughly 1/1000, 1/1000, 22% and 1/100,000,000 respectively – with the remainder (roughly 78%) being hydrogen. And this has been confirmed by astronomical observations. He then makes the following salient point:



It confirms that the mathematical notions that we employ here and now apply to the state of the Universe during the first three minutes of its expansion history at which time there existed no mathematicians… This offers strong support for the belief that the mathematical properties that are necessary to arrive at a detailed understanding of events during those first few minutes of the early Universe exist independently of the presence of minds to appreciate them.
 
As you can see, this effectively repudiates Pirsig’s argument; but to be fair to Pirsig, Barrow wrote this almost 2 decades after Pirsig’s book.
 
In the same vein, Pirsig then goes on to discuss Poincare’s Foundations of Science (which I haven’t read), specifically talking about Euclid’s famous fifth postulate concerning parallel lines never meeting, and how it created problems because it couldn’t be derived from more basic axioms and yet didn’t, of itself, function comfortably as an axiom. Euclid himself seems to have been wary of it, avoiding its use for as long as he could – the first 28 propositions of the Elements don’t rely on it.
 
It was only in the 19th Century, with the advent of Riemannian and other non-Euclidean geometries on curved surfaces, that this was resolved. According to Pirsig, it led Poincare to question the very nature of axioms.
 
Are they synthetic a priori judgements, as Kant said? That is, do they exist as a fixed part of man’s consciousness, independently of experience and uncreated by experience? Poincare thought not…
Should we therefore conclude that the axioms of geometry are experimental verities? Poincare didn’t think that was so either…
Poincare concluded that the axioms of geometry are conventions, our choice among all possible conventions is guided by experimental facts, but it remains free and is limited only by the necessity of avoiding all contradiction.

 
I have my own view on this, but it’s worth seeing where Pirsig goes with it:
 
Then, having identified the nature of geometric axioms, [Poincare] turned to the question, Is Euclidean geometry true or is Riemann geometry true?
He answered, The question has no meaning.
[One might] as well ask whether the metric system is true and the avoirdupois system is false; whether Cartesian coordinates are true and polar coordinates are false. One geometry can not be more true than another; it can only be more convenient. Geometry is not true, it is advantageous.
 
I think this is a false analogy, because the adoption of a system of measurement (i.e. units) and even the adoption of which base arithmetic one uses (decimal, binary, hexadecimal being the most common) are all conventions.
 
So why wouldn’t I say the same about axioms? Pirsig and Poincare are right inasmuch as both Euclidean and Riemannian geometry are true – which one applies depends on the topology one is describing. Both are used to describe physical phenomena. In fact, in a twist that Pirsig probably wasn’t aware of, Einstein used Riemannian geometry to describe gravity in a way that Newton could never have envisaged, because Newton only had Euclidean geometry at his disposal. Einstein formulated a mathematical expression of gravity that is dependent on the geometry of spacetime, and it has been empirically verified to explain phenomena that Newton’s theory couldn’t. Of course, there are also limits to what Einstein’s equations can explain, so there are more mathematical laws still to uncover.
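To give a concrete sense of the difference (a standard textbook example, not Pirsig’s): on a sphere of radius R, the angles of a triangle add up to more than 180°, the excess being proportional to the triangle’s area A:

angle sum = 180° + (A/R²) × (180°/π)

On a flat Euclidean plane, R is effectively infinite and the excess vanishes; on the sphere, ‘straight lines’ (great circles) always meet, so the fifth postulate fails. Neither geometry is ‘more true’; each is true of its own surface.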
 
But where Pirsig states that we adopt the axiom that is convenient, I contend that we adopt the axiom that is necessary, because axioms inherently expand the area of mathematics we are investigating. This is a consequence of Godel’s Incompleteness Theorem, which states that any consistent, axiom-based formal system of mathematics (rich enough to include arithmetic) contains true statements that it cannot prove. Godel himself pointed out that the resolution lies in expanding the system by adopting further axioms. The expansion of Euclidean to non-Euclidean geometry is a case in point. The example I like to give is the adoption of √-1 = i, which gave us complex algebra and the means to mathematically describe quantum mechanics. In both cases, the new axioms allowed us to solve problems that had hitherto been impossible to solve. So it’s not just a convenience but a necessity.
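To make that concrete: the equation x² + 1 = 0 has no solution among the real numbers, but adopting i² = −1 immediately gives it two:

x² + 1 = 0 → x = ±√-1 = ±i

And the payoff is general: the fundamental theorem of algebra tells us that every polynomial of degree n has exactly n roots, provided we count them in the complex plane. The expanded system solves what the old one couldn’t.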
 
I know I’ve belaboured a point, but both of these – non-Euclidean geometry and complex algebra – were at one time radical ideas in the mathematical world that ultimately led to radical ideas in the scientific world: general relativity and quantum mechanics. Are they ghosts? Perhaps ghost is an apt metaphor, given that they appear timeless and have outlived their discoverers, not to mention the rest of us. Most physicists and mathematicians tacitly believe that they not only continue to exist beyond us, but existed prior to us, and possibly prior to the Universe itself.
 
I will briefly mention another radical idea, which I borrowed from Schrodinger but drew conclusions from that he didn’t formulate: that consciousness exists in a constant present, and hence creates the psychological experience of the flow of time, because everything else becomes the past as soon as it happens. I contend that only consciousness provides the reference point for past, present and future that we all take for granted.

15 October 2023

What is your philosophy of life and why?

This was a question I answered on Quora, and, without specifically intending to, I brought together 2 apparently unrelated topics. The reason I discuss language is that it’s so intrinsic to our identity, not only as a species, but as individuals within our species. I’ve written an earlier post on language (in response to a Philosophy Now question-of-the-month), which has a different focus, and I deliberately avoided referencing it here.
 
A ‘philosophy of life’ can be represented in many ways, but my perspective is within the context of relationships, in all their variety and manifestations. It also includes a recurring theme of mine.



First of all, what does one mean by ‘philosophy of life’? For some people, it means a religious or cultural way-of-life. For others it might mean a category of philosophy, like post-modernism or existentialism or logical positivism.
 
For me, it means a philosophy on how I should live, and on how I both look at and interact with the world. This is dependent not only on the intrinsic beliefs I grew up with, but also on how I conduct myself professionally and socially. So it’s something that has evolved over time.
 
I think that almost all aspects of our lives are dependent on our interactions with others, which start right from when we are born and really only end when we die. And the thing is that everything we do, including all our failures and successes, occurs in this context.
 
Just to underline the significance of this dependence: we all think in a language, and we all gain our language from our milieu at an age before we can think rationally and critically, especially compared to when we mature. In fact, language is analogous to software that gets downloaded from generation to generation, so that knowledge can also be passed on and accumulated over the ages, which has given rise to civilisations and disciplines like science, mathematics and art.
 
This all sounds off-topic, but it’s core to who we are and it’s what distinguishes us from other creatures. Language is also key to our relationships with others, both socially and professionally. But I take it further, because I’m a storyteller and language is the medium I use to create a world inside your head, populated by characters who feel like real people and who interact in ways we find believable. More than any other activity, this illustrates how powerful language is.
 
But it’s the necessity of relationships in all their manifestations that determines how one lives one’s life. As a consequence, my philosophy of life centres around one core value and that is trust. Without trust, I believe I am of no value. But more than that, trust is the foundational value upon which a society either flourishes or devolves into a state of oppression with its antithesis, rebellion.

 

14 January 2023

Why do we read?

This is almost the same title as that of a book I bought recently (Why We Read), containing 70 short essays on the subject, featuring scholars of all stripes: historians, philosophers and, of course, authors. It even includes scientists – Paul Davies, Richard Dawkins and Carlo Rovelli being 3 I’m familiar with.
 
One really can’t overstate the importance of the written word because, oral histories aside, it allows us to extend memories across generations and accumulate knowledge over centuries, which has led to the civilisations and technologies we all take for granted. By ‘we’, I mean anyone reading this post.
 
Many of the essayists write from their personal experiences and I’ll do the same. The book, edited by Josephine Greywoode and published by Penguin, specifically says on the cover in small print: 70 Writers on Non-Fiction; yet many couldn’t help but discuss fiction as well.
 
And books are generally divided between fiction and non-fiction, which I believe we read for different reasons, though I wouldn’t necessarily consider one less important than the other. I also write fiction and non-fiction, so I have a particular view on this. Basically, I read non-fiction in order to learn, and I read fiction for escapism. Both started early for me, and I believe the motivation hasn’t changed.
 
I started reading extra-curricular books from about the age of 7 or 8, mostly involving creatures, and I even asked for an encyclopaedia for Christmas at around that time, which I read enthusiastically. I devoured non-fiction books, especially if they dealt with the natural world. But at the same time, I read comics – remembering that we didn’t have TV at that time; it was only just beginning to emerge.
 
I think one of the reasons that boys read less fiction than girls these days is that comics have effectively disappeared, being replaced by video games. And the modern comics that I have seen don’t even contain a complete narrative. Nevertheless, there are graphic novels that I consider brilliant – Neil Gaiman’s Sandman series and Hayao Miyazaki’s Nausicaä of the Valley of the Wind being standouts. Watchmen by Alan Moore also deserves a mention.
 
So the escapism also started early for me, in the world of superhero comics, and I started writing my own scripts and drawing my own characters pre-high school.
 
One of the essayists in the collection, Niall Ferguson (author of Doom), starts off by challenging a modern paradigm (or is it a meme?) that we live in a ‘simulation’, citing Oxford philosopher Nick Bostrom, writing in the Philosophical Quarterly in 2003. Ferguson makes the point that reading fiction is akin to immersing the mind in a simulation (my phrasing, not his).
 
In fact, a dream is very much like a simulation, and, as I’ve often said, the language of stories is the language of dreams. But here’s the thing: the motivation for writing fiction, for me, is the same as the motivation for reading it – escapism. Whether reading or writing, you enter a world that only exists inside your head. The ultimate solipsism.

And this surely is a miracle of written language: that we can conjure a world with characters who feel real and elicit emotional responses, while we follow their exploits, failures, love lives and dilemmas. It takes empathy to read a novel, and studies have suggested that people’s empathy increases after they read fiction. You engage with the characters and put yourself in their shoes. It’s one of the reasons we read.
 
 
Addendum: I would recommend the book, by the way, which contains better essays than mine, all with disparate, insightful perspectives.
 

07 September 2022

Ontology and epistemology; the twin pillars of philosophy

I remember from my introduction to formal philosophy that there were 5 branches: ontology, epistemology, logic, aesthetics and ethics. Logic is arguably subsumed under mathematics, which has a connection with ontology and epistemology through physics; and ethics is part of all our lives, from politics to education to social and work-related relations to how one should individually live. Aesthetics is like an orphan in this company, yet art is imbued in all cultures in so many ways that it is unavoidable.
 
However, if you read about Western philosophy, the focus is often on epistemology and its close relation to, if not utter dependence on, ontology. Why dependence? Because you can’t have knowledge of something without inferring its existence, even if that existence is purely abstract.
 
There are so many facets to this that it’s difficult to know where to start, but I will start with Kant, because he argued that we can never know ‘the-thing-in-itself’, only our perception of it – which, in a nutshell, is the difference between ontology and epistemology.
 
We need some definitions, and ontology is dictionary defined as the ‘nature of being’, while epistemology is ‘theory of knowledge’, and with these definitions, one can see straightaway the relationship, and Kant’s distillation of it.
 
Of course, one can also see how science becomes involved, because science, at its core, is an epistemological endeavour. In reading and researching this topic, I’ve come to the conclusion that, though science and philosophy have common origins in Western scholarship, going back to Plato, they’ve gone down different paths.
 
If one looks at the last century, which included the ‘golden age of physics’, in parallel with the dominant philosophical paradigm, heavily influenced, if not initiated, by Wittgenstein, we see that the difference can be definitively understood in terms of language. Wittgenstein effectively redefined epistemology as how we frame the world with language, while science, and physics in particular, frames the world in mathematics. I’ll return to this fundamental distinction later.
 
In my last post, I went to some lengths to argue that a fundamental assumption among scientists is that there is an ‘objective reality’. By this, I mean that they generally don’t believe in ‘idealism’ (as Donald Hoffman does), which is the belief that objects don’t exist when you don’t perceive them (Hoffman describes it as the same experience as using virtual-reality goggles). As I’ve pointed out before, this is what we all experience when we dream, which I contend is different to the experience of our collective waking lives. It’s the word ‘collective’ that is the key to understanding the difference – we share waking experiences in a way that is impossible to corroborate in a dream.
 
However, I’ve been reading a lot of posts on Quora by physicists Viktor T Toth and Mark John Fernee (both of whom I’ve cited before and both of whom I have a lot of respect for). And they both point out that much of what we call reality is observer dependent, which makes me think of Kant.
 
Fernee, when discussing quantum mechanics (QM), keeps coming back to the ‘measurement problem’ and the role of the observer, and how hard it is to avoid. He discusses the famous ‘Wigner’s friend’ thought experiment, an extension of the even more famous Schrodinger’s cat thought experiment, which implies you have the cat in a superposition of 2 states: dead and alive. Eugene Wigner developed a scenario whereby 2 experimenters could get contradictory results. Its relevance to this topic is that the ontology is completely dependent on the observer. My understanding of the scenario is that it subverts the distinction between QM and classical physics.
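For what it’s worth, in the standard Dirac notation the cat’s predicament before anyone looks is written as a single superposed state:

|cat⟩ = (1/√2)(|alive⟩ + |dead⟩)

Wigner’s friend simply stacks a second observer on top: the friend inside the lab records a definite outcome, while Wigner, outside, can still describe the entire lab (friend included) as being in a superposition; hence the apparently contradictory accounts.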
 
I’ve made the point before that a photon travelling across the Universe from some place and time closer to its beginning (like the CMBR) is always in the future of whatever it interacts with – like, for example, an ‘observer’ on Earth. The point I’d make is that billions of years of cosmological time have passed, so in another sense the photon comes from the observer’s past, which became classical a long time ago. For the photon, time is always zero, but it links the past to the present across almost the entire lifetime of the observable universe.
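The ‘zero time’ claim is standard special relativity, not my conjecture: the proper time τ along any path satisfies

(cΔτ)² = (cΔt)² − (Δx)²

and for light, Δx = cΔt, so Δτ = 0, no matter how far the photon travels.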
 
Quantum mechanics, more than any other field, demonstrates the difference between ontology and epistemology, and this was discussed in another post by Fernee. Epistemologically, QM is described mathematically, and the description is so successful that we can ignore what it means ontologically. This has led to diverse interpretations, from the many worlds interpretation (MWI) to so-called ‘hidden variables’ to the well known ‘Copenhagen interpretation’.
 
Fernee, in particular, discusses MWI – not that he’s an advocate, but because it represents an ontology that no one can actually observe. Both Toth and Fernee point out that the wave function, which arguably lies at the heart of QM, is never observed, and neither is its ‘decoherence’ (the measurement problem by another name), which leads many to contend that it’s a mathematical fiction. I argue that it exists in the future, and that only classical physics is actually observed. QM deals with probabilities, which is purely epistemological. After the ‘observation’, Schrodinger’s equation, which describes the wave function, ceases to have any meaning. One is in the future, and the observation becomes the past as soon as it happens.
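For reference, the equation in question, and the rule that turns its solution into probabilities (the Born rule), are, in their simplest forms:

iħ ∂ψ/∂t = Ĥψ (Schrodinger’s equation)
P = |ψ|² (the Born rule)

The point being that ψ itself is never measured; only the probabilities P ever make contact with observation.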
 
I don’t know enough about it, but I think entanglement is the key to its ontology. Fernee points out in another post that entanglement is to do with conservation, whether it be the conservation of momentum or, more usually, the conservation of spin. According to Bell’s Theorem, it leads to what is called non-locality, which means it appears to break with relativistic physics. I say ‘appears’, because it’s well known that entanglement can’t be used to send information faster than light; so, in reality, it doesn’t break relativity. Nevertheless, it led to Einstein’s famous quip about ‘spooky action at a distance’ (which is what non-locality means in layperson’s terms).
 
But entanglement is tied to the wave function’s decoherence, because that’s when it becomes manifest. It’s crucial to appreciate that entangled particles are described by the same wave function, and that’s the inherent connection. It led Schrodinger to claim that entanglement is THE defining feature of QM; in effect, it’s what separates QM from classical physics.
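A minimal example of what ‘described by the same wave function’ means: two spin-half particles with zero total spin form the so-called singlet state,

ψ = (1/√2)(|↑↓⟩ − |↓↑⟩)

Neither particle has a definite spin of its own; measure one as ‘up’ and the other must be ‘down’, whatever the distance between them, which is the conservation Fernee refers to.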
 
I think QM is the best demonstration of Kant’s prescient claim that we can never know the-thing-in-itself, but only our perception of it. QM is a purely epistemological theory – the ontology it describes still eludes us.
 
But relativity theory also suggests that reality is observer dependent. Toth points out that even the number of particles detected in some scenarios is dependent on the frame of reference of the observer. This has led at least one physicist (on Quora) to argue that the word ‘particle’ should be banned from all physics textbooks – there are only fields. (Toth is an expert on quantum field theory, QFT, and argues that particles are a manifestation of QFT.) I won’t elaborate, as I don’t really know enough, but what’s relevant to this topic is that time and space are observer dependent in relativity, or appear to be.
 
In a not-so-recent post, I described how different ‘observers’ could hypothetically ‘see’ the same event happening hundreds of years apart, just because they are walking across a street in opposite directions. I use quotation marks because it’s all postulated mathematically, and, in fact, relativity theory prevents them from observing anything outside their past and future light cones. I actually discussed this with Fernee, and he pointed out that it’s to do with causality. Where there is no causal relation between events, we can’t determine an objective sequence, let alone one relative to a time frame independent of us (like a cosmic time frame). And this is where I personally have an issue, because, even though we can’t observe it or determine it, I argue that there is still an objective reality independent of us.
 
In relativity there is something called proper time (τ), which is the time in the frame of reference of the observer. If spacetime is invariant, then it would logically follow that where you have proper time you should have an analogous ‘proper space’, yet I’ve never come across it. I also think there is a ‘true simultaneity’, but no one else does, so maybe I’m wrong.
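For reference, the invariant in special relativity is the spacetime interval, from which proper time is derived:

s² = (cΔt)² − (Δx)², with τ = s/c for timelike separations

For spacelike separations, textbooks do define the corresponding quantity, usually called ‘proper length’, though it receives far less attention than proper time.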
 
There is, however, something called the Planck length, and someone asked Toth if this changed relativistically with the Lorentz transformation, like all other ‘rulers’ in relativistic physics. He said that a version of relativity was formulated that made the Planck length invariant, but it created problems and didn’t agree with experimental data. What I find interesting about this is that Planck’s constant, h, literally determines the size of atoms, and one doesn’t expect atoms to change size relativistically (but maybe they do). The point I’d make is that these changes are observer dependent, and I’d argue that there is a Planck length that is observer independent, which is the case when there is no observer.
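For context, the Planck length is built from the very constants at issue (a standard definition):

l_P = √(ħG/c³) ≈ 1.6 × 10⁻³⁵ metres

Because it combines ħ (quantum mechanics), G (gravity) and c (relativity), it marks the scale at which all three theories collide, which is why its behaviour under the Lorentz transformation matters.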
 
This has become a long-winded way of explaining how 20th Century science has effectively taken this discussion away from philosophy, yet this is rarely acknowledged by philosophers, who take refuge in Wittgenstein’s conclusion that language effectively determines what we can understand of the world, because we think in a language and that limits what we can conceptualise. And he’s right, until we come up with new concepts requiring new language. Everything I’ve just discussed was completely unknown more than 120 years ago; we had no language for it, let alone the concepts.
 
Some years ago, I reviewed a book by Don Cupitt titled, Above Us Only Sky, which was really about religion in a secular world. But, in it, Cupitt repeatedly argued that things only have meaning when they are ‘language-wrapped’ (his term) and I now realise that he was echoing Wittgenstein. However, there is a context in which language is magical, and that is when it creates a world inside your head, called a story.
 
I’ve been reading Bryan Magee’s The Great Philosophers, based on his 1987 BBC television series of interviews with various academics, which started with Plato and ended with Wittgenstein. He discussed Plato with Myles Burnyeat, then Professor of Ancient Philosophy at Cambridge. Naturally, they discussed Socrates, the famous dialogues and the more famous Republic, but towards the end they turned to the Timaeus, which was a work on ‘mathematical science’, according to Burnyeat, that influenced Aristotle and Ptolemy.
 
It's worth quoting their last exchange verbatim:
 
Magee: For us in the twentieth century there is something peculiarly contemporary about the fact that, in the programme it puts forward for acquiring an understanding of the world, Plato’s philosophy gives a central role to mathematical physics.
 
Burnyeat: Yes. What Plato aspired to do, modern science has actually done. And so there is a sort of innate sympathy between the two which does not hold for Aristotle’s philosophy. (My emphasis)


Addendum: This is a very good exposition on the 'measurement problem' by Sabine Hossenfelder, which also provides a very good synopsis of the wave function (ψ), Schrodinger's equation and the Born rule.

22 May 2022

We are metaphysical animals

I’m reading a book called Metaphysical Animals (How Four Women Brought Philosophy Back To Life). The four women were Mary Midgley, Iris Murdoch, Philippa Foot and Elizabeth Anscombe – the first two I’m acquainted with; the last two, not. They were all at Oxford during the War (WW2), at a time when women were barely tolerated in academia and had to be ‘chaperoned’ to attend lectures; also a time when some women students ended up marrying their tutors.

The book is authored by Clare Mac Cumhaill and Rachael Wiseman, both philosophy lecturers, who became friends with Mary Midgley in her final years (Mary died in 2018, aged 99). The book is part biography of all 4 women and part discussion of the philosophical ideas they explored.

 

Bringing ‘philosophy back to life’ is an allusion to the response (backlash is too strong a word) to the empiricism, logical positivism and general rejection of metaphysics that had taken hold of English philosophy, also known as analytical philosophy. Iris spent time in postwar Paris where she was heavily influenced by existentialism and Jean-Paul Sartre, in particular, whom she met and conversed with. 

 

If I were to categorise myself, I’d say I’m a combination of analytical philosopher and existentialist, which I suspect many would see as a contradiction. But this isn’t deliberate on my part – more a consequence of pursuing my interests, which are science on one hand (with a liberal dose of mathematical Platonism) and how to live a ‘good life’ (to paraphrase Aristotle) on the other.

 

Iris was intellectually seduced by Sartre’s exhortation: “Man is nothing else but that which he makes of himself.” But as her own love life fell apart, along with all its inherent dreams and promises, she found Sartre’s implicit doctrine, of standing solitarily and independently of one’s milieu, difficult to put into practice. I’m not sure if Iris was already a budding novelist at this stage of her life, but anyone who writes fiction knows that this is what it’s all about: the protagonist sailing their lone ship on a sea full of icebergs and other vessels, all of which are outside their control. Life, like the best fiction, is an interaction between the individual and everyone else they meet. Your moral compass, in particular, is often tested. Existentialism can be seen as an attempt to rise above this, but most of us don’t.

 

Not surprisingly, Wittgenstein looms large in many of the pages, and at least one of the women, Elizabeth Anscombe, had significant interaction with him. With Wittgenstein comes an emphasis on language, which has arguably determined the path of philosophy since. I’m not a scholar of Wittgenstein by any stretch of the imagination, but one thing he taught, or that people took from him, was that the meaning we give to words is a consequence of how they are used in ordinary discourse. Language requires a widespread consensus to actually work. It’s something we rarely think about but we all take for granted, otherwise there would be no social discourse or interaction at all. There is an assumption that when I write these words, they have the same meaning for you as they do for me, otherwise I am wasting my time.

 

But there is a way in which language is truly powerful, and I have done this myself. I can write a passage that creates a scene inside your mind complete with characters who interact and can cause you to laugh or cry, or pretty much any other emotion, as if you were present; as if you were in a dream.

 

There are a couple of specific examples in the book which illustrate Wittgenstein’s influence on Elizabeth and how she used his methods in debate. Both are topics I have discussed myself, without knowing of these previous discourses.

 

In 1947, so just after the war, Elizabeth presented a paper to the Cambridge Moral Sciences Club, which she began with the following disclosure:

 

Everywhere in this paper I have imitated Dr Wittgenstein’s ideas and methods of discussion. The best that I have written is a weak copy of some features of the original, and its value depends only on my capacity to understand and use Dr Wittgenstein’s work.

 

The subject of her talk was whether one can truly talk about the past, which goes back to the pre-Socratic philosopher, Parmenides. In her own words, paraphrasing Parmenides, ‘to speak of something past’ would then be to ‘point our thought’ at ‘something there’, but out of reach. Bringing Wittgenstein into the discussion, she claimed that Parmenides’ specific paradox about the past arose ‘from the way that thought and language connect to the world’.

 

We apply language to objects by naming them, but, in the case of the past, the objects no longer exist. She attempts to resolve this epistemological dilemma by discussing the nature of time as we experience it, which is like a series of pictures that move on a timeline while we stay in the present. This is analogous to my analysis that everything we observe becomes the past as soon as it happens, which is exemplified every time someone takes a photo, but we remain in the present – the time for us is always ‘now’.

 

She explains that the past is a collective recollection, recorded in documents and photos, so it’s dependent on a shared memory. I would say that this is what separates our recollection of a real event from a dream, which is solipsistic and not shared with anyone else. But it doesn’t explain why the past appears fixed and the future unknown, which she also attempted to address. I don’t think this can be addressed without discussing physics.

 

Most physicists will tell you that the asymmetry between past and future can only be explained by the second law of thermodynamics, but I disagree. I think it is described, if not explained, by quantum mechanics (QM), where the future is probabilistic, with an infinitude of possible paths, while classical physics has a probability of ONE, because it’s already happened and been ‘observed’. In QM, the wave function that gives the probabilities and superpositional states is NEVER observed. The alternative is that all the futures are realised in alternative universes. Of course, Elizabeth Anscombe would have known nothing of these conjectures.

 

But I would make the point that language alone does not resolve this. Language can only describe these paradoxes and dilemmas but not explain them.

 

Of course, there is a psychological perspective to this, which many people, including physicists, claim gives our only sense of time passing. According to them, it’s all fixed – past, present and future – and our minds create the distinction. I think our minds create the distinction because only consciousness creates a reference point for the present. Everything non-sentient is in a causal relationship that doesn’t sense time. Photons of light, for example, exist in zero time, yet they determine causality. Only light separates everything in time as well as space. I’ve gone off-topic.

 

Elizabeth touched on the psychological aspect, possibly unintentionally (I’ve never read her paper, so I could be wrong): that our memories of the past are actually imagined. We use the same part of the brain to imagine the past as we do to imagine the future, but again, Elizabeth wouldn’t have known this. Nevertheless, she understood that our only knowledge of the past is a thought that we turn into language in order to describe it.

 

The other point I wish to discuss is a famous debate she had with C.S. Lewis. This is quite something, because back then C.S. Lewis was a formidable intellectual figure. Elizabeth’s challenge was all the more remarkable because Lewis’s argument appeared on the surface to be very sound. Lewis argued that the ‘naturalist’ position was self-refuting if it was dependent on ‘reason’, because reason by definition (not his terminology) is based on the premise of cause and effect, and human reason has no cause. That’s a simplification; nevertheless, it’s the gist of it. Elizabeth’s retort:

 

What I shall discuss is this argument’s central claim that a belief in the validity of reason is inconsistent with the idea that human thought can be fully explained as the product of non-rational causes.

 

In effect, she argued that reason is what humans do perfectly naturally, even if the underlying ‘cause’ is unknown. Not knowing the cause makes the reasoning neither irrational nor unnatural. Elizabeth specifically cited the language that Lewis used, accusing him of confusing the concepts of “reason”, “cause” and “explanation”.

 

My argument would be subtly different. For a start, I would contend that by ‘reason’ he meant ‘logic’, because drawing conclusions based on cause and effect is logic, even if the causal relations under consideration are assumed or implied rather than observed. And here I contend that logic is not a ‘thing’ – it’s not an entity; it’s an action, something we do. In the modern age, machines perform logic, sometimes better than we do.

 

Secondly, I would ask Lewis: does he think reason only happens in humans and not in other animals? I would contend that animals also use logic, though without language. I imagine they’d visualise their logic rather than express it in vocal calls. The difference with humans is that we can perform logic at a whole different level, but the underpinnings in our brains are surely the same. Elizabeth was right: not knowing its physical origins does not make it irrational; they are separate issues.

 

Elizabeth had a strong connection to Wittgenstein right up to his death. She worked with him on a translation and edit of Philosophical Investigations, and he bequeathed her a third of his estate and a third of his copyright.

 

It’s apparent from Iris’s diaries and other sources that Elizabeth and Iris fell in love at one point in their friendship, which caused them both a lot of angst and guilt because of their Catholicism. Despite marrying, Iris later had an affair with Pip (Philippa).

 

Despite my discussion of just 2 of Elizabeth’s arguments, I don’t have the level of erudition necessary to address most of the topics that these 4 philosophers published in. Just reading the 4-page afterword, it’s clear that I haven’t even scratched the surface of what they achieved. Nevertheless, I have a philosophical perspective that I think finds some resonance with their mutual ideas.

 

I’ve consistently contended that the starting point for my philosophy is that for each of us individually, there is an inner and outer world. It even dictates the way I approach fiction. 

 

In the latest issue of Philosophy Now (Issue 149, April/May 2022), Richard Oxenberg, who teaches philosophy at Endicott College in Beverly, Massachusetts, wrote an article titled What Is Truth?, wherein he describes an interaction between 2 people from a purely biological and mechanical perspective, and asks, ‘What is missing?’ Well, even though he doesn’t spell it out, what is missing is the emotional aspect. Our inner world is dominated by emotional content, and one suspects that this is not unique to humans. I’m pretty sure that other creatures feel emotions like fear, affection and attachment. What’s more, I contend that this is what separates, not just us, but the majority of the animal kingdom, from artificial intelligence.

 

But humans are unique, even among other creatures, in our ability to create an inner world every bit as rich as the one we inhabit. And this creates a dichotomy that is reflected in our division of arts and science. There is a passage on page 230, where the authors discuss R.G. Collingwood’s influence on Mary and provide an unexpected definition.

 

Poetry, art, religion, history, literature and comedy are all metaphysical tools. They are how metaphysical animals explore, discover and describe what is real (and beautiful and good). (My emphasis.)

 

I thought this summed up what they mean by their coinage, metaphysical animals, which titles the book and arguably describes humanity’s most unique quality. Descriptions of metaphysics vary and elude precise definition, but the word ‘transcendent’ comes to mind. By this I mean knowledge or experience that transcends the physical world, most evident in art, music and storytelling, but also including mathematics in my Platonic worldview.


 

Footnote: I should point out that certain chapters in the book give considerable emphasis to moral philosophy, which I haven’t even touched on, so another reader might well discuss other perspectives.