Paul P. Mealing


Sunday, 2 June 2024

Radical ideas

 It’s hard to think of anyone I admire in physics and philosophy who doesn’t have at least one radical idea. Even Richard Feynman had his, though he avoided hyperbole and embraced doubt as part of his credo: "I’d rather have doubt and be uncertain, than be certain and wrong."
 
But then you have this quote from his good friend and collaborator, Freeman Dyson:

Thirty-one years ago, Dick Feynman told me about his ‘sum over histories’ version of quantum mechanics. ‘The electron does anything it likes’, he said. ‘It goes in any direction at any speed, forward and backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function.’ I said, ‘You’re crazy.’ But he wasn’t.
 
In fact, his crazy idea led him to a Nobel Prize. That exception aside, most radical ideas are either stillborn or yet to bear fruit, and that includes mine. No, I don’t compare myself to Feynman – I’m not even a physicist – and the truth is I’m unsure if I even have an original idea to begin with, radical or otherwise. I just read a lot of books by people much smarter than me, and cobble together a philosophical approach that I hope is consistent, even if sometimes unconventional. My only consolation is that I’m not alone. Most, if not all, of the people smarter than me also hold unconventional ideas.
 
Recently, I re-read Robert M. Pirsig’s iconoclastic book, Zen and the Art of Motorcycle Maintenance, which I originally read in the late 70s or early 80s, so within a decade of its publication (1974). It wasn’t how I remembered it, not that I remembered much at all, except it had a huge impact on a lot of people who would never normally read a book that was mostly about philosophy, albeit disguised as a road-trip. I think it keyed into a zeitgeist at the time, where people were questioning everything. You might say that was more the 60s than the 70s, but it was nearly all written in the late 60s, so yes, the same zeitgeist, for those of us who lived through it.
 
Its relevance to this post is that Pirsig had some radical ideas of his own – at least, radical to me and to virtually anyone with a science background. I’ll give you a flavour with some selective quotes. But first some context: the story’s protagonist, who we assume is Pirsig himself telling the story in first person, is having a discussion with his fellow travellers, a husband and wife who have their own motorcycle (Pirsig is travelling with his teenage son as pillion), so there are 2 motorcycles and 4 companions for at least part of the journey.
 
Pirsig refers to a time (in Western culture) when ghosts were considered a normal part of life, but then introduces his iconoclastic idea that we have our own ghosts.
 
Modern man has his own ghosts and spirits too, you know.
The laws of physics and logic… the number system… the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.

 
Then he specifically cites the law of gravity, saying provocatively:
 
The law of gravity and gravity itself did not exist before Isaac Newton. No other conclusion makes sense.
And what that means, is that the law of gravity exists nowhere except in people’s heads! It’s a ghost! We are all of us very arrogant and conceited about running down other people’s ghosts but just as ignorant and barbaric and superstitious about our own.
Why does everybody believe in the law of gravity then?
Mass hypnosis. In a very orthodox form known as “education”.

 
He then goes from the specific to the general:
 
Laws of nature are human inventions, like ghosts. Laws of logic, of mathematics are also human inventions, like ghosts. The whole blessed thing is a human invention, including the idea it isn’t a human invention. (His emphasis)
 
And this is philosophy in action: someone challenges one of your deeply held beliefs, which forces you to defend it. Of course, I’ve argued the exact opposite, claiming that ‘in the beginning there was logic’. And it occurred to me right then that this, in itself, is a radical idea, and possibly one that no one else holds. So, one person’s radical idea can be the antithesis of someone else’s radical idea.
 
Then there is this, which I believe holds the key to our disparate points of view:
 
We believe the disembodied 'words' of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words. They were always there, even when they applied to nothing. Gradually the world came into being and then they applied to it. In fact, those words themselves were what formed the world. (again, his emphasis)
 
Note his emphasis on 'words', as if they alone make some phenomenon physically manifest.
 
My response: don’t confuse or conflate the language one uses to describe some physical entity, phenomenon or manifestation with what it describes. The natural laws, including gravity, are mathematical in nature, obeying sometimes obtuse and esoteric mathematical relationships which we have uncovered over eons of time – which doesn’t mean they only came into existence when we discovered them and created the language to describe them. Mathematical notation, including the number system we adopt, exists only in the mind, true, but the mathematical relationships that the notation describes exist independently of mind, in the same way that nature’s laws do.
 
John Barrow, cosmologist and Fellow of the Royal Society, made the following point about the mathematical ‘laws’ we formulated to describe the first moments of the Universe’s genesis (Pi in the Sky, 1992).
 
Specifically, he says our mathematical theories describing the first three minutes of the Universe predict specific ratios of the earliest ‘heavier’ elements: deuterium, 2 isotopes of helium, and lithium – which are 1/1000, 1/1000, 22% and 1/100,000,000 respectively – with the remaining (roughly 78%) being hydrogen. And this has been confirmed by astronomical observations. He then makes the following salient point:



It confirms that the mathematical notions that we employ here and now apply to the state of the Universe during the first three minutes of its expansion history at which time there existed no mathematicians… This offers strong support for the belief that the mathematical properties that are necessary to arrive at a detailed understanding of events during those first few minutes of the early Universe exist independently of the presence of minds to appreciate them.
 
As you can see, this effectively refutes Pirsig’s argument; but to be fair to Pirsig, Barrow wrote this almost 2 decades after Pirsig’s book.
 
In the same vein, Pirsig then goes on to discuss Poincare’s Foundations of Science (which I haven’t read), specifically talking about Euclid’s famous fifth postulate concerning parallel lines never meeting, and how it created problems because it couldn’t be derived from more basic axioms and yet didn’t, of itself, seem self-evident in the way an axiom should. Euclid himself appears to have been uneasy about it, avoiding its use for as long as he could in proving his theorems.
 
It was only in the 19th Century, with the advent of Riemann and other non-Euclidean geometries on curved surfaces that this was resolved. According to Pirsig, it led Poincare to question the very nature of axioms.
 
Are they synthetic a priori judgements, as Kant said? That is, do they exist as a fixed part of man’s consciousness, independently of experience and uncreated by experience? Poincare thought not…
Should we therefore conclude that the axioms of geometry are experimental verities? Poincare didn’t think that was so either…
Poincare concluded that the axioms of geometry are conventions, our choice among all possible conventions is guided by experimental facts, but it remains free and is limited only by the necessity of avoiding all contradiction.

 
I have my own view on this, but it’s worth seeing where Pirsig goes with it:
 
Then, having identified the nature of geometric axioms, [Poincare] turned to the question, Is Euclidean geometry true or is Riemann geometry true?
He answered, The question has no meaning.
[One might] as well ask whether the metric system is true and the avoirdupois system is false; whether Cartesian coordinates are true and polar coordinates are false. One geometry can not be more true than another; it can only be more convenient. Geometry is not true, it is advantageous.
 
I think this is a false analogy, because the adoption of a system of measurement (i.e. units) and even the adoption of which base arithmetic one uses (decimal, binary, hexadecimal being the most common) are all conventions.
 
So why wouldn’t I say the same about axioms? Pirsig and Poincare are right in as much as both Euclidean and Riemann geometry are true – which one applies depends on the topology one is describing. They are both used to describe physical phenomena. In fact, in a twist that Pirsig probably wasn’t aware of, Einstein used Riemann geometry to describe gravity in a way that Newton could never have envisaged, because Newton only had Euclidean geometry at his disposal. Einstein formulated a mathematical expression of gravity that is dependent on the geometry of spacetime, and it has been empirically verified to explain phenomena that Newton’s couldn’t. Of course, there are also limits to what Einstein’s equations can explain, so there are more mathematical laws still to uncover.
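 
For the record (my addition, not Pirsig’s), the relationship Einstein found can be written compactly as G_μν = (8πG/c⁴) T_μν: the left-hand side is built entirely from the Riemann curvature of spacetime, while the right-hand side describes its matter and energy content – geometry on one side, physics on the other.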
 
But where Pirsig states that we adopt the axiom that is convenient, I contend that we adopt the axiom that is necessary, because new axioms inherently expand the area of mathematics we are investigating. This is a consequence of Godel’s Incompleteness Theorem, which states there are limits to what any axiom-based, consistent, formal system of mathematics can prove to be true. Godel himself pointed out that the resolution lies in expanding the system by adopting further axioms. The expansion of Euclidean to non-Euclidean geometry is a case in point. The example I like to give is the adoption of √-1 = i, which gave us complex algebra and the means to mathematically describe quantum mechanics. In both cases, the axioms allowed us to solve problems that had hitherto been impossible to solve. So it’s not just a convenience but a necessity.
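 
As a toy illustration (mine, not Pirsig’s or Godel’s), here’s a minimal Python sketch of how admitting i = √-1 immediately enlarges what can be solved, and why it matters for quantum mechanics, where amplitudes are complex and can cancel each other out:

import cmath

# x^2 + 1 = 0 has no real solutions; admitting i = sqrt(-1) resolves it.
root = cmath.sqrt(-1)            # Python writes i as 1j
print(root, root**2)             # 1j (-1+0j)

# Complex numbers are also the currency of quantum mechanics:
# two amplitudes can cancel (interfere), which real probabilities cannot.
a1 = cmath.exp(1j * 0)           # amplitude for one path
a2 = cmath.exp(1j * cmath.pi)    # amplitude for another path, half a cycle out of phase
print(abs(a1 + a2)**2)           # effectively zero: destructive interference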
 
I know I’ve belaboured a point, but both of these – non-Euclidean geometry and complex algebra – were at one time radical ideas in the mathematical world, and they ultimately led to radical ideas in the scientific world: general relativity and quantum mechanics. Are they ghosts? Perhaps ghost is an apt metaphor, given that they appear timeless and have outlived their discoverers, not to mention the rest of us. Most physicists and mathematicians tacitly believe that they not only continue to exist beyond us, but existed prior to us, and possibly the Universe itself.
 
I will briefly mention another radical idea, which I borrowed from Schrodinger, though I drew conclusions that he didn’t formulate: that consciousness exists in a constant present, and hence creates the psychological experience of the flow of time, because everything else becomes the past as soon as it happens. I contend that only consciousness provides a reference point for past, present and future that we all take for granted.

Sunday, 19 May 2024

It all started with Euclid

 I’ve mentioned Euclid before, but this rumination was triggered by a post on Quora that someone wrote about Plato, where they argued, along with another contributor, that Plato is possibly overrated because he got a lot of things wrong, which is true. Nevertheless, as I’ve pointed out in other posts, his Academy was effectively the origin of Western philosophy, science and mathematics. It was actually based on the Pythagorean quadrivium of geometry, arithmetic, astronomy and music.
 
But Plato was also a student and devoted follower of Socrates and the mentor of Aristotle, who in turn mentored Alexander the Great. So Plato was a pivotal historical figure and without his writings, we probably wouldn’t know anything about Socrates. In the same way that, without Paul, we probably wouldn’t know anything about Jesus. (I’m sure a lot of people would find that debatable, but, if so, it’s a debate for another post.)
 
Anyway, in my own comment (on Quora), I mentioned Euclid, who was the Librarian at Alexandria around 300BC, and thus a product of Plato’s school of thought. Euclid wrote The Elements, which I contend is arguably the most important book written in the history of humankind – more important than any religious text, including the Bible, Homer’s Iliad and the Mahabharata – which, I admit, is quite a claim. It's generally acknowledged as the most copied text in the secular world. In fact, according to Wikipedia:
 
It was one of the very earliest mathematical works to be printed after the invention of the printing press and has been estimated to be second only to the Bible in the number of editions published since the first printing in 1482.
 
Euclid was revolutionary in one very significant way: he was able to demonstrate what ‘truth’ was, using pure logic, albeit in a very abstract and narrow field of inquiry, which is mathematics.
 
Before then, and in other cultures, truth was transient and subjective and often prescribed by the gods. But Euclid changed all that, and forever. I find it extraordinary that I was examined on Euclid’s theorems in high school in the 20th Century.
 
And this mathematical insight has become, millennia later, a key ingredient (for want of a better term) in the hunt for truths in the physical world. In the 20th Century, in what has become known as the Golden Age of Physics, the marriage between mathematics and scientific inquiry at all scales, from the cosmic to the infinitesimal, has uncovered deeply held secrets of nature that the Pythagoreans, and Euclid for that matter, could never have dreamed of. Look no further than quantum mechanics (QM) and the General Theory of Relativity (GR). Between them, these 2 iconic developments underpin every theory we currently have in physics, and both rely on mathematics that was pivotal in their development from the outset. In other words, without the mathematics of complex algebra and Riemann geometry respectively, these theories would have been stillborn.
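 
To make that concrete (my gloss, not part of the original claim): Schrodinger’s equation, iħ ∂ψ/∂t = Hψ, has √-1 built into its very structure – remove the i and quantum mechanics as we know it disappears – while Einstein’s field equations equate the Riemann curvature of spacetime with its matter and energy content.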
 
I like to quote Richard Feynman from his book, The Character of Physical Law, in a chapter titled, The Relation of Mathematics to Physics:
 
…what turns out to be true is that the more we investigate, the more laws we find, and the deeper we penetrate nature, the more this disease persists. Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics... Why? I have not the slightest idea. It is only my purpose to tell you about this fact.
 
The strange thing about physics is that for the fundamental laws we still need mathematics.
 
Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.

 
And this has only become more evident since Feynman wrote those words.
 
There was another revolution in the 20th Century, involving Alan Turing, Alonzo Church and Kurt Godel; this time involving mathematics itself. Basically, each of them demonstrated, independently, that some mathematical truths elude proof: some mathematical conjectures cannot be proved within the mathematical system from which they arose. Famous candidates include Riemann’s Hypothesis, involving primes, as well as the Goldbach conjecture (also involving primes) and the twin-prime conjecture. While most mathematicians believe them to be true, they are yet to be proven – which is not the same as being unprovable, but they illustrate the gap between truth and proof. I won’t elaborate on them, as they can easily be looked up.
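 
As an aside (my illustration, not part of the original argument), the Goldbach conjecture shows how numerical evidence and proof come apart. A few lines of Python can verify it for as many even numbers as you have the patience for, but no amount of checking constitutes a proof:

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def goldbach_witness(n):
    # Return a pair of primes summing to the even number n, or None if no pair exists.
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number we test has a witness, but finitely many checks are not a proof.
for n in range(4, 1001, 2):
    assert goldbach_witness(n) is not None

print(goldbach_witness(1000))    # (3, 997)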
 
But there is more: according to Gregory Chaitin, there are infinitely more incomputable Real numbers than computable Real numbers – the computable ones can each be specified by a finite program, so there are only countably many of them, while the Reals as a whole are uncountable – which means that most of mathematics is inaccessible to logic.
 
So, when I say it all started with Euclid, I mean that all the technology and infrastructure we take for granted – which allows me to write this so that virtually anyone anywhere in the world can read it – only exists because Euclid was able to derive ‘truths’ that stood for centuries and ultimately led to this.

Sunday, 5 May 2024

Why you need memory to have free will

 This is so obvious once I explain it to you, you’ll wonder why no one else ever mentions it. I’ve pointed out a number of times before that consciousness exists in a constant present, so the time is always ‘now’ for us. I credit Erwin Schrodinger for providing this insight in his lectures, Mind and Matter, appended to his short tome (an oxymoron), What is Life?
 
A logical consequence is that, without memory, you wouldn’t know you’re conscious. And this has actually happened, where people have been knocked unconscious, then acted as if they were conscious in order to defend themselves, but have no memory of it. It happened to my father in a boxing ring (I didn’t believe him when he first told me) and it happened to a woman security guard (in Sydney) where she shot her assailant after he knocked her out. In both cases, they claimed they had no memory of the incident.
 
And, as I’ve pointed out before, this raises a question: if we can survive an attack without being consciously aware of it, then why did evolution select for consciousness? In other words, we could be automatons. The difference is that we have memory.
 
The brain is effectively a memory storage device, without which we would function quite differently. Perhaps this is the real difference between animals and plants. Perhaps plants are sentient, but without memories they can’t ‘think’. There are different types of memory. There is so-called muscle-memory, whereby when we learn a new skill we don’t have to keep relearning it, and eventually we do it without really thinking about it. Driving a car is an example that most of us are familiar with, but it applies to most sports and the playing of musical instruments. I’ve learned that this applies to cognitive skills as well. For example, I write stories and creating characters is something I do without thinking about it too much.
 
People who suffer from retrograde amnesia (as described by Oliver Sacks in his seminal book, The Man Who Mistook His Wife for a Hat, in the chapter titled, The Lost Mariner) don’t lose their memory of specific skills, or what we call muscle-memory. So you could have muscle-memory and still be an automaton, as I described above.
 
Other types of memory are semantic memory and episodic memory. Semantic memory, which is essential to learning a language, is basically our ability to remember facts, which may or may not require a specific context. Rote learning is just exercising semantic memory, which doesn’t necessarily require a deep understanding of a subject, but that’s another topic.
 
Episodic memory is the one I’m most concerned with here. It’s the ability to recount an event in one’s life – a form of time-travelling we all indulge in from time to time. Unlike a computer memory, it’s not an exact recollection – we reconstruct it – which is why it can change over time and why it doesn’t necessarily agree with someone else’s recollection of the same event. Then there is imagination, which I believe is the key to it all. Apparently, imagination uses the same part of the brain as episodic memory. In effect, we are creating a memory of something that is yet to happen – an attempt to time-travel into the future. And this, I argue, is how free will works.

Philosophers have invented a term called ‘intentionality’, which is not what you might think it is. I’ll give a dictionary definition:
 
The quality of mental states (e.g. thoughts, beliefs, desires, hopes) which consists in their being directed towards some object or state of affairs.
 
Philosophers who write on the topic of consciousness, like Daniel C Dennett and John Searle, like to use the term ‘aboutness’ to describe intentionality, and if you break down the definition I gave above, you might discern what they mean. It’s effectively the ability to direct ‘thoughts… towards some object or state of affairs’. But I see this as either episodic memory or imagination. In other words, the ‘object or state of affairs’ could be historical or yet to happen or pure fantasy. We can imagine events we’ve never experienced, though we may have read or heard about them, and they may not only have happened in another time but also another place – so mental time-travelling.
 
As well as a memory storage device, the brain is also a prediction device – it literally thinks a fraction of a second ahead. I’ve pointed out in another post that the brain creates a model in space and time so we can interact with the real world of space and time, which allows us to survive it. And one of the facets of that model is that it’s actually minutely ahead of the real world, otherwise we wouldn’t even be able to catch a ball. In other words, it makes predictions that our life depends on. But I contend that this doesn’t need episodic memory or imagination either, because it happens subconsciously and is part of our automaton brain.
 
My point is that the automaton brain, as I’ve coined it, could have evolved by natural selection, without memory. The major difference memory makes is that we become self-aware, and it gives consciousness a role it would otherwise not possess. And that role is what we call free will. I like a definition that philosopher and neuroscientist, Raymond Tallis, gave:
 
Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.
 
So, as I said earlier, I think imagination is key. Free will requires imagination, which I argue is called ‘aboutness’ or ‘intentionality’ in philosophical jargon (though others may differ). And imagination requires episodic memory or mental time-travelling, without which we would all be automatons; still able to interact with the real world of space and time and to acquire skills necessary for survival.
 
And if one goes back to the very beginning of this essay, it is all premised on the observed and experiential phenomenon that consciousness exists in a constant present. We take this for granted, yet nothing else does. Everything becomes the past as soon as it happens, which I keep repeating, is demonstrated every time someone takes a photo. The only exception I can think of is a photon of light, for which time is zero. Our very thoughts become memory as soon as we think them, otherwise we wouldn’t know we exist, yet we could apparently survive without it.
 
Just today, I read a review in New Scientist (27 April 2024) of a book, The Elephant and the Blind: The experience of pure consciousness – philosophy, science and 500+ experiential reports by Thomas Metzinger. Apparently, Metzinger did an ‘online survey of meditators from 57 countries providing over 500 reports for the book.’ Basically, he argues that one can achieve a state that he calls ‘pure consciousness’ whereby the practitioner loses all sense of self. In effect, he argues (according to the reviewer, Alun Anderson):
 
 That a first-person perspective isn’t necessary for consciousness at all: your sense of self, of a continuous “you”, is part of the content of consciousness, not consciousness itself.

 
A provocative and contentious perspective, yet it reminds me of studies, also reported in New Scientist many years ago, using brain-scan imagery, of people experiencing ‘God’ also having a sense of being ‘self-less’, if I can use that term. Personally, I think consciousness is something fundamental, with a possible existence independent of anything physical. It has a physical manifestation, if you like, purely because of memory, because our brains are effectively a storage device for consciousness.
 
This is a radical idea, but it is one I woke up with one day as if it was an epiphany, and realised that it was quite a departure from what I normally think. Raymond Tallis, whom I’ve already mentioned, once made the claim that science can only study objects and phenomena that can be measured. I claim that consciousness can’t be measured, but because we can measure brain waves and neuron activity many people argue that we are measuring consciousness.
 
But here’s the thing: if we didn’t experience consciousness, then scientists would tell us it doesn’t exist in the same way they tell us that free will doesn’t exist. I can make this claim because the same scientists argue that eventually AI will exhibit consciousness while simultaneously telling us that we will know this from the way the AI behaves, not because anyone will be measuring anything.

 

Addendum: I came across this related video by self-described philosopher-physicist, Avshalom Elitzur, who takes a subtly different approach to the same issue, giving examples from the animal kingdom. Towards the end, he talks about specific 'isms' (e.g. physicalism and dualism), but he doesn't mention the one I'm an advocate of, which is a 'loop' - that matter interacts with consciousness, via neurons, and then consciousness interacts with matter, which is necessary for free will.

Basically, he argues that consciousness interacting with matter breaks conservation laws (watch the video), but the brain consumes energy whether one is doing a maths calculation, running around an oval or lying asleep. Running around an oval is arguably consciousness interacting with matter – the same for an animal chasing prey – because one assumes they’re based on a conscious decision, which is based on an imagined future, as per my thesis above. Also, processing information uses energy, which is why computers get hot, with no consciousness required. I fail to see what the difference is.

Tuesday, 30 April 2024

Logic rules

I’ve written on this topic before, but a question on Quora made me revisit it.
 
Self-referencing can lead to contradiction or to illumination. It was a recurring theme in Douglas Hofstadter’s Godel Escher Bach, and it’s key to Godel’s famous Incompleteness Theorem, which has far-reaching ramifications for mathematics if not epistemology generally. We can never know everything there is to know, which effectively means there will always be known unknowns and unknown unknowns, with possibly infinitely more of the latter than the former.
 
I recently came across a question on Quora: Will a philosopher typically say that their belief that the phenomenal world "abides by all the laws of logic" is an entailment of those laws being tautologies? Or would they rather consider that belief to be an assumption made outside of logic?

If you’re like me, you might struggle with even understanding this question. But it seems to me to be a question about self-referencing. In other words, my understanding is that it’s postulating, albeit as a question, that a belief in logic requires logic. The alternative being ‘the belief is an assumption made outside of logic’. It’s made more confusing by suggesting that the belief is a tautology because it’s self-referencing.
 
I avoided all that, by claiming that logic is fundamental even to the extent that it transcends the Universe, so not a ‘belief’ as such. And you will say that even making that statement is a belief. My response is that logic exists independently of us or any belief system. Basically, I’m arguing that logic is fundamental in that its rules govern the so-called laws of the Universe, which are independent of our cognisance of them. Therefore, independent of whether we believe in them or not.
 
I’ve said on previous occasions that logic should be a verb, because it’s something we do, and not just humans, but other creatures, and even machines. But that can’t be completely true if it really does transcend the Universe. My main argument is hypothetical in that, if there is a hypothetical God, then said God also has to obey the rules of logic. God can’t tell us the last digit of pi (it doesn’t exist) and he can’t make a prime number non-prime or vice versa, because they are determined by pure logic, not divine fiat.
 
And now, of course, I’ve introduced mathematics into the equation (pun intended), because mathematics and logic are inseparable, as probably best demonstrated by Godel’s famous theorem. It was Euclid (circa 300BC) who introduced the concept of proof into mathematics, and a linchpin of many mathematical proofs is the fundamental principle of logic that you can’t have a contradiction, including Euclid’s own relatively simple proof that there are an infinity of primes. Back to Godel (or forward 2,300 years, to be more accurate): he effectively proved that there is a distinction between 'proof' and 'truth' in mathematics, in as much as there will always be mathematical truths that can’t be proven true within a given axiom-based, consistent mathematical system. In practical terms, you need to keep extending the ‘system’ to formulate more truths into proofs.
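 
As an aside (my illustration, not part of the post), Euclid’s no-contradiction argument can even be played out numerically: take any finite list of primes, multiply them together and add 1, and any prime factor of the result can’t be on the list, because dividing by anything on the list leaves a remainder of 1. A minimal Python sketch:

def smallest_prime_factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n    # n itself is prime

# Suppose (for contradiction) these were all the primes there are.
primes = [2, 3, 5, 7, 11, 13]
n = 1
for p in primes:
    n *= p
n += 1                                   # 30031 = 59 x 509
new_prime = smallest_prime_factor(n)
print(new_prime, new_prime in primes)    # 59 False – a prime not on the list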
 
It's not a surprise that the ‘laws of the Universe’ that I alluded to above seem to obey mathematical ‘rules’, and in fact, it’s only because of our prodigious ability to mine the mathematical landscape that we understand the Universe (at every observable scale) to the extent that we do, including scales that were unimaginable even a century ago.
 
I’ve spoken before about Penrose’s 3 Worlds – Physical, Mental and Platonic – which represent the Universe, consciousness and mathematics respectively. What links them all is logic. The Universe is riddled with paradoxes, yet even paradoxes obey logic, and the deeper we look into the Universe’s secrets, the more advanced mathematics we need just to describe it, let alone understand it. And logic is the means by which humans access mathematics, which closes the loop.
 


 Addendum:
I'd forgotten that I wrote a similar post almost 5 years ago, where, unsurprisingly, I came to much the same conclusion. However, there's no reference to God, and I provide a specific example.

Monday, 22 April 2024

Kant’s 300th Birthday (22nd April)

 I wouldn’t have known this if I hadn’t read about it in Philosophy Now. I have to confess I’ve only read the first and most famous of his 3 ‘Critiques’, The Critique of Pure Reason. I have to say that I think Kant was the first philosopher I read where I realised that it’s not about trying to convince everyone you’re right (even though, that’s effectively the methodology) so much as making people think outside their own box.
 
Kant famously attempted to bridge ‘empiricism’ (a la Hume) with ‘reason’ (a la Leibniz), seeing both as necessary in the pursuit of knowledge. In other words, you can’t rely on just one of these fundamental approaches to epistemology. He also famously categorised them as ‘a posteriori’ and ‘a priori’ respectively, meaning that reason or logic is knowledge gained prior to, or independently of, observation, while empirically derived evidence is derived after an observed event (by necessity). Curiously, he categorised space and time as a priori, meaning they were mental states. I’ve quoted this often from The Critique of Pure Reason.
 
But this space and this time, and with them all appearances, are not in themselves things; they are nothing but representations and cannot exist outside our minds.
 
I’ve always fundamentally disagreed with this, but the fact that Kant challenges our intuitively held comprehension of space and time, based on our everyday experience, makes one think more deeply about it, if one wants to present a counter-argument.
 
He’s also famous for coining the term, ‘transcendental idealism’, which is like some exotic taxonomy in the library of philosophical ideas. Again, I’ll quote from the source:

All these faculties have a transcendental (as well as an empirical) employment which concerns the form alone, and is possible a priori. 
 
By ‘all these faculties’, he’s talking about our mental faculties to use reason to understand something ‘a priori’. I concluded in an essay I wrote on this very topic, when I studied Kant, that the logical and practical realisation of ‘transcendental idealism’ is mathematics, though I doubt that’s what Kant meant. The fact is that in the intervening 200+ years, epistemology has been dominated by physics, which combines empirical evidence with mathematics in a dialectical relationship, so it’s become impossible to do one without the other. So, in a way, I think Kant foresaw this relationship before it evolved into the profound and materially successful enterprise that we call science.
 
A couple of things I didn’t know. In his early years before he gained tenure, he supplemented his meagre income by private tutoring and hustling at billiards – who would have thought.
 
He also got into trouble with the newly crowned king, Friedrich Wilhelm II, for his critiques of religion. He had earlier published The General Natural History and Theory of the Heavens (1755), arguing for a purely physical explanation of the Universe’s origins, a good 200 years before it became acceptable. In effect, he was censored, and he didn’t publish anything else on religion until after the king died, whereupon he immediately made up for lost time.

Saturday, 20 April 2024

Sigmund Freud’s and C.S. Lewis’s fictional encounter

Last week I went and saw the movie, Freud’s Last Session, where Anthony Hopkins plays Freud, when he was in London on the very cusp of WW2 and dying of cancer of the mouth, and Matthew Goode plays the Oxford Don, C.S. Lewis. It’s a fictional account, taken from a play I believe, about their meeting at Freud’s home. Its historical veracity is put into question by a disclaimer given after the movie proper finishes, saying that it’s recorded that Freud did, in fact, meet an Oxford Don, whose identity was never revealed or confirmed.
 
It's the sort of movie that would attract people with a philosophical bent, like myself. I thought the cinema better attended than I expected, though it was far from full. Anthony Hopkins’s Freud is playful in the way he challenges Matthew Goode’s Lewis, whilst still being very direct and not pulling any punches. There is an interruption to their conversation by an air-raid siren, and when they go into a bunker, Lewis has a panic attack, because of his experience in the trenches of WW1. Freud helps him deal with it in the moment.
 
I’ve read works by both of them, though I’m hardly a scholar. I actually studied Freud in a philosophy class, believe it or not. I’m better read in Jung than Freud. I think Lewis is a good essayist, though I disagree with him philosophically on many counts. Having said that, I expect that if I’d met him, I’d have formed an opinion of him based on more than just his ideas. I have very good friends who hold almost exactly the same views, so you don’t just judge someone by what they believe, if you get to know them in the flesh.
 
And that’s what came across in this hypothetical exchange – that you have 2 intellectuals who can find mutual respect despite having antithetical views about God and religion and other things, like homosexuality. On that last point, Sigmund’s daughter, Anna, was in a relationship with a woman, which Freud obviously didn’t approve of. In fact, the father-daughter relationship in the movie was portrayed as very Freudian, where they both seemed to suffer from an unhealthy attachment. Nevertheless, Anna Freud went on to make a name for herself in child psychoanalysis, and there’s a scene where she has to deal with an overbearing and arrogant young man, and her putdown made me want to clap; I just wish I could remember it. Anyway, Anna’s story provides a diversionary, yet not irrelevant, subplot, which makes the movie a bit more than just a two-hander.
 
There are scenes where Matthew Goode’s Lewis has dreams or visions and finds himself in a forest, where he comes across a deer, and one where he sees a bright, overwhelming light. There was a sense in these scenes that he felt he was in the presence of God, and it made me realise that I couldn’t judge him for that. I’ve long argued that God is a personal experience that can’t be shared, but we overlay it with our cultural norms. It was in these scenes that I felt his character was portrayed most authentically.