Paul P. Mealing


18 August 2023

The fabric of the Universe

Brian Greene wrote an excellent book with a similar title (The Fabric of the Cosmos) which I briefly touched on here. Basically, it’s space and time, and the discipline of physics can’t avoid it. In fact, if you add mass and charge, you’ve got the whole gamut that we’re aware of. I know there’s the standard model along with dark energy and dark matter, but as someone said, if you throw everything into a black hole, the only thing you know about it is its mass, charge and angular momentum. Which is why they say, ‘a black hole has no hair.’ That was before Stephen Hawking applied the laws of thermodynamics and quantum mechanics and came up with Hawking radiation, but I’ve gone off-track, so I’ll come back to the topic-at-hand.
 
I like to tell people that I read a lot of books by people a lot smarter than me, and one of those books that I keep returning to is The Constants of Nature by John D Barrow. He makes a very compelling case that the only Universe that could be both stable and predictable enough to support complex life would be one with 3 dimensions of space and 1 of time. A 2-dimensional universe means that any animal with a digestive tract (from mouth to anus) would fall apart. Only a 3-dimensional universe allows planets to maintain orbits for millions of years. As Barrow points out in his aforementioned tome, Einstein’s friend Paul Ehrenfest (1880-1933) was able to demonstrate this mathematically, as sketched below. It’s the inverse square law of gravity that keeps planets in orbit, and that’s a direct consequence of everything happening in 3 dimensions. Interestingly, Kant thought it was the other way around – that 3 dimensions were a consequence of Newton’s universal law of gravity being an inverse square law. Mind you, Kant thought that both space and time were a priori concepts that only exist in the mind:
 
But this space and this time, and with them all appearances, are not in themselves things; they are nothing but representations and cannot exist outside our minds.
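 
Coming back to Ehrenfest’s demonstration: here is a minimal sketch of the standard argument (my gloss, not Barrow’s exact presentation). In n spatial dimensions, Gauss’s law spreads the gravity of a point mass over the surface of an (n-1)-dimensional sphere, so the force law becomes

F(r) \propto \frac{1}{r^{\,n-1}}

For n = 3 this gives the familiar inverse square law, which allows stable, closed orbits. For n = 4 or more, the force falls off too steeply: a planet nudged slightly off a circular orbit either spirals into its sun or escapes altogether, so no long-lived planetary systems, and no complex life.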
 
And this gets to the nub of the topic alluded to in the title of this post: are space and time ‘things’ that are fundamental to everything else we observe?
 
I’ll start with space, because, believe it or not, there is an argument among physicists that space is not an entity per se, but just dimensions between bodies that we measure. I’m going to leave aside, for the time being, that said ‘measurements’ can vary from observer to observer, as per Einstein’s special theory of relativity (SR).
 
This argument arises because we know that the Universe is expanding (by measuring the Doppler shift of distant galaxies); but does space itself expand or is it just objects moving apart? In another post, I referenced a paper by Tamara M. Davis and Charles H. Lineweaver from UNSW (Expanding Confusion: Common Misconceptions of Cosmological Horizons and the Superluminal Expansion of the Universe), which I think puts an end to this argument when they explain the difference between an SR and GR Doppler shift interpretation of an expanding universe.
 
The general relativistic interpretation of the expansion interprets cosmological redshifts as an indication of velocity since the proper distance between comoving objects increases. However, the velocity is due to the rate of expansion of space, not movement through space, and therefore cannot be calculated with the special relativistic Doppler shift formula. (My emphasis)
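 
For concreteness, these are the two formulas being contrasted (the standard textbook versions, not the paper’s own notation). The special relativistic Doppler shift applies to motion through space, while the cosmological redshift depends only on how much the scale factor a(t) of the universe has stretched while the light was in transit:

1 + z_{SR} = \sqrt{\frac{1 + v/c}{1 - v/c}} \qquad \text{versus} \qquad 1 + z_{cosmological} = \frac{a(t_{observed})}{a(t_{emitted})}

Using the first formula on a cosmological redshift gives the wrong answer, which is the confusion the paper’s title refers to.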
 
I’m now going to use a sleight-of-hand and attempt a description of GR (general theory of relativity) without gravity, based on my conclusion from their exposition.
 
The Universe has a horizon that’s directly analogous to the horizon one observes at sea, because it ‘moves’ as the observer moves. In other words, other hypothetical ‘observers’ in other parts of the Universe would observe a different horizon to us, including hypothetical observers who are ‘over-the-horizon’ relative to us.
 
But the horizon of the Universe is a direct consequence of bodies (or space) moving faster than light (FTL) over the horizon, as expounded upon in detail in Davis and Lineweaver’s paper. But here’s the thing: if you were an observer on one of these bodies moving FTL relative to Earth, the speed of light would still be c. How is that possible? My answer is that the light travels at c relative to the ‘space’* (in which it’s observed), but the space itself can travel faster than light.
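 
To put a rough number on it (my own back-of-envelope figures, not the paper’s): Hubble’s law gives a recession velocity v = H_0 D, so recession exceeds the speed of light beyond the Hubble distance

D_H = \frac{c}{H_0} \approx \frac{300{,}000 \ \text{km/s}}{70 \ \text{km/s per Mpc}} \approx 4{,}300 \ \text{Mpc} \approx 14 \ \text{billion light years}

Davis and Lineweaver are careful to distinguish this Hubble sphere from the particle horizon and the event horizon proper, but it gives a feel for the scale at which ‘space moving faster than light’ becomes meaningful.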
 
There are, of course, other horizons in the Universe, which are the event horizons of black holes. Now, you have the same dilemma at these horizons as you do at the Universe’s horizon. According to an external observer, time appears to ‘stop’ at the event horizon, because the light emitted by an object there can’t reach us. However, for an observer at the event horizon, the speed of light is still c, and if the black hole is big enough, it’s believed (obviously no one can know) that someone could cross the event horizon without knowing they had. But what if it’s spacetime that crosses the event horizon? Then the external observer’s perception and the comoving observer’s perception would be no different than if the latter were at the horizon of the entire universe.
 
But what happens to time? Well, if you measure time by the frequency of light being emitted from an object at any of these horizons, it gets Doppler-shifted to zero, so time appears to ‘stop’ as judged by the ‘local’ observer (on Earth), but it doesn’t stop for the observer at the horizon.
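 
The black hole case can be captured in one standard formula (Schwarzschild geometry, nothing exotic assumed): a clock hovering at radius r ticks more slowly, as judged from far away, by the factor

\frac{d\tau}{dt} = \sqrt{1 - \frac{r_s}{r}}

which approaches zero as r approaches the Schwarzschild radius r_s. So the distant observer sees the clock freeze and its light redshift away to nothing, while the clock itself keeps ticking at its normal local rate.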
 
So far, I’ve avoided talking about quantum mechanics (QM), but something curious happens when you apply QM to cosmology: time disappears. According to Paul Davies in The Goldilocks Enigma: ‘…vanishing of time for the entire universe becomes very explicit in quantum cosmology, where the time variable simply drops out of the quantum description.’ This is consistent with Freeman Dyson’s argument that QM can only describe the future. Thus, if you apply a description of the future to the entire cosmos, there would be no time.
 
 
* Note: you can still apply SR within that ‘space’.

 

Addendum: I've since learned that in 1958, David Finkelstein (a postdoc with the Stevens Institute of Technology in Hoboken, New Jersey) wrote an article in Physical Review that gave the same explanation as I give above for how time appears different to different observers of a black hole. It immediately grabbed the attention (and approval) of Oppenheimer, Wheeler and Penrose (among others), who had struggled to resolve this paradox. (Ref. Black Holes and Time Warps: Einstein's Outrageous Legacy, Kip S. Thorne, 1994)
 

07 June 2023

Consciousness, free will, determinism, chaos theory – all connected

 I’ve said many times that philosophy is all about argument. And if you’re serious about philosophy, you want to be challenged. And if you want to be challenged you should seek out people who are both smarter and more knowledgeable than you. And, in my case, Sabine Hossenfelder fits the bill.
 
When I read people like Sabine, and others whom I interact with on Quora, I’m aware of how limited my knowledge is. I don’t even have a university degree, though I’ve attempted one a number of times. I’ve spent my whole life in the company of people smarter than me, including at school. Believe it or not, I still have occasional contact with them, through social media and school reunions. I grew up in a small rural town, where the people you went to school with feel like siblings.
 
Likewise, in my professional life, I have always encountered people cleverer than me – it provides perspective.
 
In her book, Existential Physics: A Scientist’s Guide to Life’s Biggest Questions, Sabine interviews people who are possibly even smarter than she is, and I sometimes found their conversations difficult to follow. To be fair to Sabine, she also sought out people who have different philosophical views to her, and who also have the intellect to match her.
 
I’m telling you all this to put things in perspective. Sabine has her prejudices like everyone else, some of which she defends better than others. I concede that my views are probably more simplistic than hers, and I support my challenges with examples that are hopefully easy to follow. Our points of disagreement can be distilled down to a few pertinent topics, which are time, consciousness, free will and chaos. Not surprisingly, they are all related – what you believe about one affects what you believe about the others.
 
Sabine is very strict about what constitutes a scientific theory. She argues that so-called theories like the multiverse have ‘no explanatory power’, because they can’t be verified or rejected by evidence, and she calls them ‘ascientific’. She’s critical of popularisers like Brian Cox who tell us that there could be an infinite number of ‘you(s)’ in an infinite multiverse. She distinguishes between beliefs and knowledge, which is a point I’ve made myself. Having said that, I’ve also argued that beliefs matter in science. She puts all interpretations of quantum mechanics (QM) in this category. She keeps emphasising that it doesn’t mean they are wrong, but they are ‘ascientific’. It’s part of the distinction that I make between philosophy and science, and why I perceive science as having a dialectical relationship with philosophy.
 
I’ll start with time, as Sabine does, because it affects everything else. In fact, the first chapter in her book is titled, Does The Past Still Exist? Basically, she argues for Einstein’s ‘block universe’ model of time, but it’s her conclusion that ‘now is an illusion’ that is probably the most contentious. This critique will cite a lot of her declarations, so I will start with her description of the block universe:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.
 
This viewpoint arises from the fact that, according to relativity theory, simultaneity is completely observer-dependent. I’ve discussed this before, where I argue that an observer who is moving relative to a source, or is stationary relative to a moving source (like the observer standing on the platform in Einstein’s original thought experiment while a train goes past), knows this because of the Doppler effect. In other words, an observer who doesn’t see a Doppler effect is in a privileged position, because they are in the same frame of reference as the source of the signal. This is why we know the Universe is expanding with respect to us, and why we can work out our movement with respect to the CMBR (cosmic microwave background radiation), hence with respect to the overall universe (just think about that).
 
Sabine clinches her argument by drawing a spacetime diagram, where 2 independent observers moving away from each other observe a pulsar with 2 different simultaneities. One, who is travelling towards the pulsar, sees the pulsar simultaneously with someone’s birth on Earth, while the one travelling away from the pulsar sees it simultaneously with the same person’s death. This is her slam-dunk argument that ‘now’ is an illusion, if it can produce such a dramatic contradiction.
 
However, I drew up my own spacetime diagram of the exact same scenario, where no one is travelling relative to anyone else, yet it creates the same apparent contradiction.


My diagram follows the convention that the horizontal axis represents space (all 3 dimensions) and the vertical axis represents time. So the 4 dotted lines represent 4 observers who are ‘stationary’ but ‘travelling through time’ (vertically). As per convention, light and other signals are represented as diagonal lines at 45 degrees, as they are travelling through both space and time, and nothing can travel faster than them. So they also represent the ‘edge’ of their light cones.
 
So notice that observer A sees the birth of Albert when he sees the pulsar and observer B sees the death of Albert when he sees the pulsar, which is exactly the same as Sabine’s scenario, with no relativity theory required. Albert, by the way, for the sake of scalability, must have lived for thousands of years, so he might be a tree or a robot.
 
But I’ve also added 2 other observers, C and D, who see the pulsar before Albert is born and after Albert dies respectively. But, of course, there’s no contradiction, because it’s completely dependent on how far away they are from the sources of the signals (the pulsar and Earth).
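 
For anyone who prefers numbers to diagrams, here’s a minimal sketch of the same point in Python (the figures are my own illustrative choices, not the ones on either diagram): whether two signals are seen together depends only on how far the observer happens to sit from each source.

# Units: light-years and years, with c = 1. Earth sits at x = 0.
PULSAR_X = 3000              # pulsar's distance from Earth
PULSE_T = -1000              # year the pulsar flash is emitted
BIRTH_T, DEATH_T = 0, 1000   # Albert's birth and death (he lives 1,000 years)

def seen_at(event_t, event_x, observer_x):
    # Year a stationary observer at observer_x receives light from an
    # event at (event_t, event_x), the light travelling at 1 light-year/year.
    return event_t + abs(event_x - observer_x)

for label, x in [("A", 1000), ("B", 500), ("C", 2000), ("D", 0)]:
    print(label,
          "sees pulsar:", seen_at(PULSE_T, PULSAR_X, x),
          "sees birth:", seen_at(BIRTH_T, 0, x),
          "sees death:", seen_at(DEATH_T, 0, x))

# Observer A sees the pulsar and Albert's birth together (both in year 1000);
# B sees the pulsar with his death (year 1500); C sees the pulsar (year 0)
# long before the birth (year 2000); and D sees it (year 2000) after the
# death (year 1000). No one is moving, so no relativity theory is involved,
# only distance.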
 
This is Sabine’s perspective:
 
Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I actually find this statement illogical. If you take it to its logical conclusion, then the Big Bang exists now and so does everything in the universe that’s yet to happen. If you look at the first quote I cited, she effectively argues that the past and future exist alongside the present.
 
One of the points she makes is that, for events with causal relationships, all observers see the events happening in the same sequence. Scenarios where different observers see events in different sequences involve events that have no causal relationship. But this raises a question: what makes causal events exceptional? What’s more, this is fundamental, because the whole of physics is premised on the principle of causality. In addition, I fail to see how you can have causality without time. In fact, causality is governed by the constant speed of light – it’s literally what stops everything from happening at once.
 
Einstein also believed in the block universe, and like Sabine, he argued that, as a consequence, there is no free will. Sabine is adamant that both ‘now’ and ‘free will’ are illusions. She argues that the now we all experience is a consequence of memory. She quotes Carnap that our experience of ‘past, present and future can be described and explained by psychology’ – a point also made by Paul Davies. Basically, she argues that what separates our experience of now from the reality of no-now (my expression, not hers) is our memory.
 
Whereas, I think she has it back-to-front, because, as I’ve pointed out before, without memory, we wouldn’t know we are conscious. Our brains are effectively a storage device that allows us to have a continuity of self through time, otherwise we would not even be aware that we exist. Memory doesn’t create the sense of now; it records it just like a photograph does. The photograph is evidence that the present becomes the past as soon as it happens. And our thoughts become memories as soon as they happen, otherwise we wouldn’t know we think.
 
Sabine spends an entire chapter on free will, where she persistently iterates variations on the following mantra:
 
The future is fixed except for occasional quantum events that we cannot influence.

 
But she acknowledges that while the future is ‘fixed’, it’s not predictable. And this brings us to chaos theory. Sabine discusses chaos late in the book and not in relation to free will. She explicates what she calls the ‘real butterfly effect’.
 
The real butterfly effect… means that even arbitrarily precise initial data allow predictions for only a finite amount of time. A system with this behaviour would be deterministic and yet unpredictable.
 
Now, if deterministic means everything physically manifest has a causal relationship with something prior, then I agree with her. If she means that therefore ‘the future is fixed’, I’m not so sure, and I’ll explain why. By specifying ‘physically manifest’, I’m excluding thoughts and computer algorithms that can have an effect on something physical, where the cause is not so easily determined. For example, in the case of the algorithm, does it go back to the coder who wrote it?
 
My go-to example for chaos is tossing coins, because it’s so easy to demonstrate and it’s linked to probability theory, as well as being the very essence of a random event. One of the key, if not definitive, features of a chaotic phenomenon is that, if you were to rerun it, you’d get a different result, and that’s fundamental to probability theory – every coin toss is independent of any previous toss – they are causally independent. Unrepeatability is common among chaotic systems (like the weather). Even the Earth and Moon were created from a chaotic event.
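 
A toy example of ‘deterministic yet unpredictable’ that’s easy to run (the logistic map, a standard illustration of my own choosing, not one Sabine uses): two runs whose starting values differ in the ninth decimal place track each other for a while, then bear no resemblance to each other at all.

def logistic(x, r=4.0):
    # A fully deterministic rule: the next value depends only on the current one.
    return r * x * (1.0 - x)

x1, x2 = 0.4, 0.4 + 1e-9     # initial conditions differing by one billionth
for step in range(1, 61):
    x1, x2 = logistic(x1), logistic(x2)
    if step % 10 == 0:
        print(f"step {step}: {x1:.6f} vs {x2:.6f} (difference {abs(x1 - x2):.1e})")

# After roughly 30 iterations the difference is of order 1, so any finite
# precision in the initial data buys only a finite window of prediction,
# which is the 'real butterfly effect' described above.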
 
I recently read another book called Quantum Physics Made Me Do It by Jeremie Harris, who argues that tossing a coin is not random – in fact, he’s very confident about it. He’s not alone. Mark John Fernee, a physicist with Qld Uni, in a personal exchange on Quora argued that, in principle, it should be possible to devise a robot to perform perfectly predictable tosses every time, like a tennis ball launcher. But, as another Quora contributor and physicist, Richard Muller, pointed out: it’s not dependent on the throw but the surface it lands on. Marcus du Sautoy makes the same point about throwing dice and provides evidence to support it.
 
Getting back to Sabine. She doesn’t discuss tossing coins, but she might think that the ‘imprecise initial data’ is the actual act of tossing, and after that the outcome is determined, even if it can’t be predicted. However, the deterministic chain is broken as soon as the coin hits a surface.
 
Just before she gets to chaos theory, she talks about computability, with respect to Godel’s Theorem and a discussion she had with Roger Penrose (included in the book), where she says:
 
The current laws of nature are computable, except for that random element from quantum mechanics.
 
Now, I’m quoting this out of context, because she then argues that if they were uncomputable, they open the door to unpredictability.
 
My point is that the laws of nature are uncomputable because of chaos theory, and I cite Ian Stewart’s book, Does God Play Dice? In fact, Stewart even wonders if QM could be explained using chaos (I don’t think so). Chaos theory has mathematical roots, because not only are the ‘initial conditions’ of a chaotic event impossible to measure, they are impossible to compute – you have to calculate to infinite decimal places. And this is why I disagree with Sabine that the ‘future is fixed’.
 
It's impossible to discuss everything in a 223-page book in a blog post, but there is one other topic she raises where we disagree, and that’s the Mary’s Room thought experiment. As she explains, it was proposed by philosopher Frank Jackson in 1982, but she also claims that he abandoned his own argument. After describing the experiment (refer to this video, if you’re not familiar with it), she says:
 
The flaw in this argument is that it confuses knowledge about the perception of colour with the actual perception of it.
 
Whereas, I thought the scenario actually delineated the difference – that perception of colour is not the same as knowledge. A person who was severely colour-blind might never have experienced the colour red (the specified colour in the thought experiment) but they could be told what objects might be red. It’s well known that some animals are colour-blind compared to us and some animals specifically can’t discern red. Colour is totally a subjective experience. But I think the Mary’s room thought experiment distinguishes the difference between human perception and AI. An AI can be designed to delineate colours by wavelength, but it would not experience colour the way we do. I wrote a separate post on this.
 
Sabine gives the impression that she thinks consciousness is a non-issue. She talks about the brain like it’s a computer.
 
You feel you have free will, but… really, you’re running a sophisticated computation on your neural processor.
 
Now, many people, including most scientists, think that, because our brains are just like computers, it’s only a matter of time before AI also shows signs of consciousness. Sabine doesn’t make this connection, even when she talks about AI. Nevertheless, she discusses one of the leading theories of neuroscience (IIT, Integrated Information Theory), based on calculating the amount of information processed, which gives a number called phi (Φ). I came across this when I did an online course on consciousness through New Scientist, during COVID lockdown. According to the theory, this number provides a ‘measure of consciousness’, which suggests that it could also be used with AI, though Sabine doesn’t pursue that possibility.
 
Instead, Sabine cites an interview in New Scientist with Daniel Bor from the University of Cambridge: “Phi should decrease when you go to sleep or are sedated… but work in Bor’s laboratory has shown that it doesn’t.”
 
Sabine’s own view:
 
Personally, I am highly skeptical that any measure consisting of a single number will ever adequately represent something as complex as human consciousness.
 
Sabine discusses consciousness at length, especially following her interview with Penrose, and she gives one of the best arguments against panpsychism I’ve read. Her interview with Penrose, which includes a discussion of Godel’s Theorem (another topic in itself), addresses whether consciousness is computable or not. I don’t think it is, and I don’t think it’s algorithmic.
 
She makes a very strong argument for reductionism: that the properties we observe of a system can be understood from studying the properties of its underlying parts. In other words, emergent properties can be understood in terms of the properties they emerge from. And this includes consciousness. I’m one of those who really think that consciousness is the exception. Thoughts can cause actions, which is known as ‘agency’.
 
I don’t claim to understand consciousness, but I’m not averse to the idea that it could exist outside the Universe – that it’s something we tap into. This is completely ascientific, to borrow from Sabine. As I said, our brains are storage devices and sometimes they let us down, and without them we wouldn’t even know we are conscious. I don’t believe in a soul. I think the continuity of the self is a function of memory – just read The Lost Mariner chapter in Oliver Sacks’ book, The Man Who Mistook His Wife For A Hat. It’s about a man whose amnesia is so severe that his life is stuck in the past, because he’s unable to create new memories.
 
At the end of her book, Sabine surprises us by talking about religion, and how she agrees with Stephen Jay Gould that religion and science are two ‘nonoverlapping magisteria’. She makes the point that a lot of scientists have religious beliefs but won’t discuss them in public because it’s taboo.
 
I don’t doubt that Sabine has answers to all my challenges.
 
There is one more thing: Sabine talks about an epiphany, following her introduction to physics in middle school, which started in frustration.
 
Wasn’t there some minimal set of equations, I wanted to know, from which all the rest could be derived?
 
When the principle of least action was introduced, it was a revelation: there was indeed a procedure to arrive at all these equations! Why hadn’t anybody told me?

 
The principle of least action is one concept common to both the general theory of relativity and quantum mechanics. It’s arguably the most fundamental principle in physics. And yes, I posted on that too.
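 
For the record, the principle can be stated in a line (the standard textbook form, not anything specific to Sabine’s account): nature selects the path for which the action S, the time integral of the Lagrangian L, is stationary.

S = \int L(q, \dot{q}, t) \, dt \qquad \text{with} \qquad \delta S = 0

Demanding \delta S = 0 yields the equations of motion, which is the ‘procedure’ she’s describing.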

 

31 May 2023

Immortality; from the Pharaohs to cryonics

I thought the term was cryogenics, but a feature article in the Weekend Australian Magazine (27-28 May 2023) calls the facilities that perform this process cryonics facilities, and, looking it up in my dictionary, there is a distinction. Cryogenics is about low temperature freezing in general, and cryonics deals with the deep-freezing of bodies specifically, with the intention of one day reviving them.
 
The article cites a few people, but the author, Ross Bilton, features an Australian, Peter Tsolakides, who is in my age group. From what the article tells me, he’s a software engineer who has seen many generations of computer code and has also been a ‘globe-trotting executive for ExxonMobil’.
 
He’s one of the drivers behind a cryonic facility in Australia – its first – located at Holbrook, which is roughly halfway between Melbourne and Sydney. In fact, I often stop at Holbrook for a break and meal on my interstate trips. According to my car’s odometer it is almost exactly half way between my home and my destination, which is a good hour short of Sydney, so it’s actually closer to Melbourne, but not by much.
 
I’m not sure when Tsolakides plans to enter the facility, but he’s forecasting his resurrection in around 250 years time, when he expects he may live for another thousand years. Yes, this is science fiction to most of us, but there are some science facts that provide some credence to this venture.
 
For a start, we already cryogenically freeze embryos and sperm, and we know it works for them. There is also the case of Ewa Wisnierska, 35, a German paraglider taking part in an international competition in Australia, who was sucked into a storm and carried up to 9947 metres (jumbo jet territory, and higher than Everest). Needless to say, she lost consciousness and spent a frozen 45 minutes before she came back to Earth. Quite a miracle, and I’ve watched a doco on it. She made a full recovery and was back at her sport within a couple of weeks. And I know of other cases where the brain of a living person has been frozen to keep them alive, as counter-intuitive as that may sound.
 
Believe it or not, scientists are divided on this, or at least cautious about dismissing it outright. Many take the position, ‘Never say never’. And I think that’s fair enough, because it really is impossible to predict the future when it comes to humanity. It’s not surprising that advocates, like Tsolakides, can see a future where this will become normal for most humans, and people who decline immortality will be the exception, not the norm. And I can imagine it: if this ‘procedure’ became successful and commonplace, who would say no?
 
Now, I write science fiction, and I have written a story where a group of people decided to create an immortal human race, who were part machine. It’s a reflection of my own prejudices that I portrayed this as a dystopia, but I could have done the opposite.
 
There may be an assumption that if you write science fiction then you are attempting to predict the future, but I make no such claim. My science fiction is complete fantasy, but, like all science fiction, it addresses issues relevant to the contemporary society in which it was created.
 
Getting back to the article in the Weekend Australian, there is an aspect of this that no one addressed – not directly, anyway. There’s no point in cheating death if you can’t cheat old age. In the case of old age, you are dealing with a fundamental law of the Universe, entropy, the second law of thermodynamics. No one asked the obvious question: how do you expect to live for 1,000 years without getting dementia?
 
I think some have thought about this, because, in the same article, they discuss the ultimate goal of downloading their memories and their thinking apparatus (for want of a better term) into a computer. I’ve written on this before, so I won’t go into details.
 
Curiously, I’m currently reading a book by Sabine Hossenfelder called Existential Physics: A Scientist’s Guide to Life’s Biggest Questions, which you would think could not possibly have anything to say on this topic. Nevertheless:
 
The information that makes you you can be encoded in many different physical forms. The possibility that you might one day upload yourself to a computer and continue living a virtual life is arguably beyond present-day technology. It might sound entirely crazy, but it’s compatible with all we currently know.
 
I promise to write another post on Sabine’s book, because she’s nothing if not thought-provoking.
 
So where do I stand? I don’t want immortality – I don’t even want a gravestone, and neither did my father. I have no dependents, so I won’t live on in anyone’s memory. The closest I’ll get to immortality are the words on this blog.

25 May 2023

Philosophy’s 2 disparate strands: what can we know; how can we live

The question I’d like to ask is: is there a philosophical view that encompasses both? Some may argue that Aristotle attempted that, but I’m going to take a different approach.
 
For a start, the first part can arguably be broken into 2 further strands: physics and metaphysics. And even this divide is contentious, with some arguing that metaphysics is an ‘abstract theory with no basis in reality’ (one dictionary definition).
 
I wrote an earlier post arguing that we are ‘metaphysical animals’ after discussing a book of the same name, though it was really a biography of 4 Oxford women in the 20th Century: Elizabeth Anscombe, Mary Midgley, Philippa Foot and Iris Murdoch. But I’ll start with this quote from said book.
 
Poetry, art, religion, history, literature and comedy are all metaphysical tools. They are how metaphysical animals explore, discover and describe what is real (and beautiful and good). (My emphasis.)
 
So, arguably, metaphysics could give us a connection between the 2 ‘strands’ in the title. Now here’s the thing: I contend that mathematics should be part of that list, hence part of metaphysics. And, of course, we all know that mathematics is essential to physics as an epistemology. So physics and metaphysics, in my philosophy, are linked in a rather intimate way.
 
The curious thing about mathematics, or anything metaphysical for that matter, is that, without human consciousness, they don’t really exist, or are certainly not manifest. Everything on that list is a product of human consciousness, notwithstanding that there could be other conscious entities somewhere in the universe with the same capacity.
 
But again, I would argue that mathematics is an exception. I agree with a lot of mathematicians and physicists that while we create the symbols and language of mathematics, we don’t create the intrinsic relationships that said language describes. And furthermore, some of those relationships seem to govern the universe itself.
 
And, completely relevant to the first part of this discussion, the limits of our knowledge of mathematics seem to determine the limits of our knowledge of the physical world.
 
I’ve written other posts on how to live, specifically, 3 rules for humans and How should I live? But I’m going to go via metaphysics again, specifically storytelling, because that’s something I do. Storytelling requires an inner and outer world, manifest as character and plot, which is analogous to free will and fate in the real world. Now, even these concepts are contentious, especially free will, because many scientists tell us it’s an illusion. Again, I’ve written about this many times, but its relevance to my approach to fiction is that I try and give my characters free will. An important part of my fiction is that the characters are independent of me. If my characters don’t take on a life of their own, then I know I’m wasting my time, and I’ll ditch that story.
 
Its relevance to ‘how to live’ is authenticity. Artists understand better than most the importance of authenticity in their work, which really means keeping themselves out of it. But authenticity has ramifications, as any existentialist will tell you. To live authentically requires an honesty to oneself that is integral to one’s being. And ‘being’ in this sense is about being human rather than its broader ontological meaning. In other words, it’s a fundamental aspect of our psychology, because it evolves and changes according to our environment and milieu. Also, in the world of fiction, it's a fundamental dynamic.
 
What's more, if you can maintain this authenticity (and it’s genuine), then you gain people’s trust, and that becomes your currency, whether in your professional life or your social life. However, there is nothing more fake than false authenticity; examples abound.
 
I’ll give the last word to Socrates; arguably the first existentialist.
 
To live with honour in this world, actually be what you try to appear to be.


29 April 2023

Can philosophy be an antidote to dogma?

 This is similar to another post I wrote recently, both of which are answers to questions I found on Quora. The reason I’m posting this is because I think it’s better than the previous one. Not surprisingly, it also references Socrates and the role of argument in philosophical discourse.
 
What qualities are needed to be a good philosopher?
 
I expect you could ask 100 different philosophers and get 100 different answers. Someone (Gregory Scott), in answer to a similar question, claimed that everyone is a philosopher, but not necessarily a good one.
 
I will suggest 2 traits that I try to cultivate in myself: to be intellectually curious and to be analytical. But I’m getting ahead of myself.
 
For a start, there are many ‘branches’ or categories of philosophy: epistemology and ethics, being the best known and most commonly associated with philosophy. Some might include ontology as well, which has a close relationship with epistemology, like 2 sides of the same coin. There is also logic and aesthetics but then the discussion becomes interminable.
 
But perhaps the best way to answer this question is to look at philosophers you admire and ask yourself, what qualities do they possess that merit your admiration?
 
Before I answer that for myself, I’m going to provide some context. Sandy Grant (philosopher at the University of Cambridge) published an essay titled Dogmas (Philosophy Now, Issue 127, Aug/Sep 2018), whereby she points out the pitfalls of accepting points of view on ‘authority’ without affording them critical analysis. And I would argue that philosophy is an antidote to dogma going back to Socrates, who famously challenged the ‘dogmas’ of his day. Prior to Socrates, philosophy was very prescriptive, where you followed someone’s sayings, be they from the Bible, or Confucius or the Upanishads. Socrates’ revolutionary idea was to introduce argument, and philosophy has been based on argument ever since.
 
Socrates is famously attributed with the saying, The unexamined life is not worth living, which he apparently said before he was forced to take his own life. But there is another saying attributed to Socrates, which is more germane, given the context of his death.
 
To live with honour in this world, actually be what you try to appear to be.
 
Socrates also acquitted himself well in battle, apparently, so he wasn’t afraid of dying for a cause and a principle. Therefore, I would include integrity as the ‘quality’ of a good person, let alone a philosopher.
 
We currently live in an age where the very idea of truth is questioned, whether it be in the realm of science or politics or media. Which is why I think that critical thinking is essential, whereby one looks at evidence and the expertise behind that evidence. I’ve spent a working lifetime in engineering, where, out of necessity, one looks to expertise that one doesn’t have oneself. Trust has gone AWOL in our current social media environment and the ability to analyse without emotion and ideology is paramount. To accept evidence when it goes against your belief system is the mark of a good philosopher. Evidence is the keystone to scientific endeavour and also in administering justice. But perhaps the greatest quality required of a philosopher is to admit, I don’t know, which is also famously attributed to Socrates.

16 April 2023

From Plato to Kant to physics

 I recently wrote a post titled Kant and modern physics, plus I’d written a much more extensive essay on Kant previously, as well as an essay on Plato, whose famous Academy was arguably the origin of Western philosophy, science and mathematics.
 
This is in answer to a question on Quora. The first thing I did was turn the question inside out or upside down, as I explain in the opening paragraph. It was upvoted by Kip Wheeler, who describes himself as “Been teaching medieval stuff at Uni since 1993.” He provided his own answer to the same question, giving a contrary response to mine, so I thought his upvote very generous.
 
There are actually a lot of answers on Quora addressing this theme, and I only reference one of them. But, as far as I can tell, I’m the only one who links Plato to Kant to modern physics.
 
Why could Plato's theory of forms not help us to know things better?
 
I think this question is back-to-front. If you change ‘could’ to ‘would’ and eliminate ‘not’, the question makes more sense – at least, to me. Nevertheless, it ‘could… not help us to know things better’ if it’s misconstrued or if it’s merely considered a religious artefact with no relevance to contemporary epistemology.
 
There are some good answers to similar questions, with Paul Robinson’s answer to Is Plato’s “Theory of Ideas” True? being among the more erudite and scholarly. I won’t attempt to emulate him, but take a different tack using a different starting point, which is more widely known.
 
Robinson, among others, makes reference to Plato’s famous shadows on the wall of a cave allegory (or analogy in modern parlance), and that’s a good place to start. Basically, the shadows represent our perceptions of reality whilst ‘true’ reality remains unknown to us. Plato believed that there was a world of ‘forms’, which were perfect compared to the imperfect world we inhabit. This is similar to the Christian idea of Heaven as distinct from Earth, hence the religious connotation, which is still referenced today.
 
But there is another way to look at this, which is closer to Kant’s idea of the thing-in-itself. Basically, we may never know the true nature of something just based on our perceptions, and I’d contend that modern science, especially physics, has proved Kant correct, specifically in ways he couldn’t foresee.
 
That’s partly because we now have instruments and technologies that can change what we can perceive at all scales, from the cosmological to the infinitesimal. But there’s another development which has happened apace and contributed to both the technology and the perception in a self-reinforcing dialectic between theory and observation. I’m talking about physics, which is arguably the epitome of epistemological endeavour.
 
And the key to physics is mathematics, only there appears to be more mathematics than we need. Ever since the Scientific Revolution, mathematics has proven fundamental in our quest for the elusive thing-in-itself. And this has resulted in a resurgence in the idea of a Platonic realm, only now it’s exclusive to mathematics. I expect Plato would approve, since his famous Academy was based on Pythagoras’s quadrivium of arithmetic, geometry, astronomy and music, all of which involve mathematics.

04 April 2023

Finding purpose without a fortune teller

 I just started watching a show on Apple TV+ called The Big Door Prize, starring Irish actor, Chris O’Dowd, set in suburban America (Deerfield). It’s listed as a comedy, but it might be a black comedy or a satire; I haven’t watched it long enough to judge.
 
It has an interesting premise: the local store has a machine, which, for small change, will tell you what your ‘potential’ is. Not that surprisingly, people start queuing up to find their potential (or purpose). I say, ‘not surprising’, because people consult Tarot cards or the I Ching for the same reason, not to mention weekly astrological charts found in the local newspaper, magazine or whatever. And of course, if the ‘reading’ coincides with our specific desire or wish, we wholeheartedly agree, whereas, if it doesn’t, we dismiss it as rubbish.
 
I’ve written previously about the importance of finding purpose, and, in fact, it’s considered necessary for one’s psychological health. But this is a subtly different take on it, prompted by the aforementioned premise. I have the advantage of over half a century of hindsight because I think I found my purpose late, yet it was hiding in plain sight all along.
 
We sometimes think of our purpose as a calling or vocation. In my case, I believe it was to be a writer. Now, even though I’m not a successful writer by any stretch of the imagination, the fact that I do write is important to me. It gives me a sense of purpose that I don’t find in my job or my relationships, even though they are all important to me. I don’t often agree with Jordan Peterson, but he once made the comment that creative people who don’t create are like ‘broken sticks’. I totally identify with that.
 
I only have to look to my early childhood (pre-high school) when I started to write stories and draw my own superheroes. But as a teenager and a young adult (in my 20s), I found I couldn’t write to save myself, including essays (like I write on this blog), let alone attempts at fiction. But here’s the thing: when I did start writing fiction, I knew it was terrible – so terrible, I didn’t even tell anyone – yet I persevered because I ‘knew’ that I could. And I think that’s the key point: if you have a purpose, you can visualise it even when everything you’re doing tells you that you should give it up.
 
So, you don’t need a ‘machine’ or Tarot cards, just self-belief. Purpose comes to those who look for it, and know it when they see it, even in its emerging phase, when no one else can see it.
 
 
Now, I’m going to tell you a story about someone else, whom I knew for over 4 decades and who found their ‘purpose’ in spite of circumstances that might have prevented it, or at least, worked against it. She was a single Mum who raised 3 daughters and simultaneously found a role in theatre. The thing is that she never gained any substantial financial reward, yet she won awards, both as an actor and director. She even partook in a theatre festival in Monaco, even though it took a government grant to get her there. The thing is that she had very little in terms of material wealth but it never bothered her and she was generous to a fault. She was a trained nurse, but had no other qualifications – certainly none relevant to her theatrical career. She passed last year and she is sorely missed, not only by me, but by the many lives she touched. She was, by anyone’s judgement, a force of nature.
 
 
 
This is a review of a play, Tuesdays with Morrie, for which Liz Bradley won an award. I happened to attend the opening with her, so it has a special memory for me. Dylan Muir, especially mentioned as providing the vocal, is Liz’s daughter.


28 March 2023

Why do philosophers think differently?

 This was a question on Quora, and this is my answer, which, hopefully, explains the shameless self-referencing to this blog.

 

Who says they do? I think this is one of those questions that should be reworded: what distinguishes a philosopher’s thinking from most other people’s? I’m not sure there is a definitive answer to this, because, like other individuals, every philosopher is unique. The major difference is that they spend more time writing down what they’re thinking than most people, and I’m a case in point.
 
Not that I’m a proper philosopher, in that it’s not my profession – I’m an amateur, a dilettante. I wrote a little aphorism at the head of my blog that might provide a clue.

Philosophy, at its best, challenges our long held views, such that we examine them more deeply than we might otherwise consider.

Philosophy, going back to Socrates, is all about argument. Basically, Socrates challenged the dogma of his day and it ultimately cost him his life. I write a philosophy blog and it’s full of arguments, not that I believe I can convince everyone to agree with my point of view. But basically, I hope to make people think outside their comfort zone, and that’s the best I can do.
 
Socrates is my role model, because he was the first (that we know of) who challenged the perceived wisdom provided by figures of authority. In the Western tradition, for much of the more than 2 millennia since Socrates, figures of authority were associated with the Church, in all its manifestations, where challenging them could result in death or torture or both.
 
That’s no longer the case – well, not quite true – try following that path if you’re a woman in Saudi Arabia or Iran. But, for most of us living in a Western society, one can challenge anything at all, including whether the Earth is a sphere.
 
Back to the question, I don’t think it can be answered, even in the reworded form that I substituted. Personally, I think philosophy in the modern world requires analysis and a healthy dose of humility. The one thing I’ve learned from reading and listening to many people much smarter than me is that the knowledge we actually have is but a blip, and it always will be. Nowhere is this more evident than in mathematics. There are infinitely more incomputable numbers than computable numbers. So, if our knowledge of maths is just the tip of a universe-sized iceberg, what does that say about anything else we can possibly know?
 
Perhaps what separates a philosopher’s thinking from most other people’s is that they are acutely aware of how little we know. Come to think of it, Socrates famously made the same point.

22 March 2023

The Library of Babel

You may have heard of this mythic place. There was an article in the same Philosophy Now magazine I referenced in my last post, titled World Wide Web or Library of Babel? by Marco Nuzzaco. Apparently, Jorge Luis Borges (1899-1986) wrote a short story, The Library of Babel, in 1941. A little bit of research reveals there are layers of abstraction in this imaginary place, extrapolated upon by another book, The Unimaginable Mathematics of Borges’ Library of Babel, by mathematics professor William Goldbloom Bloch, published in 2008 by Oxford University Press and receiving an ‘honourable mention’ in the 2009 PROSE Awards. I should point out that I haven’t read either of them, but the concept fascinates me, as I expound upon below.
 
The Philosophy Now article compares it with the Internet (as per the title), because the Internet is quickly becoming the most extensive collection of knowledge in the history of humanity. To quote the author, Nuzzaco:
 
The amount of information produced on the Internet in the span of 10 years from 2010 to 2020 is exponentially and incommensurably larger than all the information produced by humanity in the course of its previous history.
 
And yes, the irony is not lost on me that this blog is responsible for its own infinitesimal contribution. But another quote from the same article provides the context that I wish to explore.
 
The Library of Babel contains all the knowledge of the universe that we can possibly gain. It has always been there, and it always will be. In this sense, the knowledge of the library reflects the universe from a God’s eye perspective and the librarians’ relentless research is to decipher its secrets and its mysterious order and purpose – or maybe, as Borges wonders, the ultimate lack of any of these.

 
One can’t read this without contemplating the history of philosophy and science (at least, in the Western tradition) that has attempted to do exactly that. In fact, the whole enterprise has a distinctive Platonic flavour to it, because there is one sense in which the fictional Library of Babel is ‘real’, and it links back to my last post.
 
I haven’t read Borges’ or Bloch’s books, so I’m simply referring to the concept alluded to in that brief quote, that there is an abstract landscape or territory that humans have the unique capacity to explore. And anyone who has considered the philosophy of mathematics knows that it fulfills that criterion.
 
Mathematics has unlocked more secrets about the Universe than any other endeavour. There is a similarity here to Paul Davies’ metaphor of a ‘warehouse’ (which he expounds upon in this video) but I think a Library is an even more apposite allusion. We are like ‘librarians’ trying to decipher God’s view of the Universe that we inhabit, and to extend the metaphor, God left behind a code that only we can decipher (as far as we know) and that code is mathematics.
 
To quote Feynman (The Character of Physical Law, specifically in a chapter titled The Relation of Mathematics to Physics):
 
Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.

 
And if we have the knowledge of Gods then we also have the power of Gods, and that is what we’re witnessing, right now, in our current age. We have the power to destroy the world on which we live, either in a nuclear conflagration or runaway climate change (we are literally changing the weather). But we can also use the same knowledge to make the world a more habitable place, though to do that we need to be less human-centric.
 
If there is a God, then (he/she) has left us in charge. I think I’ve written about that before. So yes, we are the ‘Librarians’ who have access to extraordinary knowledge and with that knowledge comes extraordinary responsibilities.

 

17 March 2023

In the beginning there was logic

 I recently read an article in Philosophy Now (Issue 154, Feb/Mar 2023), jointly written by Owen Griffith and A.C. Paseau, titled One Logic, Or Many? Apparently, they’ve written a book on this topic (One True Logic, Oxford University Press, May 2022).
 
One of the things that struck me was that they differentiate between logic and reason, because ‘reason is something we do’. This is interesting because I’ve argued previously that logic should be a verb, but I concede they have a point. In the past I saw logic as something that’s performed, by animals and machines as well as humans. And one of the reasons I took this approach was to distinguish logic from mathematics. I contend that we use logic to access mathematics via proofs, which we then call theorems. But here’s the thing: Kurt Godel proved, in effect, that there will always be mathematical ‘truths’ that we can’t prove within any formal system of mathematics that is consistent. The word ‘consistent’ is important (as someone once pointed out to me) because, if it’s inconsistent, then all bets are off.
 
What this means is that there is potentially mathematics that can’t be accessed by logic, and that’s what we’ve found, in practice as well as in principle. Matt Parker provides a very good overview in this YouTube video on what numbers we know and what we don’t know. And what we don’t know is infinitely greater than what we do know. Gregory Chaitin has managed to prove that there are infinitely more incomputable numbers than computable numbers, arguing that Godel’s Incompleteness Theorem goes to the very foundation of mathematics.
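 
The counting argument behind that claim is short enough to state (my gloss of the standard reasoning, not Chaitin’s own proof): every computable number is pinned down by some finite program written in a finite alphabet, so there are only countably many of them, whereas Cantor showed that the real numbers are uncountable.

|\{\text{computable numbers}\}| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}|

So, in a precise sense, almost every real number can never be computed.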
 
This detour is slightly off-topic, but very relevant. There was a time when people believed that mathematics was just logic, because that’s how we learned it, and certainly there is a strong relationship. Without our prodigious powers of logic, mathematics would be an unexplored territory to us, and remain forever unknown. There are even scholars today who argue that mathematics that can’t be computed is not mathematics, which rules out infinity. That’s another discussion which I won’t get into, except to say that infinity is unavoidable in mathematics. Euclid (~300 BC) proved (using very simple logic) that you can have an infinite number of primes, and primes are the atoms of arithmetic, because all other numbers can be derived therefrom.
 
The authors pose the question in their title: is there a pluralism of logic? They compare logic relativism with moral relativism, arguing that they both require an absolutism, because moral relativism is a form of morality and logic relativism is a form of logic, neither of which is relative in itself. In other words, they always apply by self-definition, so they contradict the principle that they endorse – they stand outside any set of rules of morality or logic, respectively.
 
That’s their argument. My argument is that there are tenets that always apply, like you can’t have a contradiction. They make this point themselves, but one only has to look at mathematics again. If you could allow contradictions, an extraordinary number of accepted proofs in mathematics would no longer apply, including Euclid’s proof that there are an infinity of primes. The proof starts with the premise that you have the largest prime number and then proves that it isn’t.
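 
Euclid’s step is simple enough to run as a sanity check (a toy illustration in Python, with a list of primes I chose myself): multiply any finite list of primes together and add 1, and the result must have a prime factor that isn’t on the list, because dividing it by any listed prime leaves a remainder of 1.

def smallest_prime_factor(n):
    # Trial division: returns the smallest prime dividing n (n itself if n is prime).
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

primes = [2, 3, 5, 7, 11, 13]
witness = 1
for p in primes:
    witness *= p
witness += 1                  # 2*3*5*7*11*13 + 1 = 30031

print(witness, smallest_prime_factor(witness))   # 30031 59, and 59 is not in the list

So either the witness is itself a new prime, or (as here) it factors into primes not on the list; either way the list was never complete.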
 
I agree with their point that reason and logic are not synonymous, because we can use reason that’s not logical. We make assumptions that can’t be confirmed and draw conclusions that rely on heuristics or past experiences, out of necessity and expediency. I wrote another post that compared analytical thinking with intuition and I don’t want to repeat myself, but all of us take mental shortcuts based on experience, and we wouldn’t function efficiently if we didn’t.
 
One of the things that the authors don’t discuss (maybe they do in their book) is that the Universe obeys rules of logic. In fact, the more we learn about the machinations of the Universe, on all scales, the more we realise that its laws are fundamentally mathematical. Galileo expressed this succinctly in the 17th Century, and Richard Feynman reiterated the exact same sentiment in the last century.
 
Clifford A Pickover wrote an excellent book, The Paradox of God and the Science of Omniscience, where he points out that even God’s omniscience has limits. To give a very trivial example, even God doesn’t know the last digit of pi, because it doesn’t exist. What this tells me is that even God has to obey the rules of logic. Now, I’ve come across someone (Sye Ten Bruggencate) who argued that the existence of logic proves the existence of God, but I think he has it back-to-front (if God can’t breach the rules of logic). In other words, if God invented logic, ‘He’ had no choice. And God can’t make a prime number nonprime or vice versa. There are things an omnipotent God can’t do and there are things an omniscient God can’t know. So, basically, even if there is a God, logic came first, hence the title of this essay.

14 January 2023

Why do we read?

This is the almost-same title of a book I bought recently (Why We Read), containing 70 short essays on the subject, featuring scholars of all stripes: historians, philosophers, and of course, authors. It even includes scientists: Paul Davies, Richard Dawkins and Carlo Rovelli, being 3 I’m familiar with.
 
One really can’t overstate the importance of the written word, because, oral histories aside, it allows us to extend memories across generations and accumulate knowledge over centuries that has led to civilisations and technologies that we all take for granted. By ‘we’, I mean anyone reading this post.
 
Many of the essayists write from their personal experiences and I’ll do the same. The book, edited by Josephine Greywoode and published by Penguin, specifically says on the cover in small print: 70 Writers on Non-Fiction; yet many couldn’t help but discuss fiction as well.
 
And books are generally divided between fiction and non-fiction, and I believe we read them for different reasons, and I wouldn’t necessarily consider one less important than the other. I also write fiction and non-fiction, so I have a particular view on this. Basically, I read non-fiction in order to learn and I read fiction for escapism. Both started early for me and I believe the motivation hasn’t changed.
 
I started reading extra-curricular books from about the age of 7 or 8, involving creatures mostly, and I even asked for an encyclopaedia for Christmas at around that time, which I read enthusiastically. I devoured non-fiction books, especially if they dealt with the natural world. But at the same time, I read comics, remembering that we didn’t have TV at that time, which was only just beginning to emerge.
 
I think one of the reasons that boys read less fiction than girls these days is because comics have effectively disappeared, being replaced by video games. And the modern comics that I have seen don’t even contain a complete narrative. Nevertheless, there are graphic novels that I consider brilliant, Neil Gaiman’s Sandman series and Hayao Miyazaki’s Nausicaa of the Valley of the Wind being standouts. Watchmen by Alan Moore also deserves a mention.
 
So the escapism also started early for me, in the world of superhero comics, and I started writing my own scripts and drawing my own characters pre-high school.
 
One of the essayists in the collection, Niall Ferguson (author of Doom), starts off by challenging a modern paradigm (or is it a meme?) that we live in a ‘simulation’, citing Oxford philosopher Nick Bostrom, writing in the Philosophical Quarterly in 2003. Ferguson makes the point that reading fiction is akin to immersing the mind in a simulation (my phrasing, not his).
 
In fact, a dream is very much like a simulation, and, as I’ve often said, the language of stories is the language of dreams. But here’s the thing: the motivation for writing fiction, for me, is the same as the motivation for reading it: escapism. Whether reading or writing, you enter a world that only exists inside your head. The ultimate solipsism.

And this surely is a miracle of written language: that we can conjure a world with characters who feel real and elicit emotional responses, while we follow their exploits, failures, love life and dilemmas. It takes empathy to read a novel, and tests have shown that people’s empathy increases after they read fiction. You engage with the character and put yourself in their shoes. It’s one of the reasons we read.
 
 
Addendum: I would recommend the book, by the way, which contains better essays than mine, all with disparate, insightful perspectives.