Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Sunday, 31 December 2023

What are the limits of knowledge?

 This was the Question of the Month in Philosophy Now (Issue 157, August/September 2023) and 11 answers were published in Issue 159, December 2023/January 2024, including mine, which I now post complete with minor edits.

 

Some people think that language determines the limits of knowledge, yet it merely describes what we know rather than limits it, and humans have always had the facility to create new language to depict new knowledge.

There are many types of knowledge, but I’m going to restrict myself to knowledge of the natural world. The ancient Greeks were possibly the first to intuit that the natural world had its own code. The Pythagoreans appreciated that musical pitch bears a mathematical relationship to the length of a vibrating string, and that some geometrical figures contain numerical ratios. They made the giant conceptual leap that this could possibly be a key to understanding the Cosmos itself.
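To make the Pythagorean insight concrete, here is a minimal sketch (my illustration, not part of the original answer) of the ratios they are traditionally credited with noticing: dividing a vibrating string in simple whole-number proportions produces the consonant musical intervals.

```latex
% Pythagorean intervals, as ratios of string lengths
% (equivalently, inverse ratios of frequencies)
\text{octave} = 2:1, \qquad
\text{perfect fifth} = 3:2, \qquad
\text{perfect fourth} = 4:3
```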

Jump forward two millennia, and their insight has borne more fruit than they could possibly have imagined. Richard Feynman made the following observation about mathematics in The Character of Physical Law: “Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.”

Meanwhile, the twentieth-century logician Kurt Gödel proved that in any self-consistent, axiom-based, formal mathematical system, there will always be mathematical truths that can’t be proved true using that system. However, they potentially can be proved if one expands the axioms of the system. This implies that there is no limit to mathematical truths.
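As a rough formal gloss (my paraphrase, not a quote), Gödel’s first incompleteness theorem can be stated like this:

```latex
% For any consistent, recursively axiomatised theory T capable of expressing arithmetic,
% there is a sentence G_T that T can neither prove nor refute:
T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T
% Adding G_T (or other new axioms) yields a stronger theory T', which has its own
% undecidable sentence G_{T'}, and so on without end.
```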

Alonzo Church’s ‘paradox of unknowability’ states, “unless you know it all, there will always be truths that are by their very nature unknowable.” This applies to the physical universe itself. Specifically, since the vast majority of the Universe is unobservable, and possibly infinite in extent, most of it will remain forever unknowable. Given that the limits of knowledge are either infinite or unknowable in both the mathematical and physical worlds, those limits are like a horizon that retreats as we advance towards it.

Sunday, 3 December 2023

Philosophy in practice

 As someone recently pointed out, my posts on this blog invariably arise from things I have read (sometimes watched) and I’ve already written a post based on a column I read in the last issue of Philosophy Now (No 158, Oct/Nov 2023).
 
Well, I’ve since read a few more articles and they have prompted quite a lot of thinking. Firstly, there is an article called What Happened to Philosophy? by Dr Alexander Jeuk, who is, to quote, “an independent researcher writing on philosophy, economics, politics and the institutional structure of science.” He compares classical philosophy (in his own words, the ‘great philosophers’) with the way philosophy is practiced today in academia – that place most of us don’t visit, and whose language we wouldn’t understand if we did.
 
I don’t want to dwell on it, but its relevance to this post is that he laments the specialisation of philosophy, which he blames (if I can use that word) on the specialisation of science. The specialisation of most things is not a surprise to anyone who works in a technical field (I work in engineering). I should point out that I’m not a technical person, so I’m a non-specialist who works in a specialist field. Maybe that puts me in a better position than most to address this. I have a curious mind that started young, and my curiosity shifted as I got older, which means I never really settled into one area of knowledge; and even if I had, I didn’t quite have the intellectual ability to become competent in it. And that’s why this blog is a bit eclectic.
 
In his conclusion, Jeuk suggests that ‘great philosophy’ should be looked for ‘in the classics, and perhaps encourage a re-emergence of great philosophical thought from outside academia.’ He mentions social media and the internet, which is relevant to this blog. I don’t claim to do ‘great philosophy’; I just attempt to disperse ideas and provoke thought. But I think that’s what philosophy represents to most people outside of academia. Academic philosophy has become lost in its obsession with language, whilst using language that most find abstruse, if not opaque.
 
Another article was titled Does a Just Society Require Just Citizens? by Jimmy Alfonso Licon, Assistant Teaching Professor in Philosophy at Arizona State University. I wouldn’t call the title misleading, but it doesn’t really describe the content of the essay, or even get to the gist of it, in my view. Licon introduces a term, ‘moral mediocrity’, which might have been a better title, if an enigmatic one, as it’s effectively what he discusses over the next 3 pages or so.
 
He makes the point that our moral behaviour stems from social norms – a point I’ve made myself – but he makes it more compellingly. Most of us do ‘moral’ acts because that’s what our peers do, and we are species-destined (my term, not his) to conform. This is what he calls moral mediocrity, because we don’t really think it through or deliberate on whether it’s right or wrong, though we might convince ourselves that we do. He makes the salient point that if we had lived when slavery was the norm, we would have been slave-owners (assuming the reader is white, affluent and male). Likewise, suffrage was once anathema to a lot of women, as well as men. This supports my view that morality changes, and what was once considered radical becomes conservative. And such changes are usually generational, as we are witnessing in the current age with marriage equality.
 
He coins another term when he says ‘we are the recipients of a moral inheritance’ (his italics). In other words, the moral norms we follow today, we’ve inherited from our forebears. Towards the end of his essay, he discusses Kant’s ideas on ‘duty’. I won’t go into that, but, if I understand Licon’s argument correctly, he’s saying that a ‘just society’ is one that has norms and laws that allow moral mediocrity, whereby its members don’t have to think about what’s right or wrong; they just follow the rules. This leads to his very last sentence: ‘And this is fundamentally the moral problem with moral mediocrity: it is wrongly motivated.’
 
I’ve written on this before, and, given the title as well as the content, I needed to think on what I consider leads to a ‘just society’. And I keep coming back to the essential need for trust. Societies don’t function without some level of trust, but neither do personal relationships, contractual arrangements or the raising of children.
 
And this leads to the third article in the same issue, Seeing Through Transparency, by Paul Doolan, who ‘teaches philosophy at Zurich International School and is the author of Collective Memory and the Dutch East Indies: Unremembering Decolonization (Amsterdam Univ Press, 2021)’.
 
In effect, he discusses the paradoxical nature of modern societies, whereby we insist on ‘transparency’ yet claim that privacy is sacrosanct – see the contradiction? Is this hypocrisy? And this relates directly to trust. Without transparency, be it corporate or governmental, we have trust issues. My experience is that when it comes to personal relationships, it’s a given, a social norm in fact, that a person reveals as much of their interior life as they want to, and it’s not ours to mine. An example of moral mediocrity perhaps. And yet, as Doolan points out, we give away so much on social media, where our online persona takes on a life of its own, which we cultivate (this blog not being an exception).
 
I think there does need to be transparency about decisions that affect our lives collectively, as opposed to secrets we all keep for the sake of our sanity. I have written dystopian fiction where people are surveilled to the point of monitoring all speech, and explored how it affects personal relationships. This already happens in some parts of the world. I’ve also explored a dystopian scenario where the surveillance is less obvious – every household has an android that monitors all activity. We might already have that with certain devices in our homes. Can you turn them off?  Do you have a device that monitors everyone who comes to your door?
 
The thing is that we become habituated to their presence, and it becomes part of our societal structure. As I said earlier, social norms change and are largely generational. Now they incorporate AI as well, and it’s happening without a lot of oversight or consultation with users. I don’t want to foster paranoia, but the genie has already escaped and I’d suggest it’s a matter of how we use it rather than how we put it back in the bottle.

Leaving that aside, Doolan also asks if you would behave differently if you could be completely invisible, which, of course, has been explored in fiction. We all know that anonymity fosters bad behaviour – just look online. One of my tenets is that honesty starts with honesty to oneself; it determines how we behave towards others.
 
I also know that an extreme environment, like a prison camp, can change one’s moral compass. I’ve never experienced it, but my father did. It brings out the best and worst in people, and I’d contend that you wouldn’t know how you’d be affected if you haven’t experienced it. This is an environment that turns Licon’s question on its head: can you be just in an intrinsically unjust environment?

Saturday, 25 November 2023

Are people on the Left more intelligent?

Now there’s a provocative question, and the short answer is, No. Political leanings are more associated with personality traits than IQ, according to studies I’ve read about, though I’m no expert. Having said that, I raise this subject because I think there’s a perception on both sides that there is a difference, which is why people on the Right love to use the word ‘elites’ to describe what they see as a distortion of reality on subjects like climate change, the COVID pandemic and just about anything they disagree with that involves a level of expertise that most of us don’t have.
 
We live in a world overflowing with information (to which, ironically, I am a contributor) and most, if not all of it, is imbibed through a political filter. On social media we live in echo-chambers, so that confirmation bias is unplugged from all conduits of dissent.
 
To provide a personal example, I watch panel discussions facilitated by The Australia Institute using Zoom, on topics like plastic waste, whistleblower protection, Pacific nations relations and the economics of inflation (all relatively recent topics). The titles alone have a Leftish flavour (though not all), and would be dismissed as ‘woke’ by many on the Right. They are a leftwing think tank, and the panellists are all academics or experts in their field. Whether you agree with them or not, they are well informed.
 
Of course, there are rightwing thinktanks as well; the most obvious in Australia being the Institute of Public Affairs (IPA) with the catchcry, The Voice for Freedom. The Australia Institute has its own catchcry, We Change Minds, which is somewhat optimistic given it appears to be always preaching to the choir. It should be pointed out that the IPA can also provide their own experts and research into individual topics.
 
I’ve never hidden my political leanings, and only have to look at my own family to appreciate that personality traits play a greater role than intelligence. I’m the political black sheep, yet we still socialise and exhibit mutual respect. The same with some of my neighbours, who have strong religious views, yet I count as friends.
 
It’s not a cliché that people of an artistic bent tend to be leftists. I think this is especially true in theatre, where many an eccentric personality took refuge, not to mention people with different sexual orientation to the norm. We are generally more open to new ideas and more tolerant of difference. Negative traits include a vulnerability to neurosis, even depression, and a lack of discipline or willingness to abide by rules.
 
One of the contentious points-of-view I hold is that people on the Left have a propensity for being ahead of their time. It’s why they are often called ‘progressives’, but usually only by history. In their own time, they could be called ratbags, radicals or nowadays, ‘elitist’. History tends to bear this out, and it’s why zeitgeist changes are often generational.
 
Recently, I’ve come across a couple of discussions on Bertrand Russell (including a 1960 interview with him) and was surprised to learn how much we have in common, philosophically. Not only in regard to epistemology and science (which is another topic), but also ethics and morality. To quote from an article in Philosophy Now (Issue 158, Oct/Nov 2023) titled Russell’s Moral Quandary by David Berman (Professor Emeritus Fellow, Philosophy Department, Trinity College Dublin):
 
…our moral judgements [According to Russell] come from a combination of our nurture and education, but primarily from our feelings and their consequences. Hence they do not arise from any timeless non-natural absolutes [like God], for they are different in different times and places.
 

It’s the very last phrase that is relevant to this essay, though it needed to be put in context. Where I possibly depart from Russell is in the role of empathy, but that’s also another discussion.
 
Even more recently, I had a conversation with a mother of a son and daughter, aged 22 and 19 respectively, where she observed that her daughter was living in a different world to the one she grew up in, particularly when it came to gender roles and expectations. I imagine many would dismiss this as a manifestation of wokeism, but I welcome it. I’ve long argued that there should be more cross-generational conversation. I’ve seen this in my professional life (in engineering), where there is a natural synergy between myself and cleverer, younger people, because we are willing to learn from each other. It naturally militates against closed-mindedness.
 
The Right are associated with 2 social phenomena that tend to define them. Firstly, they wish to maintain the status quo, even turn back the clock, to the point that they will find their own ‘evidence’ to counter proposed changes. This is not surprising, as it’s almost the definition of conservatism. But the second trait, for want of a better word, has become more evident and even dangerous in modern politics, both locally and overseas. It’s particularly virulent in America, and I’m talking about the propensity to oppose all alternative views to the point of self-defeatism. I know that extremists on the Left can be guilty as well, but there are personalities on the Right who thrive on division; who both cultivate and exploit it. The end result is often paralysis, as we’ve seen recently in America with the House Speaker debacle, and its close encounter with a nationwide catastrophe.
 
There is a view held by many, including people who work in my profession, that the best way to achieve the most productive outcome is through competition. In theory, it sounds good, but in practice – and I’ve seen it many times – you end up with 2 parties in constant argument and opposition to each other. Even if there are more than 2, they tend to align into 2. What you get is decision-paralysis, delays, stalemate and a neverending blame-game. On the other hand, when parties co-operate and collaborate, you get the exact opposite. Is this a surprise? No.
 
From my experience, the best leaders in project management are the ones who can negotiate compromises and it’s the same in politics. The qualities are openness, tolerance and persuasive negotiation skills. I’ve seen it in action numerous times.
 
In a post I wrote on Plato, I talked about his philosopher-king idea, which is an ideal that could never work in practice. Nevertheless, one of the problems with democracy, as it’s practiced virtually everywhere, is that the most popular opinion on a particular topic is not necessarily the best informed. I can see a benefit in experts playing a greater role in determining policies. We saw this in Australia during the pandemic and I believe it worked, though not everyone agrees. Some argue that the economy suffered unnecessarily. But this was a worldwide experiment, and we saw that where medical advice was ignored and fatalities arose accordingly, the economy suffered anyway.

Friday, 17 November 2023

On the philosophy of reality

 This follows on from my last post, after I saw a YouTube interview with Raymond Tallis on Closer to Truth. He’s all but saying that physics has lost the plot, or at least that’s my takeaway. I happen to know that he’s also writing a book on ‘reality’ – might even have finished it – which is why he can’t stop talking about it, and, it seems, neither can I.
 
I think there are 3 aspects to this discussion, even though they are not clearly delineated. Nevertheless, it might be worth watching the video to better appreciate what I’m talking about. While I agree with some of his points, I think Tallis’s main thrust – that physicists contend ‘reality dissolves’ – is a strawman argument, as I’ve never heard or read a physicist make that claim. Robert Lawrence Kuhn, who hosts all the talks on Closer To Truth, appears to get uncharacteristically flustered, but I suspect it’s because he intuitively thought the argument facile but couldn’t easily counter it. It would have been far more interesting and edifying if Tallis had been debating someone like Paul Davies, who is not only a physicist, but knows some philosophy.
 
At one point they get onto evolution, as Kuhn attempts to make the distinction between how we’ve evolved to understand the world but culturally moved beyond that. This leads to the 3 aspects I alluded to earlier.
 
The first aspect is that there is an objective reality independent of us, which we need to take seriously because it can kill us. As Tallis points out, this is what we’ve evolved to avoid, otherwise we wouldn’t be here. As I’ve pointed out many times, our brains create a model of that reality so we can interact with it. This is the second aspect, and is part of our evolutionary heritage.
 
The third aspect appears to be completely at odds with this and that appears to be what Tallis has an issue with. The third aspect is that we make mathematical models of reality, which seem, on the surface at least, to have no bearing on the reality that we experience. We don’t see wavefunctions of particles or twins aging at different rates when one goes on a journey somewhere.
 
It doesn’t help that different physicists attempt to give different accounts of what’s happening. For example, a lot of physicists believe that the wavefunction is just a useful mathematical fiction. Others believe that it carries on in another universe after the ‘observation’ or ‘measurement’. All acknowledge that we can’t explain exactly what happens, which is why it’s called the ‘measurement problem’.
 
What many people don’t tell you is that QM only makes predictions about events, which is why it deals in probabilities; and logically, an observation requires a time lapse, no matter how small, before it’s recorded, so it axiomatically happens in the past. As Paul Davies points out, there is an irreversibility in time once the ‘observation’ has been made.
 
The very act of measurement breaks the time symmetry of quantum mechanics in a process sometimes described as the collapse of the wave function…. the rewind button is destroyed as soon as that measurement is made.
 
So, nothing ‘dissolves’; it’s just not observable until after the event, and the event could be a photon hitting a photo-sensitive surface, an isotope undergoing some form of radioactive decay, or an electron hitting a screen and emitting light. Even Sabine Hossenfelder (in one of her videos) points out that the multiple paths of Feynman’s ‘sum-over-histories’ path integral are in the future of the measurement that they predict via calculation.
 
Tallis apparently thinks that QM implies that there is nothing solid in the world, yet it was Freeman Dyson, in collaboration with Andrew Lenard, who used Wolfgang Pauli’s Exclusion Principle to demonstrate why solid objects don’t meld into each other. Dyson acknowledged that ‘the proof was extraordinarily complicated, difficult and opaque’, which might explain why it took so long for someone to calculate it (1967).
 
Humans are unique within the animal kingdom in that we’ve developed tools that allow us to ‘sense’ phenomena that can’t be detected through our biological senses. It’s this very attribute that has led to the discipline of science, and in the last century it has taken giant strides beyond anything our predecessors could have imagined. Not only have we learned that we live in a galaxy that is one among trillions and that the Universe is roughly 14 billion years old, but we can ‘sense’ radiation emitted only 380,000 years after its birth. Who would have thought? At the other end of the scale, we’ve built a giant underground synchrotron that ‘senses’ the smallest known particles in nature, called quarks. They are sub-subatomic.
 
But, in conjunction with these miracle technologies, we have discovered, or developed (a combination of both), mathematical tools that allow us to describe these phenomena. In fact, as Richard Feynman pointed out, mathematics is the only language in which ‘nature speaks’. It’s like the mathematical models are another tool in addition to the technological ones that extend our natural senses.
 
Having said that, sometimes these mathematical models don’t actually reflect the real world. A good example is Ptolemy’s model of the solar system using epicycles, that had Earth at its centre. A possible modern example is String Theory, which predicts up to 10 spatial dimensions when we are only aware of 3.
 
Sabine Hossenfelder (already mentioned) wrote a book called Lost in Math, where she challenges this paradigm. I think that this is where Tallis is coming from, though he doesn’t specifically say so. He mentions a wavefunction (in passing), and I’ve already pointed out that some physicists see it as a convenient and useful mathematical fiction. One is Viktor T Toth (on Quora) who says:
 
The mathematical fiction of wavefunction collapse was “invented” to deal with the inconvenient fact that otherwise, we’d have to accept what the equations tell us, namely that quantum mechanics is nonlocal (as per Bell’s theorem)…

 
But it’s this very ‘wavefunction collapse’ that Davies was referring to when he pointed out that it ‘destroys the rewind button’. Toth has a different perspective:
 
As others pointed out, wavefunction collapse is, first and foremost, a mathematical abstraction, not a physical process. If it were a physical process, it would be even weirder. Rather than subdividing spacetime with an arbitrarily chosen hypersurface called “now” into a “before observation” and an “after observation” half, connected by the non-unitary transformation of the “collapse”, wavefunction collapse basically implies throwing away the entire universe, replacing it with a different one (past, present, and future included) containing the collapsed wavefunction instead of the original.
 
Most likely, it’s expositions like this that make Tallis throw up his hands (figuratively speaking), even though I expect he’s never read anything by Toth. Just to address Toth’s remark, I would contend that the ‘arbitrarily chosen hypersurface called “now”’ is actually the edge in time of the entire universe. A conundrum that is rarely acknowledged, let alone addressed, is that the Universe appears to have no edge in space while having an edge in time. Notice how different his ‘visualisation’ is to Davies’, yet both of them are highly qualified and respected physicists.
 
So, while there are philosophical differences among physicists, one can possibly empathise with the frustrations of a self-identified philosopher. (Tallis’s professional background is in neuroscience.)
 
Nevertheless, Tallis uses quantum mechanics just like the rest of us, because all electronic devices are dependent on it, and we all exploit Einstein’s relativity theories when we use our smartphones to tell us where we are.
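To give a rough sense of why that matters, here is a back-of-the-envelope calculation using the standard textbook figures for GPS (my illustration, not something Tallis discusses): satellite clocks run slow by about 7 microseconds per day because of their orbital speed (special relativity) and fast by about 45 microseconds per day because gravity is weaker at their altitude (general relativity).

```latex
% Net clock drift if relativity were ignored, and the resulting position error
\Delta t \approx 45\,\mu\text{s} - 7\,\mu\text{s} \approx 38\,\mu\text{s per day}
\\
\text{error} \approx c\,\Delta t \approx (3\times10^{8}\,\text{m/s})(38\times10^{-6}\,\text{s}) \approx 11\,\text{km per day}
```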
 
So the mathematical models, by and large, work. And they work so well, that we don’t need to know anything about them, in the same way you don’t need to know anything about all the technology your car uses in order for you to drive it.
 
Tallis, like many philosophers, sees mathematics as a consequence of our ability to measure things, which we then turn into equations that conveniently describe natural phenomena. But the history of Western science reveals a different story, where highly abstract mathematical discoveries later provide an epistemological key to our comprehension of the most esoteric natural phenomena. The wavefunction is a good example: using an unexpected mathematical relationship discovered by Euler in the 1700s, it encapsulates in one formula (Schrodinger’s) superposition, entanglement and Heisenberg’s Uncertainty Principle. So it may just be a mathematical abstraction, yet it describes the most enigmatic features discovered in the natural world thus far.
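For readers who want to see what is being referred to, here are the standard textbook forms of Euler’s relationship and the (time-dependent) Schrodinger equation it sits inside – just an illustration, not anything specific to Tallis’s essay:

```latex
% Euler's formula (18th century)
e^{i\theta} = \cos\theta + i\sin\theta
\\
% Time-dependent Schrodinger equation; a free-particle solution is a complex wave
i\hbar \frac{\partial}{\partial t}\Psi(x,t) = \hat{H}\,\Psi(x,t),
\qquad \Psi(x,t) \propto e^{\,i(kx - \omega t)}
```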
 
From what I read and watch (on YouTube), I don’t think you can do theoretical physics without doing philosophy. Philosophy (specifically, epistemology) looks at questions that don’t have answers using our current bank of knowledge. Examples include the multiverse, determinism and free will. Philosophers with a limited knowledge of physics (and that includes me) are not in the same position as practicing physicists to address questions about reality. This puts Tallis at a disadvantage. Physicists can’t agree on topics like the multiverse, superdeterminism, free will or the anthropic principle, yet often hold strong views regardless.
 
I’m always reminded of John Wheeler’s metaphor of science as an island of knowledge in a sea of ignorance, with the shoreline being philosophy. Note that as the island expands so does the shoreline of our ignorance.

Monday, 23 October 2023

The mystery of reality

Many will say, ‘What mystery? Surely, reality just is.’ So, where to start? I’ll start with an essay by Raymond Tallis, who has a regular column in Philosophy Now called, Tallis in Wonderland – sometimes contentious, often provocative, always thought-expanding. His latest in Issue 157, Aug/Sep 2023 (new one must be due) is called Reflections on Reality, and it’s all of the above.
 
I’ve written on this topic many times before, so I’m sure to repeat myself. But Tallis’s essay, I felt, deserved both consideration and a response, partly because he starts with the one aspect of reality that we hardly ever ponder, which is doubting its existence.
 
Actually, not so much its existence, but whether our senses fool us, which they sometimes do, like when we dream (a point Tallis makes himself). And this brings me to the first point about reality that no one ever seems to discuss, and that is its dependence on consciousness, because when you’re unconscious, reality ceases to exist, for You. Now, you might argue that you’re unconscious when you dream, but I disagree; it’s just that your consciousness is misled. The point is that we sometimes remember our dreams, and I can’t see how that’s possible unless there is consciousness involved. If you think about it, everything you remember was laid down by a conscious thought or experience.
 
So, just to be clear, I’m not saying that the objective material world ceases to exist without consciousness – a philosophical position called idealism (advocated by Donald Hoffman) – but that the material objective world is ‘unknown’ and, to all intents and purposes, might as well not exist if it’s unperceived by conscious agents (like us). Try to imagine the Universe if no one observed it. It’s impossible, because the word, ‘imagine’, axiomatically requires a conscious agent.
 
Tallis proffers a quote from celebrated sci-fi author, Philip K Dick: ‘Reality is that which, when you stop believing in it, doesn’t go away’ (from The Shifting Realities of Philip K Dick, 1995). And this allows me to segue into the world of fiction, which Tallis doesn’t really discuss, but it’s another arena where we willingly ‘suspend disbelief’ to temporarily and deliberately conflate reality with non-reality. This is something I have in common with Dick, because we have both created imaginary worlds that are more than distorted versions of the reality we experience every day; they’re entirely new worlds that no one has ever experienced in real life. But Dick’s aphorism expresses this succinctly. The so-called reality of these worlds, in these stories, only exists while we believe in them.
 
I’ve discussed elsewhere how the brain (not just human but animal brains, generally) creates a model of reality that is so ‘realistic’, we actually believe it exists outside our head.
 
I recently had a cataract operation, which was most illuminating when I took the bandage off, because my vision in that eye was so distorted, it made me feel sea sick. Everything had a lean to it and it really did feel like I was looking through a lens; I thought they had botched the operation. With both eyes open, it looked like objects were peeling apart. So I put a new eye patch on, and distracted myself for an hour by doing a Sudoku problem. When I had finished it, I took the patch off and my vision was restored. The brain had made the necessary adjustments to restore the illusion of reality as I normally interacted with it. And that’s the key point: the brain creates a model so accurately, integrating all our senses, but especially, sight, sound and touch, that we think the model is the reality. And all creatures have evolved that facility simply so they can survive; it’s a matter of life-and-death.
 
But having said all that, there are some aspects of reality that really do only exist in your mind, and not ‘out there’. Colour is the most obvious, but so are sound and smell, which may all be experienced differently by other species – how are we to know? Actually, we do know that some animals can hear sounds that we can’t and see colours that we don’t, and vice versa. And I contend that these sensory experiences are among the attributes that keep us distinct from AI.
 
Tallis makes a passing reference to Kant, who argued that space and time are also aspects of reality that are produced by the mind. I have always struggled to understand how Kant got that so wrong. Mind you, he lived more than a century before Einstein all but proved that space and time are fundamental parameters of the Universe. Nevertheless, there are more than a few physicists who argue that the ‘flow of time’ is a purely psychological phenomenon. They may be right (but arguably for different reasons). If consciousness exists in a constant present (as expounded by Schrodinger) and everything else becomes the past as soon as it happens, then the flow of time is guaranteed for any entity with consciousness. However, many physicists (like Sabine Hossenfelder), if not most, argue that there is no ‘now’ – it’s an illusion.
 
Speaking of Schrodinger, he pointed out that there are fundamental differences between how we sense sight and sound, even though they are both waves. In the case of colour, we can blend them to get a new colour, and in fact, as we all know, all the colours we can see can be generated by just 3 colours, which is how the screens on all your devices work. However, that’s not the case with sound, otherwise we wouldn’t be able to distinguish all the different instruments in an orchestra. Just think: all the complexity is generated by a vibrating membrane (in the case of a speaker) and somehow our hearing separates it all. Of course, it can be done mathematically with a Fourier transform, but I don’t think that’s how our brains work, though I could be wrong.
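As a toy illustration of that Fourier point, here is a short Python sketch (my own, with made-up frequencies standing in for two ‘instruments’; it makes no claim about how the ear or brain actually does it):

```python
import numpy as np

# Two 'instruments' played together: a 440 Hz tone and a 660 Hz tone,
# summed into a single waveform, much as a speaker membrane produces.
rate = 8000                                  # samples per second
t = np.arange(0, 1.0, 1.0 / rate)
mixed = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

# A Fourier transform separates the blend back into its component pitches.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), d=1.0 / rate)

# The two dominant frequencies recovered from the mixed signal (~440 and ~660 Hz).
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.round(1)))
```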
 
And this leads me to discuss the role of science, and how it challenges our everyday experience of reality. Not surprisingly, Tallis also took his discussion in that direction. Quantum mechanics (QM) is the logical starting point, and Tallis references Bohr’s Copenhagen interpretation, ‘the view that the world has no definite state in the absence of observation.’ Now, I happen to think that there is a logical explanation for this, though I’m not sure anyone else agrees. If we go back to Schrodinger again, but this time his eponymous equation, it describes events before the ‘observation’ takes place, albeit with probabilities. What’s more, all the weird aspects of QM, like the Uncertainty Principle, superposition and entanglement, are all mathematically entailed in that equation. What’s missing is relativity theory, which has since been incorporated into QED or QFT.
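The ‘albeit with probabilities’ part is the Born rule, which is worth stating explicitly (standard form, my addition): the wavefunction itself is never observed; its squared magnitude gives the probability of each possible future outcome, and those probabilities sum to one.

```latex
P(x) = \left|\Psi(x,t)\right|^{2},
\qquad
\int_{-\infty}^{\infty} \left|\Psi(x,t)\right|^{2} dx = 1
```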
 
But here’s the thing: once an observation or ‘measurement’ has taken place, Schrodinger’s equation no longer applies. In other words, you can’t use Schrodinger’s equation to describe something that has already happened. This is known as the ‘measurement problem’, because no one can explain it. But if QM only describes things that are yet to happen, then all the weird aspects aren’t so weird.
 
Tallis also mentions Einstein’s ‘block universe’, which implies that past, present and future all exist simultaneously. In fact, that’s what Sabine Hossenfelder says in her book, Existential Physics:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.

 
And:

Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I’m not sure how she resolves this with cosmological history, but it does explain why she believes in superdeterminism (meaning the future is fixed), which axiomatically leads to her other strongly held belief that free will is an illusion; but so did Einstein, so she’s in good company.
 
In a passing remark, Tallis says, ‘science is entirely based on measurement’. I know from other essays Tallis has written that he believes the entire edifice of mathematics only exists because we can measure things, which we then apply to the natural world, which is why we have so-called ‘natural laws’. I’ve discussed his ideas on this elsewhere, but I think he has it back-to-front, whilst acknowledging that our ability to measure things, which is an extension of counting, is how humanity was introduced to mathematics. In fact, the ancient Greeks put geometry above arithmetic because it’s so physical. This is why there were no negative numbers in their mathematics: the idea of a negative volume or area made no sense.
 
But, in the intervening 2 millennia, mathematics took on a life of its own, with such exotic entities as negative square roots and non-Euclidean geometry, which in turn suddenly found an unexpected home in QM and relativity theory respectively. All of a sudden, mathematics was informing us about reality before measurements were even made. Take Schrodinger’s wavefunction, which lies at the heart of his equation, and can’t be measured because it only exists in the future, assuming what I said above is correct.
 
But I think Tallis has a point, and I would argue that consciousness can’t be measured, which is why it might remain inexplicable to science, correlation with brain waves and their like notwithstanding.
 
So what is the mystery? Well, there’s more than one. For a start there is consciousness, without which reality would not be perceived or even be known, which seems to me to be pretty fundamental. Then there are the aspects of reality which have only recently been discovered, like the fact that time and space can have different ‘measurements’ dependent on the observer’s frame of reference. Then there is the increasing role of mathematics in our comprehension of reality at scales both cosmic and subatomic. In fact, given the role of numbers and mathematical relationships in determining fundamental constants and natural laws of the Universe, it would seem that mathematics is an inherent facet of reality.

 

Addendum:

As it happens, I wrote a letter to Philosophy Now on this topic, which they published, and also passed onto Raymond Tallis. As a consequence, we had a short correspondence - all very cordial and mutually respectful.

One of his responses can be found, along with my letter, under Letters, Issue 160. Scroll down to Lucky Guesses.
 

Sunday, 15 October 2023

What is your philosophy of life and why?

This was a question I answered on Quora, and, without specifically intending to, I brought together 2 apparently unrelated topics. The reason I discuss language is because it’s so intrinsic to our identity, not only as a species, but as an individual within our species. I’ve written an earlier post on language (in response to a Philosophy Now question-of-the-month), which has a different focus, and I deliberately avoided referencing that.
 
A ‘philosophy of life’ can be represented in many ways, but my perspective is within the context of relationships, in all their variety and manifestations. It also includes a recurring theme of mine.



First of all, what does one mean by ‘philosophy of life’? For some people, it means a religious or cultural way-of-life. For others it might mean a category of philosophy, like post-modernism or existentialism or logical positivism.
 
For me, it means a philosophy on how I should live, and on how I both look at and interact with the world. This is not only dependent on my intrinsic beliefs that I might have grown up with, but also on how I conduct myself professionally and socially. So it’s something that has evolved over time.
 
I think that almost all aspects of our lives are dependent on our interactions with others, which starts right from when we were born, and really only ends when we die. And the thing is that everything we do, including all our failures and successes occur in this context.
 
Just to underline the significance of this dependence, we all think in a language, and we all gain our language from our milieu at an age before we can rationally and critically think, especially compared to when we mature. In fact, language is analogous to software that gets downloaded from generation to generation, so that knowledge can also be passed on and accumulated over ages, which has given rise to civilizations and disciplines like science, mathematics and art.
 
This all sounds off-topic, but it’s core to who we are and it’s what distinguishes us from other creatures. Language is also key to our relationships with others, both socially and professionally. But I take it further, because I’m a storyteller and language is the medium I use to create a world inside your head, populated by characters who feel like real people and who interact in ways we find believable. More than any other activity, this illustrates how powerful language is.
 
But it’s the necessity of relationships in all their manifestations that determines how one lives one’s life. As a consequence, my philosophy of life centres around one core value and that is trust. Without trust, I believe I am of no value. But more than that, trust is the foundational value upon which a society either flourishes or devolves into a state of oppression with its antithesis, rebellion.

 

Tuesday, 10 October 2023

Oppenheimer and lessons for today

 I watched Chris Nolan’s 3hr movie, Oppenheimer, and then read the 600 page book it was based on, American Prometheus, by Kai Bird and Martin J. Sherwin, which deservedly won a Pulitzer prize. Its subtitle is The Triumph and Tragedy of J. Robert Oppenheimer, which really does sum up his life.
 
I think the movie should win a swag of Oscars, not just because of the leading actors, but the way the story was told. In the movie, the ‘triumph’ and the ‘tragedy’ are more-or-less told in parallel, using the clever device of colour for the ‘bomb’ story and black and white for the political story. From memory, the bomb is detonated at the 2hr mark and the remainder of the film focuses on what I’d call the ‘inquisition’, though ‘kangaroo court’ is possibly a more accurate description and is used at least once in the book by a contemporary commentator.
 
Despite its length, the book is a relatively easy read and is hard to put down, or at least it was for me – it really does read like a thriller in places.
 
It so happened that I followed it up with The Last Days of Socrates by Plato, and I couldn’t help but draw comparisons. Both were public figures who had political influence that wasn’t welcome or even tolerated in some circles.
 
I will talk briefly about Socrates, as I think it’s relevant, even though it was 2400 years ago. Plato, of course, adopts Socrates’ perspective, and though I expect Plato was present at his trial, we don’t know how accurate a transcription it is. Nevertheless, the most interesting and informative part of the text is the section titled The Apology of Socrates (‘Socrates’ Defence’). Basically, Socrates argued that he had been the victim of what we would call a ‘smear campaign’, or even slander, and this was well and truly before social media, but perhaps they had something equivalent in Athens (400–300 BC). Socrates makes the point that he’s a private citizen, not a public figure, and says, ‘…you can be quite sure, men of Athens, that if I’d set about a political career all those years ago, I’d long ago have come to a sticky end… Anyone who is really fighting for justice must live as a private citizen and not a public figure if he’s going to survive even a short time.’
 
One of the reasons, if not the main reason, according to Plato, that Socrates accepted his fate was that he refused to change. Practicing philosophy in the way he did was, in effect, his essence.
 
The parallel with Oppenheimer is that he publicly advocated policies that were not favoured by certain politicians, and certainly not by the military. But to appreciate this, one must see it in the political context of its time.
 
Firstly, one must understand that immediately after the Second World War, most if not all of the nations that had been involved didn’t really have an appetite for another conflict, especially on that scale, let alone one involving nuclear weapons, which, I believe, is how the Cold War came to be.
 
If one looks at warfare through a historical lens, the side with a technological advantage invariably prevails. A good example is the Roman Empire, which could build roads, bridges and viaducts, all in the service of its armies.
 
So, there was a common view among the American military, as well as the politicians of the day, that, because they had the atomic bomb, they had a supreme technological superiority and all they had to do was keep the knowledge from the enemy.
 
Oppenheimer knew this was folly and was advocating an arms treaty with Russia decades before it became accepted. Not only Oppenheimer, but most scientists, knew that humanity would not survive a nuclear holocaust, but many politicians believed that the threat of a nuclear war was the only road to peace. For this reason, many viewed Oppenheimer as a very dangerous man. Oppenheimer opposed the hydrogen bomb because it was effectively a super-bomb that would make the atomic bomb look like a comparative non-event.
 
He also knew that the US Air Force had already circled which cities in Russia they would eliminate should another hot war start. Oppenheimer knew this was madness, and today there are few people who would not agree with him. Hindsight is a remarkable facility.
 
On February 17 1953, Oppenheimer gave a speech in New York before an audience comprising a ‘closed meeting of the Council on Foreign Relations’, in which he attempted to relay the precarious state the world was in and the pivotal role that the US was playing, while all the time acknowledging that he was severely limited in what he could actually tell them. Here are some excerpts that give a flavour:

Looking a decade ahead, it is likely to be small comfort that the Soviet Union is four years behind us… the very least we can conclude is that our twenty-thousandth bomb… will not in any deep strategic sense offset their two-thousandth.
 
We have from the first, maintained that we should be free to use these weapons… [and] one ingredient of this plan is a rather rigid commitment to their use in a very massive, initial, unremitting strategic assault on the enemy.
 
Without putting it into actual words, Oppenheimer was spelling out America’s defence policy towards the Soviets at that time. What he couldn’t tell them was that this was the strategy of the Strategic Air Command – to obliterate scores of Russian cities in a genocidal air strike.
 
In his summing up, he said, We may anticipate a state of affairs in which the two Great Powers will each be in a position to put an end to civilization and life of the other, though not without risking its own.
 
He then gave this chilling analogy: We may be likened to two scorpions in a bottle, each capable of killing the other, but only at the risk of its own life.
 
This all happened against the backdrop and hysteria of McCarthyism, which Einstein compared to Nazi Germany. Oppenheimer, his wife and his brother all had links with the Communist party, though Oppenheimer distanced himself when he became aware of the barbaric excesses of Stalin’s Russia. The FBI had him under surveillance for much of his career, both during and after the war, and it was countless files of FBI wiretaps that were used as evidence against him in his so-called hearing. They would have been inadmissible in a proper court of law, and in the hearing, his counsel was not allowed to access them because they were ‘classified’. There were 3 panel members, and one of them, a Dr Evans, wrote a dissent, arguing that there was no new evidence, and that if Oppenheimer had been cleared in 1947, he was even less of a security risk in 1954.
 
After the ‘hearing’, media was divided, just like it would be today, and that’s its relevance to modern America. The schism was the left and right of politics and that schism is still there today, and possibly even deeper than it was then.
 
If one looks at the downfall of great people – I’m thinking Alan Turing and Galileo Galilei, not to mention Socrates – history judges them differently to how they were judged in their day, and that also goes for Oppenheimer. Hypatia is another who comes to mind, though she lived (and died) around 400 AD. What all these have in common, other than being persecuted, is that they were ahead of their time. People will say the same about advocates for same-sex marriage, not to mention the Cassandras warning about climate change.


Addendum: I recently wrote a post on Quora that’s made me revisit this. Basically, I gave this as an example of when the world was on the brink of madness – specifically, the potential for nuclear Armageddon – and Oppenheimer was almost a lone voice in trying to warn people, while having neither the authority nor the legal right to do so.
 
It made me consider that we are now possibly on the brink of a different madness, that I referenced in my Quora post:
 
But the greatest harbinger of madness on the world stage is that the leading contender for the next POTUS is a twice-impeached, 4-times indicted ex-President. To quote Robert De Niro: “Democracy won’t survive the return of a wannabe dictator.” We are potentially about to enter an era where madness will reign in the most powerful nation in the world. It’s happened before, so we are well aware of the consequences. Trump may not lead us into a world war, but despots will thrive and alliances will deteriorate if not outright crumble.


Saturday, 16 September 2023

Modes of thinking

I’ve written a few posts on creative thinking as well as analytical and critical thinking. But, not that long ago, I read a not-so-recently published book (2015) by 2 psychologists (John Kounios and Mark Beeman) titled The Eureka Factor: Creative Insights and the Brain. To quote from the back fly-leaf:
 
Dr John Kounios is Professor of Psychology at Drexel University and has published cognitive neuroscience research on insight, creativity, problem solving, memory, knowledge representation and Alzheimer’s disease.
 
Dr Mark Beeman is Professor of Psychology and Neuroscience at Northwestern University, and researches creative problem solving and creative cognition, language comprehension and how the right and left hemispheres process information.

 
They divide people into 2 broad groups: ‘Insightfuls’ and ‘analytical thinkers’. Personally, I think the coined term, ‘insightfuls’ is misleading or too narrow in its definition, and I prefer the term ‘creatives’. More on that below.
 
As the authors say, themselves, ‘People often use the terms “insight” and “creativity” interchangeably.’ So that’s obviously what they mean by the term. However, the dictionary definition of ‘insight’ is ‘an accurate and deep understanding’, which I’d argue can also be obtained by analytical thinking. Later in the book, they describe insights obtained by analytical thinking as ‘pseudo-insights’, and the difference can be ‘seen’ with neuro-imaging techniques.
 
All that aside, they do provide compelling arguments that there are 2 distinct modes of thinking that most of us experience. Very early in the book (in the preface, actually), they describe the ‘ah-ha’ experience that we’ve all had at some point, where we’re trying to solve a puzzle and then it comes to us unexpectedly, like a light-bulb going off in our head. They then relate something that I didn’t know, which is that neurological studies show that when we have this ‘insight’ there’s a spike in our brain waves and it comes from a location in the right hemisphere of the brain.
 
Many years ago (decades) I read a book called Drawing on the Right Side of the Brain by Betty Edwards. I thought neuroscientists would disparage this as pop-science, but Kounios and Beeman seem to give it some credence. Later in the book, they describe this in more detail, where there are signs of activity in other parts of the brain, but the ah-ha experience has a unique EEG signature and it’s in the right hemisphere.
 
The authors distinguish this unexpected insightful experience from an insight that is a consequence of expertise. I made this point myself, in another post, where experts make intuitive shortcuts based on experience that the rest of us don’t have in our mental toolkits.
 
They also spend an entire chapter on examples involving a special type of insight, where someone spends a lot of time thinking about a problem or an issue, and then the solution comes to them unexpectedly. A lot of scientific breakthroughs follow this pattern, and the point is that the insight wouldn’t happen at all without all the rumination taking place beforehand, often over a period of weeks or months, sometimes years. I’ve experienced this myself, when writing a story, and I’ll return to that experience later.
 
A lot of what we’ve learned about the brain’s functions has come from studying people with damage to specific areas of the brain. You may have heard of a condition called ‘aphasia’, which is when someone develops a serious disability in language processing following damage to the left hemisphere (possibly from a stroke). What you probably don’t know (I didn’t) is that damage to the right hemisphere, while not directly affecting one’s ability with language, can interfere with its more nuanced interpretations, like sarcasm or even getting a joke. I’ve long believed that when I’m writing fiction, I’m using the right hemisphere as much as the left, but it never occurred to me that readers (or viewers) need the right hemisphere in order to follow a story.
 
According to the authors, the difference between the left and right neo-cortex is one of connections. The left hemisphere has ‘local’ connections, whereas the right hemisphere has more widely spread connections. This seems to correspond to an ‘analytic’ ability in the left hemisphere, and a more ‘creative’ ability in the right hemisphere, where we make conceptual connections that are more wideranging. I’ve probably oversimplified that, but it was the gist I got from their exposition.
 
Like most books and videos on ‘creative thinking’ or ‘insights’ (as the authors prefer), they spend a lot of time giving hints and advice on how to improve your own creativity. It’s not until one is more than halfway through the book, in a chapter titled, The Insightful and the Analyst, that they get to the crux of the issue, and describe how there are effectively 2 different types who think differently, even in a ‘resting state’, and how there is a strong genetic component.
 
I’m not surprised by this, as I saw it in my own family, where the difference is very distinct. In another chapter, they describe the relationship between creativity and mental illness, but they don’t discuss how artists are often moody and neurotic, which is a personality trait. Openness is another personality trait associated with creative people. I would add another point, based on my own experience: if someone is creative and they are not creating, they can suffer depression. This is not discussed by the authors either.
 
Regarding the 2 types they refer to, they acknowledge there is a spectrum, and I can’t help but wonder where I sit on it. I spent a working lifetime in engineering, which is full of analytic types, though I didn’t work in a technical capacity. Instead, I worked with a lot of technical people of all disciplines: from software engineers to civil and structural engineers to architects, not to mention lawyers and accountants, because I worked on disputes as well.
 
The curious thing is that I was aware of 2 modes of thinking, where I was either looking at the ‘big-picture’ or looking at the detail. I worked as a planner, and one of my ‘tricks’ was the ability to distil a large and complex project into a one-page ‘Gantt’ chart (bar chart). For the individual disciplines, I’d provide a multipage detailed ‘program’ just for them.
 
Of course, I also write stories, where the 2 components are plot and character. Creating characters is purely a non-analytic process, which requires a lot of extemporising. I try my best not to interfere, and I do this by treating them as if they are real people, independent of me. Plotting, on the other hand, requires a big-picture approach, but I almost never know the ending until I get there. In the last story I wrote, I was in COVID lockdown when I knew the ending was close, so I wrote some ‘notes’ in an attempt to work out what happens. Then, sometime later (like a month), I had one sleepless night when it all came to me. Afterwards, I went back and looked at my notes, and they were all questions – I didn’t have a clue.

Sunday, 10 September 2023

A philosophical school of thought with a 2500 year legacy

I’ve written about this before, but revisited it with a recent post I published on Quora in response to a question, where I didn’t provide the answer expected, but ended up giving a very brief history of philosophy as seen through the lens of science.
 
I’ve long contended that philosophy and science are joined at the hip, and one might extend the metaphor by saying the metaphysical bond is mathematics.
 
When I say a very brief history, what I mean is that I have selected a few specific figures, albeit historically prominent, who provide links in a 2500 year chain, while leaving out countless others. I explain how I see this as a ‘school of thought’, analogous to how some people might see a religion that also goes back centuries. The point is that we in the West have inherited this, and it’s determined the technological world that we currently live in, which would have been unimaginable even as recently as the renaissance or the industrial revolution, let alone in ancient Greece or Alexandria.
 
Which philosopher can you best relate yourself to?
 
It would take a certain hubris to claim that I relate to any philosopher whom I admire, but there are some whom I feel, not so much a kinship with, but an agreement in spirit and principle. Philosophers, like scientists and mathematicians, stand on the shoulders of those who went before.
 
I go back to Socrates because I think he was ahead of his time, and he effectively brought argument into philosophy, which is what separates it from dogma.
 
Plato was so influenced by Socrates that he gave us the ‘Socratic dialogue’ method of analysing an issue, whereby fictional characters (albeit with historical names) discuss hypotheticals in the form of arguments.
 
But Plato was also heavily influenced by Pythagorean philosophy, and even adopted its quadrivium of arithmetic, geometry, astronomy and music for his famous Academy. This tradition was carried over to the famous school, or Library, of Alexandria, from which sprang such luminaries as Euclid; Eratosthenes, who famously ‘measured’ the circumference of the Earth (around 230BC); and Hypatia, the female mathematician, mentor to a Bishop and a Roman Prefect, as well as a speaker in the Senate, who was killed for her sins by a Christian mob in 415AD.
 
Plato is most famously known for his cave allegory, whereby we observe shadows on a wall without knowing that there is another reality beyond our ken, which has since been called the Platonic realm. In later years, this was associated with the Christian ideal of ‘heaven’, but was otherwise considered an outdated notion.
 
Then, jumping forward a couple of millennia from Plato, we come to Kant, who inadvertently resurrected the idea with his concept of ‘transcendental idealism’. Kant famously postulated that there is a difference between what we observe and the ‘thing-in-itself’, which we may never know. I find this reminiscent of Plato’s cave analogy.
 
Even before Kant there was a scientific revolution led by Galileo, Kepler and Newton, who took Pythagorean ideals to a new level when they used geometry and a new mathematical method called calculus to describe the motions of the planets that had otherwise escaped a proper and consistent exposition.
 
Then came the golden age of physics that not only built on Newton, but also Faraday and Maxwell, whereby newly discovered mathematical tools like complex algebra and non-Euclidean geometry opened up a Pandora’s box called quantum mechanics and relativity theory, which have led the way for over a hundred years in our understanding of the infinitesimally small and the cosmologically large, respectively.
 
But here’s the thing: since the start of the last century, all our foundational theories have been led by mathematics rather than experimentation, though the latter is required to validate the former.
 
To quote Richard Feynman from a chapter in his book, The Character of Physical Law, titled, The Relation of Mathematics to Physics:


Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.
 
And this leads me to conclude that Kant's ‘transcendental idealism’ is mathematics*, which has its roots going back to Plato and possibly Pythagoras before him.
 
In answer to the question, I don’t think there is any specific philosopher that I ‘best relate to’, but there is a school of thought going back 2500 years that I have an affinity for.
 
 
*Note: Kant didn’t know that most of mathematics is uncomputable and unknown.
 

Thursday, 31 August 2023

Can relativity theory be reconciled with common sense?

 You might think I write enough posts on Einstein’s theories of relativity, including the last one, but this one is less esoteric. It arose from a question I answered on Quora. Like a lot of questions on Quora, it’s provocative and you wonder whether the questioner is serious or not.
 
Before I came up with the title, I rejected 2 others: Relativity theory for dummies (which seemed patronising) and Relativity explained without equations or twins (which is better). But I settled on the one above, because the post contains a thought experiment that does exactly that. It’s a thought experiment I’ve considered numerous times in the past, but never expressed in writing.
 
I feel that the post also deals with some misconceptions: that SR arose from the failure of the Michelson-Morley experiments to measure the aether, and that GR has no relationship to Newton’s theory of gravity.
 
If the theories of relativity are so "revolutionary," why are they so incompatible with the 'real' world? In others(sic), why are the theories based on multiple assumptions in mathematics rather than the physical world?
 
You got one thing right, which is ‘theories’ plural – there is the special theory (SR) and the general theory (GR). As for ‘multiple assumptions in mathematics’, there was really only one fundamental assumption and that determined the mathematical formulation of both theories, but SR in particular (GR followed 10 years later).
 
The fundamental assumption was that the speed of light, c, is the same for all observers irrespective of their frame of reference, so not dependent on how fast they’re travelling relative to someone else or, more importantly, to the source of the light. This is completely counter-intuitive but is true based on all observations, including from the far reaches of the Universe. Imagine if, as per our common-sense view of the world, light travelled slower from a source receding from us and faster from a source approaching us.
 
That means that, when observing a galaxy far, far away, the spiral arm travelling away from us would become increasingly out of sync with the arm travelling towards us. It’s hard to come up with a more graphic illustration that SR is true. The alternative is that the galaxy’s arms are travelling through an aether that permeates all of space. This was the accepted view before Einstein’s ‘revolutionary’ idea.
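To make the counter-intuitive point concrete, here is a minimal sketch (my own illustration, using the standard relativistic velocity-addition formula, not anything from the Quora exchange) showing that light from a moving source still arrives at c:

```python
# Relativistic velocity addition: speeds don't simply add as u + v.
c = 299_792_458.0  # speed of light in metres per second

def combine(u, v):
    """Relativistic composition of two collinear velocities u and v (m/s)."""
    return (u + v) / (1 + u * v / c**2)

# Light emitted from sources receding or approaching at various speeds:
for v in (0.1 * c, 0.5 * c, 0.99 * c, -0.9 * c):
    print(f"source at {v/c:+.2f}c -> light still measured at {combine(c, v)/c:.6f}c")
```

Whatever the source’s velocity, the formula returns exactly c, which is the algebraic face of Einstein’s one fundamental assumption.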
 
True: Einstein’s idea was premised on mathematics (not observation), but it was the mathematics of Maxwell’s equations, which ‘predict’ the constant speed of light and provide a value for it. As Heinrich Hertz said of them: “we get more out of [these equations] than was originally put into them.”
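As a quick check of that claim (my own arithmetic, using the textbook vacuum constants), the speed Maxwell’s equations ‘give away’ is 1/√(μ₀ε₀):

```python
from math import sqrt, pi

mu_0 = 4 * pi * 1e-7          # vacuum permeability in H/m (the classical defined value)
epsilon_0 = 8.8541878128e-12  # vacuum permittivity in F/m

c = 1 / sqrt(mu_0 * epsilon_0)
print(f"c = 1/sqrt(mu_0 * epsilon_0) ≈ {c:,.0f} m/s")  # ≈ 299,792,458 m/s
```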
 

But SR didn’t take into account gravity, which, unlike the fictitious aether, does permeate the whole universe, so Einstein developed GR. This was a mathematical theory, so not based on empirical observations, but it had to satisfy 3 criteria, established by Einstein at the outset.
 
1) It had to satisfy the conservation laws of energy, momentum and angular momentum.
2) It had to allow for the equivalence of gravitational and inertial mass.
3) It had to reduce mathematically to Newton’s formula when relativistic effects were negligible (see the sketch after this list).
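Here is a minimal sketch of that third criterion in its textbook form (the standard weak-field limit, not anything quoted from my source). For a weak, slowly varying field with Newtonian potential $\Phi = -GM/r$, and in one common sign convention, the time-time component of the metric and the geodesic equation reduce to:

$$g_{00} \approx -\left(1 + \frac{2\Phi}{c^2}\right), \qquad \frac{d^2\mathbf{x}}{dt^2} \approx -\nabla\Phi = -\frac{GM}{r^2}\,\hat{\mathbf{r}}$$

which is just Newton’s inverse square law; the relativistic corrections only matter when $\Phi/c^2$ or $(v/c)^2$ stop being negligible.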
 
Many people overlook the last one when they claim that Einstein’s theory made Newton’s theory obsolete, when in fact it extended Newton’s theory into realms it couldn’t handle. Likewise, Einstein’s theory also has limitations, yet to be resolved. Observations that confirmed the theory followed its mathematical formulation, which was probably a first in physics.

Note that the curvature of spacetime is a consequence of Einstein’s theory and not a presupposition, and was one of the earliest observational confirmations of said theory.
 
 
Source: The Road to Relativity; The History and Meaning of Einstein’s “The Foundation of General Relativity” (the original title of his paper) by Hanoch Gutfreund and Jurgen Renn.
 

Addendum: I elaborate on the relationship between Newton's and Einstein's theories on another post, in the context of How does science work?

Friday, 18 August 2023

The fabric of the Universe

Brian Greene wrote an excellent book with a similar title (The Fabric of the Cosmos), which I briefly touched on here. Basically, it’s space and time, and the discipline of physics can’t avoid them. In fact, if you add mass and charge, you’ve got the whole gamut that we’re aware of. I know there’s the standard model along with dark energy and dark matter, but as someone said, if you throw everything into a black hole, the only things you know about it are its mass, charge and angular momentum, which is why they say ‘a black hole has no hair’. That was before Stephen Hawking applied the laws of thermodynamics and quantum mechanics and came up with Hawking radiation, but I’ve gone off-track, so I’ll come back to the topic at hand.
 
I like to tell people that I read a lot of books by people a lot smarter than me, and one of those books that I keep returning to is The Constants of Nature by John D Barrow. He makes a very compelling case that the only Universe that could be both stable and predictable enough to support complex life would be one with 3 dimensions of space and 1 of time. A 2-dimensional universe means that any animal with a digestive tract (from mouth to anus) would fall apart. Only a 3-dimensional universe allows planets to maintain orbits for millions of years. As Barrow points out in his aforementioned tome, Einstein’s friend, Paul Ehrenfest (1880-1933), was able to demonstrate this mathematically (I give a small numerical illustration after the quote below). It’s the inverse square law of gravity that keeps planets in orbit, and that’s a direct consequence of everything happening in 3 dimensions. Interestingly, Kant thought it was the other way around – that 3 dimensions were a consequence of Newton’s universal law of gravity being an inverse square law. Mind you, Kant thought that both space and time were a priori concepts that only exist in the mind:
 
But this space and this time, and with them all appearances, are not in themselves things; they are nothing but representations and cannot exist outside our minds.
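Coming back to Ehrenfest’s point, here is the small numerical sketch I promised (my own toy demonstration, not his proof): integrate a slightly perturbed circular orbit under a central force falling off as 1/r^p, where p = 2 corresponds to gravity in 3 spatial dimensions and p = 3 to its analogue in 4.

```python
import numpy as np

def radial_range(p, steps=100_000, dt=1e-3):
    """Track how far the orbital radius wanders for a central force ~ 1/r**p."""
    r = np.array([1.0, 0.0])           # start at radius 1
    v = np.array([0.0, 1.05])          # 5% faster than the circular-orbit speed
    lo = hi = 1.0
    for _ in range(steps):
        d = np.hypot(*r)
        v += (-r / d**(p + 1)) * dt    # acceleration toward the centre, magnitude 1/d**p
        r += v * dt
        lo, hi = min(lo, np.hypot(*r)), max(hi, np.hypot(*r))
    return lo, hi

for p in (2, 3):
    lo, hi = radial_range(p)
    print(f"force ~ 1/r^{p}: orbital radius stays between {lo:.2f} and {hi:.2f}")
```

The inverse square case wobbles gently between bounded radii, while the inverse cube case (the 4-dimensional analogue) runs away, which is the instability Ehrenfest identified.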
 
And this gets to the nub of the topic alluded to in the title of this post: are space and time ‘things’ that are fundamental to everything else we observe?
 
I’ll start with space, because, believe it or not, there is an argument among physicists that space is not an entity per se, but just dimensions between bodies that we measure. I’m going to leave aside, for the time being, that said ‘measurements’ can vary from observer to observer, as per Einstein’s special theory of relativity (SR).
 
This argument arises because we know that the Universe is expanding (by measuring the redshift of distant galaxies); but does space itself expand, or is it just objects moving apart? In another post, I referenced a paper by Tamara M. Davis and Charles H. Lineweaver from UNSW (Expanding Confusion: Common Misconceptions of Cosmological Horizons and the Superluminal Expansion of the Universe), which I think puts an end to this argument when they explain the difference between an SR and a GR Doppler-shift interpretation of an expanding universe.
 
The general relativistic interpretation of the expansion interprets cosmological redshifts as an indication of velocity since the proper distance between comoving objects increases. However, the velocity is due to the rate of expansion of space, not movement through space, and therefore cannot be calculated with the special relativistic Doppler shift formula. (My emphasis)
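For reference, these are the two formulas being contrasted (standard results, not quoted from the paper): the special relativistic Doppler shift for a source receding at speed v, and the cosmological redshift, which depends only on how much the scale factor a(t) of the Universe has stretched between emission and observation:

$$1 + z_{\mathrm{SR}} = \sqrt{\frac{1 + v/c}{1 - v/c}}, \qquad 1 + z_{\mathrm{cosmological}} = \frac{a(t_{\mathrm{observed}})}{a(t_{\mathrm{emitted}})}$$

The first caps the implied velocity below c for any finite redshift; the second happily accommodates recession rates above c, which is Davis and Lineweaver’s point.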
 
I’m now going to use a sleight-of-hand and attempt a description of GR (general theory of relativity) without gravity, based on my conclusion from their exposition.
 
The Universe has a horizon that’s directly analogous to the horizon one observes at sea, because it ‘moves’ as the observer moves. In other words, other hypothetical ‘observers’ in other parts of the Universe would observe a different horizon to us, including hypothetical observers who are ‘over-the-horizon’ relative to us.
 
But the horizon of the Universe is a direct consequence of bodies (or space) moving faster-than-light (FTL) over the horizon, as expounded upon in detail in Davis and Lineweaver’s paper. But here’s the thing: if you were an observer on one of these bodies moving FTL relative to Earth, the speed of light would still be c. How is that possible? My answer is that the light travels at c relative to the ‘space’* (in which it’s observed), but the space itself can travel faster than light.
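A rough back-of-the-envelope version of that claim (my own numbers, not taken from the paper), using Hubble’s law v = H₀D: anything beyond the so-called Hubble distance c/H₀ is receding faster than light in this sense.

```python
H0 = 70.0                     # Hubble constant, roughly 70 km/s per megaparsec
c = 299_792.458               # speed of light in km/s

hubble_distance_Mpc = c / H0  # distance at which the recession speed equals c
print(f"v = H0 * D reaches c at roughly {hubble_distance_Mpc:,.0f} Mpc")
print(f"that's about {hubble_distance_Mpc * 3.26e6 / 1e9:.0f} billion light years away")
```

Yet, as the paper stresses, some galaxies beyond that distance are still observable, and light in its own local patch of space never moves at anything other than c.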
 
There are, of course, other horizons in the Universe, which are the event horizons of black holes. Now, you have the same dilemma at these horizons as you do at the Universe’s horizon. According to an external observer, time appears to ‘stop’ at the event horizon, because the light emitted by an object there can’t reach us. However, for an observer at the event horizon, the speed of light is still c, and if the black hole is big enough, it’s believed (obviously no one can know) that someone could cross the event horizon without knowing they had. But what if it’s spacetime that crosses the event horizon? Then the external observer’s perception and the comoving observer’s perception would be no different than if the latter were at the horizon of the entire universe.
 
But what happens to time? Well, if you measure time by the frequency of light being emitted from an object at any of these horizons, that frequency gets Doppler-shifted to zero, so time ‘stops’ according to the distant observer (on Earth) but not for the observer at the horizon.
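For the black-hole case specifically, the textbook formula behind that statement (a standard result, not something of my own) applies to a source hovering at radius r outside a non-rotating black hole with Schwarzschild radius r_s:

$$f_{\mathrm{received}} = f_{\mathrm{emitted}}\sqrt{1 - \frac{r_s}{r}} \;\longrightarrow\; 0 \quad \text{as } r \to r_s$$

The received frequency, and with it the apparent ticking of the emitter’s clock, goes to zero for the distant observer, while the emitter’s own clock ticks at its normal rate.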
 
So far, I’ve avoided talking about quantum mechanics (QM), but something curious happens when you apply QM to cosmology: time disappears. According to Paul Davies in The Goldilocks Enigma: ‘…vanishing of time for the entire universe becomes very explicit in quantum cosmology, where the time variable simply drops out of the quantum description.’ This is consistent with Freeman Dyson’s argument that QM can only describe the future. Thus, if you apply a description of the future to the entire cosmos, there would be no time.
 
 
* Note: you can still apply SR within that ‘space’.

 

Addendum: I've since learned that in 1958, David Finkelstein (a postdoc with the Stevens Institute of Technology in Hoboken, New Jersey) wrote an article in Physical Review that gave the same explanation as I give above for how time appears different to different observers of a black hole. It immediately grabbed the attention (and approval) of Oppenheimer, Wheeler and Penrose (among others), who had struggled to resolve this paradox. (Ref. Black Holes And Time Warps; Einstein's Outrageous Legacy, Kip S. Thorne, 1994)
 

Wednesday, 7 June 2023

Consciousness, free will, determinism, chaos theory – all connected

 I’ve said many times that philosophy is all about argument. And if you’re serious about philosophy, you want to be challenged. And if you want to be challenged you should seek out people who are both smarter and more knowledgeable than you. And, in my case, Sabine Hossenfelder fits the bill.
 
When I read people like Sabine, and others whom I interact with on Quora, I’m aware of how limited my knowledge is. I don’t even have a university degree, though I’ve attempted one a number of times. I’ve spent my whole life in the company of people smarter than me, including at school. Believe it or not, I still have occasional contact with them, through social media and school reunions. I grew up in a small rural town, where the people you went to school with feel like siblings.
 
Likewise, in my professional life, I have always encountered people cleverer than me – it provides perspective.
 
In her book, Existential Physics; A Scientist’s Guide to Life’s Biggest Questions, Sabine interviews people who are possibly even smarter than she is, and I sometimes found their conversations difficult to follow. To be fair to Sabine, she also sought out people who have different philosophical views from hers, and who have the intellect to match her.
 
I’m telling you all this to put things in perspective. Sabine has her prejudices like everyone else, some of which she defends better than others. I concede that my views are probably more simplistic than hers, and I support my challenges with examples that are hopefully easy to follow. Our points of disagreement can be distilled down to a few pertinent topics: time, consciousness, free will and chaos. Not surprisingly, they are all related – what you believe about one affects what you believe about the others.
 
Sabine is very strict about what constitutes a scientific theory. She argues that so-called theories like the multiverse have ‘no explanatory power’, because they can’t be verified or rejected by evidence, and she calls them ‘ascientific’. She’s critical of popularisers like Brian Cox who tell us that there could be an infinite number of ‘you(s)’ in an infinite multiverse. She distinguishes between beliefs and knowledge, which is a point I’ve made myself. Having said that, I’ve also argued that beliefs matter in science. She puts all interpretations of quantum mechanics (QM) in this category. She keeps emphasising that it doesn’t mean they are wrong, but they are ‘ascientific’. It’s part of the distinction that I make between philosophy and science, and why I perceive science as having a dialectical relationship with philosophy.
 
I’ll start with time, as Sabine does, because it affects everything else. In fact, the first chapter in her book is titled, Does The Past Still Exist? Basically, she argues for Einstein’s ‘block universe’ model of time, but it’s her conclusion that ‘now is an illusion’ that is probably the most contentious. This critique will cite a lot of her declarations, so I will start with her description of the block universe:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.
 
This viewpoint arises from the fact that, according to relativity theory, simultaneity is completely observer-dependent. I’ve discussed this before, where I argue that an observer who is moving relative to a source, or stationary relative to a moving source (like the observer standing on the platform in Einstein’s original thought experiment while a train goes past), knows this because of the Doppler effect. In other words, an observer who doesn’t see a Doppler effect is in a privileged position, because they are in the same frame of reference as the source of the signal. This is why we know the Universe is expanding with respect to us, and why we can work out our movement with respect to the CMBR (cosmic microwave background radiation), hence to the overall universe (just think about that).
 
Sabine clinches her argument by drawing a spacetime diagram, where 2 independent observers moving away from each other observe a pulsar with 2 different simultaneities. One, who is travelling towards the pulsar, sees the pulsar simultaneously with someone’s birth on Earth, while the one travelling away from the pulsar sees it simultaneously with the same person’s death. This is her slam-dunk argument that ‘now’ is an illusion, if it can produce such a dramatic contradiction.
 
However, I drew up my own spacetime diagram of the exact same scenario, where no one is travelling relative to anyone else, yet it creates the same apparent contradiction.


 My diagram follows the convention in that the horizontal axis represents space (all 3 dimensions) and the vertical axis represents time. So the 4 dotted lines represent 4 observers who are ‘stationary’ but ‘travelling through time’ (vertically). As per convention, light and other signals are represented as diagonal lines of 45 degrees, as they are travelling through both space and time, and nothing can travel faster than them. So they also represent the ‘edge’ of their light cones.
 
So notice that observer A sees the birth of Albert when he sees the pulsar and observer B sees the death of Albert when he sees the pulsar, which is exactly the same as Sabine’s scenario, with no relativity theory required. Albert, by the way, for the sake of scalability, must have lived for thousands of years, so he might be a tree or a robot.
 
But I’ve also added 2 other observers, C and D, who see the pulsar before Albert is born and after Albert dies respectively. But, of course, there’s no contradiction, because it’s completely dependent on how far away they are from the sources of the signals (the pulsar and Earth).
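Here is a toy numerical version of the diagram (my own made-up distances, with everyone collinear and stationary, and c = 1): each observer just adds light-travel time to the time each event happened.

```python
# Pulsar at x = 0 flashes at t = 0; Albert lives on Earth at x = 100,
# is born at t = 50 and dies at t = 90 (arbitrary units, c = 1).
events = {"pulsar flash": (0, 0), "Albert born": (100, 50), "Albert dies": (100, 90)}

# Four stationary observers strung out between the pulsar and Earth.
observers = {"A": 75, "B": 95, "C": 60, "D": 99}

for name, x in observers.items():
    seen = {label: t + abs(x - pos) for label, (pos, t) in events.items()}
    print(f"{name} (x={x}): flash at t={seen['pulsar flash']}, "
          f"birth at t={seen['Albert born']}, death at t={seen['Albert dies']}")
```

A sees the flash together with Albert’s birth, B together with his death, C before the birth and D after the death, with no relative motion anywhere, which is the point of the diagram.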
 
This is Sabine’s perspective:
 
Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I actually find this statement illogical. If you take it to its logical conclusion, then the Big Bang exists now and so does everything in the universe that’s yet to happen. If you look at the first quote I cited, she effectively argues that the past and future exist alongside the present.
 
One of the points she makes is that, for events with causal relationships, all observers see the events happening in the same sequence. The scenarios where different observers see different sequences involve events that have no causal relationship. But this raises a question: what makes causal events exceptional? What’s more, this is fundamental, because the whole of physics is premised on the principle of causality. In addition, I fail to see how you can have causality without time. In fact, causality is governed by the constant speed of light – it’s literally what stops everything from happening at once.
 
Einstein also believed in the block universe, and like Sabine, he argued that, as a consequence, there is no free will. Sabine is adamant that both ‘now’ and ‘free will’ are illusions. She argues that the now we all experience is a consequence of memory. She quotes Carnap that our experience of ‘past, present and future can be described and explained by psychology’ – a point also made by Paul Davies. Basically, she argues that what separates our experience of now from the reality of no-now (my expression, not hers) is our memory.
 
Whereas, I think she has it back-to-front, because, as I’ve pointed out before, without memory, we wouldn’t know we are conscious. Our brains are effectively a storage device that allows us to have a continuity of self through time, otherwise we would not even be aware that we exist. Memory doesn’t create the sense of now; it records it just like a photograph does. The photograph is evidence that the present becomes the past as soon as it happens. And our thoughts become memories as soon as they happen, otherwise we wouldn’t know we think.
 
Sabine spends an entire chapter on free will, where she persistently iterates variations on the following mantra:
 
The future is fixed except for occasional quantum events that we cannot influence.

 
But she acknowledges that while the future is ‘fixed’, it’s not predictable. And this brings us to chaos theory. Sabine discusses chaos late in the book and not in relation to free will. She explicates what she calls the ‘real butterfly effect’.
 
The real butterfly effect… means that even arbitrarily precise initial data allow predictions for only a finite amount of time. A system with this behaviour would be deterministic and yet unpredictable.
 
Now, if deterministic means everything physically manifest has a causal relationship with something prior, then I agree with her. If she means that therefore ‘the future is fixed’, I’m not so sure, and I’ll explain why. By specifying ‘physically manifest’, I’m excluding thoughts and computer algorithms, which can have an effect on something physical even though the cause is not so easily determined. For example, in the case of an algorithm, does the cause go back to the coder who wrote it?
 
My go-to example for chaos is tossing coins, because it’s so easy to demonstrate and it’s linked to probability theory, as well as being the very essence of a random event. One of the key, if not definitive, features of a chaotic phenomenon is that, if you were to rerun it, you’d get a different result, and that’s fundamental to probability theory: every coin toss is independent of any previous toss – they are causally independent. Unrepeatability is common among chaotic systems (like the weather). Even the Earth and Moon were created from a chaotic event.
 
I recently read another book called Quantum Physics Made Me Do It by Jeremie Harris, who argues that tossing a coin is not random – in fact, he’s very confident about it. He’s not alone. Mark John Fernee, a physicist at the University of Queensland, argued in a personal exchange on Quora that, in principle, it should be possible to devise a robot to perform perfectly predictable tosses every time, like a tennis ball launcher. But, as another Quora contributor and physicist, Richard Muller, pointed out, it’s not dependent on the throw but on the surface it lands on. Marcus du Sautoy makes the same point about throwing dice and provides evidence to support it.
 
Getting back to Sabine: she doesn’t discuss tossing coins, but she might think that the ‘imprecise initial data’ is the actual act of tossing, and that after that the outcome is determined, even if it can’t be predicted. However, the deterministic chain is broken as soon as the coin hits a surface.
 
Just before she gets to chaos theory, she talks about computability, with respect to Gödel’s Theorem and a discussion she had with Roger Penrose (included in the book), where she says:
 
The current laws of nature are computable, except for that random element from quantum mechanics.
 
Now, I’m quoting this out of context, because she then argues that if they were uncomputable, they open the door to unpredictability.
 
My point is that the laws of nature are uncomputable because of chaos theory, and I cite Ian Stewart’s book, Does God Play Dice? In fact, Stewart even wonders if QM could be explained using chaos (I don’t think so). Chaos theory has mathematical roots, because not only are the ‘initial conditions’ of a chaotic event impossible to measure, they are impossible to compute – you have to calculate to infinite decimal places. And this is why I disagree with Sabine that the ‘future is fixed’.
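To illustrate the computational point (my own example, not one of Stewart’s): run two copies of the logistic map, a textbook chaotic system, from starting values that differ by one part in 10^15, roughly the precision limit of ordinary double-precision arithmetic.

```python
x, y = 0.4, 0.4 + 1e-15   # identical to 15 decimal places

for n in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)   # the chaotic logistic map
    if n % 10 == 0:
        print(f"step {n:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.1e}")
```

Within about fifty steps the two trajectories bear no resemblance to each other, and every extra digit of precision buys only a handful of additional steps of agreement, which is why ‘calculating to infinite decimal places’ is exactly what prediction would require.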
 
It's impossible to discuss everything in a 223-page book in one blog post, but there is one other topic she raises where we disagree, and that’s the Mary’s Room thought experiment. As she explains, it was proposed by the philosopher Frank Jackson in 1982, but she also claims that he abandoned his own argument. After describing the experiment (refer to this video, if you’re not familiar with it), she says:
 
The flaw in this argument is that it confuses knowledge about the perception of colour with the actual perception of it.
 
Whereas, I thought the scenario actually delineated the difference – that perception of colour is not the same as knowledge. A person who was severely colour-blind might never have experienced the colour red (the specified colour in the thought experiment), but they could be told what objects might be red. It’s well known that some animals are colour-blind compared to us, and some animals specifically can’t discern red. Colour is totally a subjective experience. But I think the Mary’s Room thought experiment highlights the difference between human perception and AI. An AI can be designed to delineate colours by wavelength, but it would not experience colour the way we do. I wrote a separate post on this.
 
Sabine gives the impression that she thinks consciousness is a non-issue. She talks about the brain like it’s a computer.
 
You feel you have free will, but… really, you’re running a sophisticated computation on your neural processor.
 
Now, many people, including most scientists, think that, because our brains are just like computers, it’s only a matter of time before AI also shows signs of consciousness. Sabine doesn’t make this connection, even when she talks about AI. Nevertheless, she discusses one of the leading theories in neuroscience (IIT, Integrated Information Theory), based on calculating the amount of information processed, which gives a number called phi (Φ). I came across this when I did an online course on consciousness through New Scientist, during COVID lockdown. According to the theory, this number provides a ‘measure of consciousness’, which suggests that it could also be used with AI, though Sabine doesn’t pursue that possibility.
 
Instead, Sabine cites an interview in New Scientist with Daniel Bor from the University of Cambridge: “Phi should decrease when you go to sleep or are sedated… but work in Bor’s laboratory has shown that it doesn’t.”
 
Sabine’s own view:
 
Personally, I am highly skeptical that any measure consisting of a single number will ever adequately represent something as complex as human consciousness.
 
Sabine discusses consciousness at length, especially following her interview with Penrose, and she gives one of the best arguments against panpsychism I’ve read. Her interview with Penrose, along with a discussion of Gödel’s Theorem (which is another topic), addresses whether consciousness is computable or not. I don’t think it is, and I don’t think it’s algorithmic.
 
She makes a very strong argument for reductionism: that the properties we observe of a system can be understood from studying the properties of its underlying parts. In other words, emergent properties can be understood in terms of the properties they emerge from. And this includes consciousness. I’m one of those who really think that consciousness is the exception. Thoughts can cause actions, which is known as ‘agency’.
 
I don’t claim to understand consciousness, but I’m not averse to the idea that it could exist outside the Universe – that it’s something we tap into. This is completely ascientific, to borrow from Sabine. As I said, our brains are storage devices, and sometimes they let us down; without them, we wouldn’t even know we are conscious. I don’t believe in a soul. I think the continuity of the self is a function of memory – just read The Lost Mariner chapter in Oliver Sacks’ book, The Man Who Mistook His Wife For A Hat. It’s about a man suffering from both retrograde and anterograde amnesia, so his life is stuck in the past and he’s unable to create new memories.
 
At the end of her book, Sabine surprises us by talking about religion, and how she agrees with Stephen Jay Gould that religion and science are two ‘nonoverlapping magisteria’. She makes the point that a lot of scientists have religious beliefs but won’t discuss them in public because it’s taboo.
 
I don’t doubt that Sabine has answers to all my challenges.
 
There is one more thing: Sabine talks about an epiphany, following her introduction to physics in middle school, which started in frustration.
 
Wasn’t there some minimal set of equations, I wanted to know, from which all the rest could be derived?
 
When the principle of least action was introduced, it was a revelation: there was indeed a procedure to arrive at all these equations! Why hadn’t anybody told me?

 
The principle of least action is one concept common to both the general theory of relativity and quantum mechanics. It’s arguably the most fundamental principle in physics. And yes, I posted on that too.
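For completeness, here is the principle in its textbook form (a standard statement, not a quote from Sabine): nature selects the path that makes the action stationary, and that single requirement yields the equations of motion.

$$S = \int_{t_1}^{t_2} L(q, \dot{q}, t)\,dt, \qquad \delta S = 0 \;\Rightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0$$

The Euler-Lagrange equation on the right is the ‘procedure to arrive at all these equations’ that she describes as a revelation.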

 

Wednesday, 31 May 2023

Immortality; from the Pharaohs to cryonics

I thought the term was cryogenics, but a feature article in the Weekend Australian Magazine (27-28 May 2023) calls the facilities that perform this process cryonics facilities and, looking it up in my dictionary, there is a distinction. Cryogenics is about low-temperature freezing in general, while cryonics deals with the deep-freezing of bodies specifically, with the intention of one day reviving them.
 
The article cites a few people, but the author, Ross Bilton, features an Australian, Peter Tsolakides, who is in my age group. From what the article tells me, he’s a software engineer who has seen many generations of computer code and has also been a ‘globe-trotting executive for ExxonMobil’.
 
He’s one of the drivers behind a cryonic facility in Australia – its first – located at Holbrook, which is roughly halfway between Melbourne and Sydney. In fact, I often stop at Holbrook for a break and meal on my interstate trips. According to my car’s odometer it is almost exactly half way between my home and my destination, which is a good hour short of Sydney, so it’s actually closer to Melbourne, but not by much.
 
I’m not sure when Tsolakides plans to enter the facility, but he’s forecasting his resurrection in around 250 years time, when he expects he may live for another thousand years. Yes, this is science fiction to most of us, but there are some science facts that provide some credence to this venture.
 
For a start, we already cryogenically freeze embryos and sperm, and we know it works for them. There is also the case of Ewa Wisnierska, 35, a German paraglider who, while taking part in an international competition in Australia, was sucked into a storm and lifted to 9,947 metres (jumbo-jet territory, and higher than Everest). Needless to say, she lost consciousness and spent a frozen 45 minutes before she came back to Earth. Quite a miracle, and I’ve watched a doco on it. She made a full recovery and was back at her sport within a couple of weeks. And I know of other cases where the brain of a living person has been frozen to keep them alive, as counter-intuitive as that may sound.
 
Believe it or not, scientists are divided on this, or at least cautious about dismissing it outright. Many take the position, ‘Never say never’. And I think that’s fair enough, because it really is impossible to predict the future when it comes to humanity. It’s not surprising that advocates like Tsolakides can see a future where this will become normal for most humans; people who decline immortality will be the exception and not the norm. And if this ‘procedure’ became successful and commonplace, who would say no?
 
Now, I write science fiction, and I have written a story where a group of people decided to create an immortal human race, who were part machine. It’s a reflection of my own prejudices that I portrayed this as a dystopia, but I could have done the opposite.
 
There may be an assumption that if you write science fiction then you are attempting to predict the future, but I make no such claim. My science fiction is complete fantasy, but, like all science fiction, it addresses issues relevant to the contemporary society in which it was created.
 
Getting back to the article in the Weekend Australian, there is an aspect of this that no one addressed – not directly, anyway. There’s no point in cheating death if you can’t cheat old age. In the case of old age, you are dealing with a fundamental law of the Universe, entropy, the second law of thermodynamics. No one asked the obvious question: how do you expect to live for 1,000 years without getting dementia?
 
I think some have thought about this, because, in the same article, they discuss the ultimate goal of downloading their memories and their thinking apparatus (for want of a better term) into a computer. I’ve written on this before, so I won’t go into details.
 
Curiously, I’m currently reading a book by Sabine Hossenfelder called Existential Physics; A Scientist’s Guide to Life’s Biggest Questions, which you would think could not possibly have anything to say on this topic. Nevertheless:
 
The information that makes you you can be encoded in many different physical forms. The possibility that you might one day upload yourself to a computer and continue living a virtual life is arguably beyond present-day technology. It might sound entirely crazy, but it’s compatible with all we currently know.
 
I promise to write another post on Sabine’s book, because she’s nothing if not thought-provoking.
 
So where do I stand? I don’t want immortality – I don’t even want a gravestone, and neither did my father. I have no dependents, so I won’t live on in anyone’s memory. The closest I’ll get to immortality are the words on this blog.