It’s been a philosophical obsession of mine to try to understand the deep connection between mathematics, sentience and the physical universe. A recent video, an online article and a New Scientist article have all contributed to my reappraisal of these apparently disparate yet seemingly interdependent phenomena. The last post I wrote also triggered a reassessment, where I brought up the inherent tension and interrelationship between ontology and epistemology. I contend (though I didn’t spell it out in that post) that there is a loop between epistemology and ontology, which hopefully will become clear during this discourse.
I’ll start with the New Scientist article (7 March 2026, pp.31-40), which is really a collection of articles by different writers, and elaborates on different responses to recent data from DESI (Dark Energy Spectroscopic Instrument). DESI suggests that the lambda constant (Λ), part of the ΛCDM (Lambda Cold Dark Matter) model of the Universe, may not be constant after all. Λ represents the cosmological constant, originally formulated by Einstein, then dropped by Einstein, then reinstated posthumously when more accurate measurements of the Universe’s expansion, and indeed acceleration, required its insertion (as an adjunct to Einstein’s equation for General Relativity, GR). That’s a nutshell exposition, but the consequences are explained in the next paragraph.
If Λ does remain constant the Universe will accelerate to a point where virtually everything currently observable will disappear over the horizon (yes, there is a horizon for the entire universe). However, DESI suggests that may not happen if Λ decreases in value as the Universe ages. The jury is still out, as they say.
By ‘responses’ to DESI, I mean theories, which are in essence mathematical models, and that’s what I want to focus on. This is a case where measurements, therefore empirical data, have led to existing theories being put under strain, and therefore new models or theories are being formulated. For those familiar with Thomas Kuhn’s seminal tome, The Structure of Scientific Revolutions, this is arguably an example of a ‘scientific revolution’ in progress. Kuhn argued that advances in science have occurred in ‘revolutions’, not in gradual increments as commonly believed. He coined the term, ‘paradigm shift’, to describe this epistemological phenomenon. What’s more, he argues this ‘shift’ inexorably arises when new data no longer agrees with an existing theory.
However, others might argue that the paradigm shift precedes the data confirming it. But I think it’s a combination. To give some well-known historical examples: the Copernican revolution overturned the longstanding Ptolemaic model of the Universe without a massive change in known data. In fact, Stephen Hawking argued in his book, The Grand Design, that both models fitted the observations of the day.
Of course, Galileo famously followed up on Copernicus at great personal risk, and one of his arguments centred around the fact that he could observe moons around Jupiter using a new-fangled device called a telescope. Then Kepler used the extensive observational data collected by Tycho Brahe to mathematically demonstrate that planets orbit in ellipses, not circles. It’s hard for us to imagine in the 21st century just how big a revolutionary idea that was. It’s a case where mathematics played a key role in formulating his thesis, and that has become increasingly pertinent ever since.
Then Newton went further, using his newly discovered (or invented) mathematical tool called calculus to determine that the orbits of the planets were determined by gravity, which also kept him bound to Earth. Who would have thought that the same phenomenon that keeps you on Earth also keeps the moon in orbit and the very planets in orbit around the sun? That’s a huge leap – a ‘paradigm shift’ of enormous consequence.
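Newton’s leap can actually be checked with a back-of-envelope calculation, often called the ‘moon test’: if the same gravity that holds us to Earth weakens with the square of distance, it should supply exactly the acceleration the Moon’s orbit requires. A minimal sketch in Python, using rounded modern values (the figures below are my assumptions for illustration, not anything from the historical record):

```python
import math

# Rounded modern values (assumed for illustration)
g = 9.81                 # surface gravity, m/s^2
R_earth = 6.371e6        # Earth's radius, m
r_moon = 3.844e8         # mean Earth-Moon distance, m
T_moon = 27.32 * 86400   # sidereal month, s

# If gravity falls off as 1/r^2, at the Moon's distance it should be:
a_gravity = g * (R_earth / r_moon) ** 2

# The centripetal acceleration the Moon's orbit actually requires:
omega = 2 * math.pi / T_moon
a_orbit = omega ** 2 * r_moon

print(f"1/r^2 prediction:    {a_gravity:.5f} m/s^2")
print(f"orbital requirement: {a_orbit:.5f} m/s^2")
```

The two numbers agree to within roughly one per cent, which is the quantitative content of Newton’s unification of terrestrial and celestial gravity.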
And the story continues with Einstein, building on Newton and Maxwell, where he formulated mathematical formulae to describe phenomena yet to be observed as well as explain phenomena that had been observed yet hitherto had remained inexplicable. Around the same time, Planck used empirical data to arrive at a constant (h), now called Planck’s constant, which Planck originally considered to be just a mathematical trick to get the right answer. It was Einstein who realised its true significance when he used it to explain the photo-electric effect. By the way, another constant, c (the speed of light) actually falls out of Maxwell’s equations, and it was Einstein’s genius to realise this was a ‘law’ of the Universe and not just a mathematical accident.
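The way c ‘falls out’ of Maxwell’s equations can be shown in a few lines: the vacuum permeability μ₀ and permittivity ε₀ are electromagnetic constants, measurable with coils and capacitors, yet combining them as c = 1/√(μ₀ε₀) yields the speed of light. A quick sketch, using rounded textbook values (my assumptions for illustration):

```python
import math

mu_0 = 4 * math.pi * 1e-7   # vacuum permeability, N/A^2 (classical defined value)
eps_0 = 8.854e-12           # vacuum permittivity, F/m (rounded)

# Maxwell's wave equation gives electromagnetic waves the speed 1/sqrt(mu_0 * eps_0)
c = 1 / math.sqrt(mu_0 * eps_0)
print(f"c = {c:.4e} m/s")   # close to the measured 2.998e8 m/s
```

That two laboratory constants of electricity and magnetism combine to give the measured speed of light is what convinced Maxwell that light is an electromagnetic wave, and what Einstein took as a law of the Universe.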
So scientific discoveries, in physics specifically, require a synergistic relationship between mathematics and empirical data that goes both ways.
Now I want to discuss the other side of my obsession, which is the relationship between mathematics and sentience – specifically, human sentience – as we have the ability to comprehend mathematics that goes well beyond any evolutionary requirement to merely survive. I recently wrote a post about human exceptionalism, where I mention that ‘our unique grasp of mathematics has been the most salient feature in propelling our advance in knowledge and comprehension of the natural world.’
And this leads me to a Curt Jaimungal video I recently watched, where he interviews David Blessis, who is French (going by his accent), and who’s apparently a mathematician and possibly a philosopher of mathematics, given the nature of the discussion. He makes a statement, which I found quite profound, despite its lack of esoteric language, or possibly because of it, in answer to Curt’s question of how he would define mathematics.
‘My definition of mathematics is imagining things and pretending they really exist.’
As a succinct description of mathematical Platonism, it’s hard to go past it. Though I think he was having a dig at Platonism, rather than extolling it as a viable philosophical position.
He goes on to call it a ‘side-effect’, after invoking what he calls the ‘logic side of mathematics’, which is how we validate its truth (my expression, not his). To quote Blessis again:
‘And the logic side is the core technique to produce that side-effect.’
So, while I quote Blessis, I have a different perspective, which I’m sure he wouldn’t agree with. My own view is that mathematics already exists in a purely abstract realm, independently of us and the Universe, which we access using logic.
He goes on to introduce a term, ‘meaning-making’, which is what humans do with mathematics that is not evident in its logic.
‘There is something about mathematics that cannot be explained by formal logic.’
This goes to the heart of Gödel’s famous Incompleteness Theorem, though Blessis never mentions it (at least not in this video), which intrinsically differentiates ‘proof’ from ‘truth’. It’s a point that Penrose raises again and again: that humans are able to divine a mathematical truth in a way that a machine never will. And I agree, because I don’t think AI will ever actually ‘understand’ things the way we do, despite increasingly giving the impression that it does. So it would seem that Blessis, Penrose and I are on the same page, when he distinguishes ‘meaning’ from ‘logic’.
He goes on to provide an example when he discusses Andrew Wiles’s famous proof of Fermat’s Last Theorem. In the initial publication of his proof, a fatal flaw was found, and Wiles went away to ‘fix his proof’, as Blessis puts it. Then Blessis asks: ‘What does it mean to fix a proof?’ The implication being that a proof is not enough. If you can ‘fix’ a proof, then is any proof valid? He doesn’t specifically ask this, but I got the impression that is what he meant.
There has to be ‘meaning’, according to Blessis, but again, I have a different perspective. To me, the fact that Wiles had to ‘fix’ his proof is evidence that there is an objective ‘truth’, which exists before the proof is found. I’ve posited in a much earlier post that if you haven’t solved a puzzle, does that mean there is no solution until you have? This is consistent with my earlier point that mathematics exists independently of us; but, without logic, we can’t access it.
Blessis also talks about axioms, and many people would argue that because the mathematics we render is dependent on axioms, it is therefore dependent on us. He discusses set theory, which I won’t go into because I don’t know enough about it; only that it’s considered foundational to formal mathematics. The thing is, formal mathematics is dependent on axioms, and it is formal mathematics that lies at the heart of Gödel’s Incompleteness Theorem. But here’s the thing: according to Gödel, we discover new mathematical truths by expanding our axioms, and that is what has happened in practice. The best example is the discovery (I’ll use that term) of non-Euclidean geometry by adopting curvature. The introduction of new axioms or ‘operations’ that were once forbidden under an existing formalism allows one to find solutions to problems that were previously considered unsolvable. The square root of −1 is the best exemplar I can think of.
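The √−1 example can be made concrete. Over the real numbers, x² + 1 = 0 has no solution; admit a new entity i with i² = −1 (the once-‘forbidden’ operation) and it becomes solvable, and every quadratic with it. Python’s built-in complex arithmetic reflects this, as a small illustration of my own (the particular quadratic below is just an arbitrary example):

```python
import cmath

# Over the reals, x**2 + 1 = 0 has no solution; math.sqrt(-1) raises an error.
# Admit i with i**2 == -1 and the equation becomes solvable.
i = cmath.sqrt(-1)          # the imaginary unit
print(i)                    # 1j
print(i**2)                 # (-1+0j)

# The expanded system pays off beyond the original puzzle: every quadratic
# now has roots, e.g. x^2 + 2x + 5 = 0, whose discriminant is negative.
a, b, c = 1.0, 2.0, 5.0
disc = cmath.sqrt(b**2 - 4*a*c)
roots = ((-b + disc) / (2*a), (-b - disc) / (2*a))
print(roots)                # ((-1+2j), (-1-2j))
```

A move that was once illegal under the existing formalism, once admitted as legitimate, unlocks a whole class of previously ‘unsolvable’ problems, which is the point of the paragraph above.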
So the relationship between humans and mathematics is that we create a language in the form of numbers and systems of numbers (base arithmetic) along with operations like addition, multiplication and their inverse functions, among more complex ones like calculus and trigonometry, which then allow us to navigate an abstract landscape that keeps revealing new secrets. But alongside that, we have developed an epistemology called physics that appears to uncover a suite of mathematical rules or laws that underpin the Universe at all levels of our comprehension.
I haven’t mentioned the online article (from Quanta Magazine), which is an exposition on the work of Astrid Eichhorn, a physicist at Heidelberg University in Germany, who is exploring, in her own words, ‘a conservative theory of quantum gravity’, which she calls ‘asymptotic safety’. I won’t elaborate, but its relevance to this discussion is that she’s using mathematics to explore new models of reality (my expression) that may solve existing conundrums or ones yet to be found. Specifically, she’s looking at a ‘fractal space-time’, which, as the author (Charlie Wood) says, ‘sounds pretty out there.’
I’m not advocating her theory or any of the ones I read about in New Scientist; I just want to point out that we implicitly believe that any theory or model of reality must be mathematical.
So mathematics provides us with the link between epistemology and ontology that I opened this discussion with. And implicit in this belief is another belief that it pre-exists the universe that it not only describes, but to some extent, rules.
As I said in my last post: A mathematical epistemology can only be verified with numbers. We need to take measurements, which is what DESI is doing, to give a current, ongoing example. But all our mathematical models of reality have limitations – there are no exceptions. I think this will always be true, and in the same way that Gödel’s Incompleteness Theorem ‘proved’ that our formal knowledge of mathematics can never be complete, I think our epistemology of the physical Universe will likewise remain incomplete. So in the same way that mathematics appears to have secrets that may never be revealed, so does the Universe we inhabit, at all scales.