Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

04 April 2026

Mathematics, language and reality

I recently read an online article in Quanta Magazine, titled How Writing Changes Mathematical Thought, featuring David E. Dunning, ‘a historian of mathematics at the Smithsonian’s National Museum of American History’, who was interviewed by John Pavlus.


In particular, Dunning pointed out how the notation we use affects the way we explore mathematics and even comprehend it. The most significant innovation was the introduction of Hindu-Arabic numerals to Europe, along with their corresponding arithmetic, which we owe to Fibonacci (of Fibonacci numbers fame) in the early 13th Century (his Liber Abaci appeared in 1202). Tibees gives a good summary in this short video. The thing is that we would really struggle to do modern mathematics using Roman numerals, and it would be impossible for computers.


Dunning gives the example of the difference between Newton’s and Leibniz’s notation for calculus and how “Leibniz’s calculus got used a lot more in continental Europe, and it just grew and was fertile in a way that Newton’s wasn’t.” Which is why we all use Leibniz’s notation today.


But there is a more fundamental point, I believe, that Dunning doesn’t discuss. And that is the Wittgensteinian (new word) principle that the language we use limits what we can think about, because we all think in a language. And also, it’s the language of mathematics that I believe resolves the argument going back to Plato and Aristotle, over whether mathematics is invented or discovered. On that last point, we invent the language, but the relationships that the language describes are discovered. I contend there is a tendency to conflate the language of mathematics with mathematical formulations, because we learn them in tandem.


I pointed out in a much earlier post that there is also a tendency to treat mathematics as just another language, like the ones we think in, which takes the conflation I mention above to another level. The fact is that we still use the language we think in to describe mathematical notation and relationships. In other words, we absorb the language we use to do mathematics into our thinking language as a subset thereof. And this brings me back to Wittgenstein’s point, because we keep expanding our language to capture new concepts and ideas, otherwise we cognitively stagnate. And I see mathematical language as such an expansion, otherwise we can’t understand the concepts it’s describing. And perhaps this is why so many people struggle with mathematics in school, but that’s another topic.


One of Pavlus’s questions was: Why don’t we teach people to do math with, say, a more pictorial or visual kind of notation?


This is what led Dunning to talk about Newton’s and Leibniz’s respective calculus notation, but it got me thinking in a different direction.


Specifically, how we are visual creatures, and how I try to visualise mathematical concepts as much as possible. A graph can tell you so much more than the written equation can, and makes some concepts very easy to grasp. The best example that most people would be familiar with is a sine wave. You can see where the wave is zero, where it’s 1 and -1, and everything in between, and how it cycles in periods of 2π radians. Plotting sine and cosine on the same graph also shows how the cosine of an angle is 90 degrees (π/2 radians) out of phase with the corresponding sine wave.
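
The phase relationship is easy to verify numerically. A minimal sketch in Python (my example, not from the article):

```python
import math

# What the graph shows, checked numerically: cosine is the sine wave
# shifted a quarter cycle (pi/2 radians) ahead.
for k in range(8):
    x = k * math.pi / 4      # sample points across one full cycle
    assert abs(math.cos(x) - math.sin(x + math.pi / 2)) < 1e-12

# And the wave cycles in periods of 2*pi radians.
assert abs(math.sin(1.0) - math.sin(1.0 + 2 * math.pi)) < 1e-12
```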


Another example most of us are familiar with is a parabola being the graphical representation of a quadratic equation. The zeros (or roots) are where the graph crosses the x axis, which it can do at most twice, so a quadratic can have 2 real roots. However, it has just one (repeated) root if the parabola kisses the x axis, and no real roots if it doesn’t touch it at all. In that case the roots are complex (involving √-1), but to see them you need another graph which includes an imaginary axis along with the real axis.
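
The three cases are straightforward to check in code. A sketch using Python’s built-in `cmath` (the function name is my own):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0; complex when the parabola misses the x axis."""
    d = cmath.sqrt(b * b - 4 * a * c)   # square root of the discriminant (may be imaginary)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

assert quadratic_roots(1, -3, 2) == (2, 1)     # crosses the x axis twice
assert quadratic_roots(1, -2, 1) == (1, 1)     # kisses the x axis: one repeated root
assert quadratic_roots(1, 0, 1) == (1j, -1j)   # never touches it: imaginary roots
```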


In fact, complex algebra is a lot easier to understand if it’s depicted graphically. I’m a little annoyed that it wasn’t taught to me that way when I first encountered it. By depicting it on an Argand diagram, where the imaginary (i) axis replaces the y axis in a Cartesian diagram, and using polar co-ordinates, you can see how multiplying two complex numbers means adding their angles (and multiplying their moduli), and multiplying a complex number by i means rotating everything anticlockwise by 90 degrees.
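
Python’s built-in complex numbers let you check both claims directly; a minimal sketch:

```python
import cmath
import math

z = cmath.rect(2.0, math.pi / 6)   # polar form: modulus 2, angle 30 degrees
w = cmath.rect(3.0, math.pi / 3)   # modulus 3, angle 60 degrees

# Multiplication adds the angles (and multiplies the moduli): 30 + 60 = 90 degrees
product = z * w
assert abs(abs(product) - 6.0) < 1e-12
assert abs(cmath.phase(product) - math.pi / 2) < 1e-12

# Multiplying by i rotates anticlockwise by 90 degrees (pi/2 radians)
rotated = z * 1j
assert abs(cmath.phase(rotated) - (math.pi / 6 + math.pi / 2)) < 1e-12
```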


Even esoteric topics like Riemann’s hypothesis become amenable to comprehension by mortals when demonstrated graphically, as this video shows quite effectively.


Calculus is taught using graphs: the slope of the tangent to a curve being found by differentiation and the area under a curve being found by integration. That one is the inverse of the other is the content of the fundamental theorem of calculus, though I’m not sure anyone can give you an intuitive explanation of why. Differential calculus allows one to grasp the concept of instantaneity, which doesn’t physically exist, but it’s an idealism that is more than useful. Likewise, it’s almost incomprehensible that an infinite number of infinitesimal strips can give you a finite area under a curve, but it works. Calculus is like magic.
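
The inverse relationship can at least be verified numerically. A sketch (midpoint-rule integration and a small central difference, nothing fancy):

```python
# The fundamental theorem, checked numerically: differentiating the running
# integral of a function gives the function back.
def f(x):
    return x * x

def integral(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f from a to b."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# d/dx of F(x) = integral of f from 0 to x, via a small central difference
x, eps = 1.5, 1e-4
dF = (integral(f, 0, x + eps) - integral(f, 0, x - eps)) / (2 * eps)
assert abs(dF - f(x)) < 1e-3   # recovers f(1.5) = 2.25
```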


But I extend this visualisation into physics, where everything is depicted in the language of mathematics.


I never understood Einstein’s General Theory of Relativity (GR), which is a theory of gravity, until I grasped the concept of a geodesic, which can be visualised. And I can thank Richard Feynman for explaining it relatively succinctly, including mathematical formulations, in his excellent book, Six Not-So-Easy Pieces. A geodesic is the shortest path between 2 points, and on a sphere it’s always an arc of a great circle. Intercontinental aircraft fly along geodesics for that very reason, though their routes appear curved when the map is projected onto a flat surface.


But here’s the thing, as pointed out by Feynman: “In a uniform gravitational field the trajectory with maximum proper time for a fixed elapsed time is a parabola.” I’ll describe what he means by ‘maximum proper time’ in a moment, because that’s the key to understanding it. But we all learned that a projectile travels through the air following a parabolic curve in high school physics, without knowing anything about GR. We did it using Newton’s equations. But Einstein gives us the same result, assuming the object is not travelling at relativistic speeds.
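
Newton’s version is a few lines of algebra: with constant downward acceleration, height is a quadratic function of horizontal distance, i.e. a parabola. A quick check (the numbers are arbitrary):

```python
# Newton: x = vx*t and y = vy*t - g*t^2/2. Eliminating t (t = x/vx) gives
# y = (vy/vx)*x - (g/(2*vx^2))*x^2, a quadratic in x: a parabola.
g, vx, vy = 9.8, 10.0, 20.0

def height(t):
    return vy * t - 0.5 * g * t * t

for t in (0.5, 1.0, 2.0):
    x = vx * t
    y_from_x = (vy / vx) * x - (g / (2 * vx * vx)) * x * x
    assert abs(height(t) - y_from_x) < 1e-9
```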


And here’s why, again quoting Feynman: “An object always moves from one place to another so that a clock carried on it gives a longer time than any other trajectory” (italics in the original). In his words, “The time measured by a moving clock is called its ‘proper time’ (τ).” In free fall, the trajectory makes the proper time of an object a maximum. And that’s what’s called a geodesic in GR.
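
This can be checked numerically in the weak-field, slow-motion limit, where dτ ≈ dt·(1 + gy/c² − v²/2c²). The sketch below (my construction, not Feynman’s) compares the free-fall parabola with a path that just hovers at y = 0 between the same two events: in this approximation the parabola accumulates more proper time, by g²T³/24c².

```python
# Compare proper-time corrections along two worldlines with the same endpoints:
# free fall (a parabola in spacetime) versus hovering at y = 0 for time T.
g, c, T, n = 9.8, 3.0e8, 4.0, 100000
dt = T / n

excess = 0.0   # proper time of the free-fall arc minus that of the hovering path
for i in range(n):
    t = (i + 0.5) * dt
    y = 0.5 * g * t * (T - t)    # free-fall arc with y(0) = y(T) = 0
    v = 0.5 * g * (T - 2 * t)    # its velocity dy/dt
    excess += (g * y / c**2 - v * v / (2 * c**2)) * dt

assert excess > 0   # free fall maximises proper time
assert abs(excess - g**2 * T**3 / (24 * c**2)) < 1e-20
```

The gain is tiny (a few femtoseconds over four seconds), but it is always positive: any deviation from the free-fall arc makes the clock read less.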


And that paragraph allowed me to finally comprehend General Relativity. Any deviation of an object from free fall in a gravitational field (from its geodesic), and remember there is a gravitational field everywhere in the Universe, means its clock will slow down, which is what SR (the special theory of relativity) tells us. I’ve always believed that SR is dependent on GR and not the other way round, and Feynman indirectly confirmed this for me.


But visualisations can be misleading, and I think the wavefunction (Ψ) in Schrodinger’s equation is a case in point, because it’s not a physical wave. It exists in Hilbert space, which, in principle, can have infinite dimensions. There is another way of expressing the same quantum mechanical (QM) phenomena, and that is with Heisenberg’s matrix formulation. In fact, Heisenberg’s formulation preceded Schrodinger’s, but they are mathematically equivalent. And this brings me back to Dunning’s point: the language we mathematically express something in will give an intuitively different picture.


I recently read an article on Heisenberg’s revolutionary discoveries in Philosophy Now (Issue 172, Feb/Mar 2026, by Dr Kanan Purkayastha), which made the point that ‘Heisenberg attempted to calculate the behaviour of electrons around atoms using quantities we can observe’, so basically an epistemological approach. On the other hand, Schrodinger started with a principle postulated by de Broglie: that an electron, like a photon, has an associated wavelength determined by its momentum, which I would call an ontological approach. Philip Ball in his book, Beyond Weird, made a similar point: that Heisenberg’s matrix approach is ‘epistemic’ and Schrodinger’s wave function approach is ‘ontic’ (his terms).


Many people originally thought that the famous Heisenberg Uncertainty Principle was an epistemological one, including Einstein, who said it was “just an expression of the limits of what can be determined by measurements. Or in philosophers’ terms, the nature of uncertainty would be an epistemic one.”


However, it falls out of Schrodinger’s wavefunction via a Fourier transform, so it is a mathematical constraint, not just a physical one. Schrodinger’s wavefunction also entails superposition and entanglement, which led Schrodinger to state that entanglement is the defining feature of quantum mechanics, meaning it’s what separates it from classical physics. The other thing about Schrodinger’s equation is that it can only give us probabilities, and following an observation, it no longer applies. This leads me to argue that the wavefunction exists in the future; an idea, as far as I know, not shared by anyone else except Freeman Dyson (who is no longer with us).
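
The Fourier connection can be demonstrated numerically. A sketch in plain Python (my example): for a Gaussian wave packet, the spread in position Δx and the spread in wavenumber Δk always multiply out to 1/2, so squeezing one broadens the other.

```python
import math

def uncertainty_product(sigma, L=40.0, n=8000):
    """Return Dx*Dk for a Gaussian wave packet of width sigma."""
    dx = 2 * L / n
    xs = [-L + i * dx for i in range(n + 1)]
    psi = [math.exp(-x * x / (2 * sigma ** 2)) for x in xs]
    norm = sum(p * p for p in psi) * dx
    # Dx^2 = <x^2> over the probability density |psi|^2
    var_x = sum(x * x * p * p for x, p in zip(xs, psi)) * dx / norm
    # Dk^2 = <k^2> = integral of |dpsi/dx|^2 over integral of |psi|^2,
    # using central differences for the derivative
    dpsi = [(psi[i + 1] - psi[i - 1]) / (2 * dx) for i in range(1, n)]
    var_k = sum(d * d for d in dpsi) * dx / norm
    return math.sqrt(var_x * var_k)

# Narrow in x means wide in k, and vice versa: the product is fixed at 1/2
for sigma in (0.5, 1.0, 2.0):
    assert abs(uncertainty_product(sigma) - 0.5) < 1e-3
```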


Probabilities were the subject of a recent post, but the thing is we only apply probabilities to things that are yet to happen. After something has happened, its probability is no longer relevant; it effectively becomes 1. And this is what happens in QM, as described above. To quote from another online article on Phys.org:

The results showed that the photon's physical presence was distributed across both paths simultaneously, demonstrating that the particle is truly delocalized until a detector forces it into a single location.


This is identical to a description provided by Alain Aspect that I reported in a not-so-recent post. But, as Freeman Dyson explains, it corresponds to a change in perspective by the observer from the future to the past, which occurs at the time of ‘detection’.


I’d like to make a point about the fact that probabilities exist, not only in QM but also in classical physics – after all, the entire gambling industry is based on probabilities. I contend this means that the Universe is not deterministic. Simplistic, yes, but I can’t think of a better argument. It’s also my argument against claims of so-called prophecy. You either believe in free will or you believe in prophecy, but you can’t believe in both.


I could imagine having a discussion (argument) with a physicist on this issue, where they claim that probabilities are a statistical outcome, a consequence of what we cannot know. Therefore, the outcome of a coin toss, for example, could be deterministic, and the probability a consequence of our ignorance, not the event. In fact, I had this discussion (over coin tosses) with the physicist Mark John Fernee (University of Queensland). Chaos theory mathematically ensures that the outcome can never be known definitively, which is an epistemological argument. However, I argue that chaos occurs ontologically as well, and that the entire universe’s evolution depends on this principle.
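
The mathematical point can be made concrete with the simplest chaotic system there is, the logistic map (my illustration, not Fernee’s): it is completely deterministic, yet a difference of one part in a billion in the starting value soon swamps any prediction.

```python
# The logistic map x -> r*x*(1 - x) at r = 4 is fully chaotic.
r = 4.0
a, b = 0.2, 0.2 + 1e-9       # two histories differing by one part in a billion
diverged_at = None
for step in range(1, 61):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if diverged_at is None and abs(a - b) > 0.1:
        diverged_at = step   # the tiny difference has grown to macroscopic size

assert diverged_at is not None   # within 60 steps the histories disagree completely
```

No randomness anywhere in those lines, yet knowing the starting value to nine decimal places buys you only a few dozen steps of prediction.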


Just as in the case with Heisenberg’s Uncertainty Principle and people thinking it was a consequence of what we can't physically measure, many physicists argue that chaos theory is a consequence of our limitations of observation. However, I argue that in both cases, the limitation is built into the mathematics, which makes it a feature of the Universe.


So, I’ve gone way off track, but while we need a language to understand and express the mathematics we discover, nature is already determined by the rules that mathematics dictates.

