Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

04 April 2026

Mathematics, language and reality

I recently read an online article in Quanta Magazine, titled How Writing Changes Mathematical Thought, featuring David E Dunning, ‘a historian of mathematics at the Smithsonian’s National Museum of American History’, who was interviewed by John Pavlus.

 

In particular, Dunning pointed out how the notation we use affects the way we explore mathematics and even comprehend it. The most significant innovation was the introduction of Hindu-Arabic numerals, along with their corresponding arithmetic, which we owe to Fibonacci (of Fibonacci numbers fame) in the early 13th Century. Tibees gives a good summary in this short video. The thing is that we would really struggle to do modern mathematics using Roman numerals, and it would be impossible for computers.

 

Dunning gives the example of the difference between Newton’s and Leibniz’s notation for calculus and how “Leibniz’s calculus got used a lot more in continental Europe, and it just grew and was fertile in a way that Newton’s wasn’t.” Which is why we all use Leibniz’s notation today.

 

But there is a more fundamental point, I believe, that Dunning doesn’t discuss. And that is the Wittgensteinian (new word) principle that the language we use limits what we can think about, because we all think in a language. And also, it’s the language of mathematics that I believe resolves the argument going back to Plato and Aristotle, whether mathematics is invented or discovered. On that last point, we invent the language but the relationships that the language describes are discovered. I contend there is a tendency to conflate the language of mathematics with mathematical formulations, because we learn them in tandem.

 

I pointed out in a much earlier post that there is also a tendency to treat mathematics as just another language, like the ones we think in, which takes the conflation I mention above to another level. The fact is that we still use the language we think in to describe mathematical notation and relationships. In other words, we absorb the language we use to do mathematics into our thinking language as a subset thereof. And this brings me back to Wittgenstein’s point, because we keep expanding our language to capture new concepts and ideas, otherwise we cognitively stagnate. And I see mathematical language as such an expansion, otherwise we can’t understand the concepts it’s describing. And perhaps this is why so many people struggle with mathematics in school, but that’s another topic.

 

One of Pavlus’s questions was: Why don’t we teach people to do math with, say, a more pictorial or visual kind of notation?

 

This is what led Dunning to talk about Newton’s and Leibniz’s respective calculus notation, but it got me thinking in a different direction.

 

Specifically, how we are visual creatures, and how I try to visualise mathematical concepts as much as possible. A graph can tell you so much more than the written equation can, and makes some concepts very easy to grasp. The best example that most people would be familiar with is a sine wave. You can see where the wave is zero, where it’s 1 and -1, and everything in between, and how it cycles in periods of 2π radians. Depicting sine and cosine on the same graph also shows how the cosine of an angle is 90 degrees (π/2 radians) out of phase with the corresponding sine wave.
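For anyone who wants to check the picture numerically, here is a minimal Python sketch (my own illustration) of the two relationships the graph makes visible: the quarter-cycle phase shift between sine and cosine, and the 2π period.

```python
import math

# The relationships visible on the graph, checked numerically:
# cos(x) is sin(x) shifted by a quarter cycle (pi/2 radians, i.e. 90
# degrees), and the whole wave repeats every 2*pi radians.
for x in [0.0, math.pi / 6, 1.0, 2.5]:
    assert math.isclose(math.sin(x + math.pi / 2), math.cos(x), abs_tol=1e-12)
    assert math.isclose(math.sin(x + 2 * math.pi), math.sin(x), abs_tol=1e-12)
print("phase and period identities hold")
```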

 

Another example most of us are familiar with is a parabola being the graphical representation of a quadratic equation. The zeros (or roots) are where the graph crosses the x axis; since a parabola can cross it no more than twice, a quadratic can have at most 2 real roots. However, you can have just one (repeated) root if the parabola kisses the x axis, and no real roots if it doesn’t touch it at all. In that last case, as we all know, the roots are complex (involving √-1), but you need another graph which includes an imaginary axis along with the real axis to see them.
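Here is a short Python sketch of the three cases (my own illustration, using the standard cmath module so the complex case falls out naturally):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0, real or complex."""
    d = cmath.sqrt(b * b - 4 * a * c)   # square root of the discriminant
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# Two real roots: the parabola crosses the x axis twice.
print(quadratic_roots(1, -3, 2))   # x^2 - 3x + 2, roots 2 and 1
# One repeated root: the parabola just kisses the x axis.
print(quadratic_roots(1, -2, 1))   # (x - 1)^2, root 1 twice
# No real roots: the parabola misses the axis; the roots are complex.
print(quadratic_roots(1, 0, 1))    # x^2 + 1, roots i and -i
```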

 

In fact, complex algebra is a lot easier to understand if it’s depicted graphically. I’m a little annoyed that it wasn’t taught to me that way when I first encountered it. By depicting it on an Argand diagram, where the imaginary (i) axis replaces the y axis in a Cartesian diagram, and using polar co-ordinates, you can see how multiplying two complex numbers means adding their angles (and multiplying their moduli), and multiplying a complex number by i means rotating everything anticlockwise by 90 degrees.
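Both rules are easy to verify with Python’s standard cmath module (a minimal sketch of my own):

```python
import cmath, math

# In polar form z = r*e^(i*theta), multiplying two complex numbers
# multiplies their moduli and adds their angles.
z1 = cmath.rect(2.0, math.radians(30))   # modulus 2, angle 30 degrees
z2 = cmath.rect(3.0, math.radians(40))   # modulus 3, angle 40 degrees
r, theta = cmath.polar(z1 * z2)
print(r, math.degrees(theta))            # modulus 6, angle 70 degrees

# Multiplying by i (modulus 1, angle 90 degrees) rotates the whole
# plane anticlockwise by a quarter turn.
z = 1 + 1j                               # sits at 45 degrees
print(math.degrees(cmath.phase(z * 1j))) # now at 135 degrees
```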

 

Even esoteric topics like Riemann’s hypothesis become amenable to comprehension by mortals when demonstrated graphically, as this video shows quite effectively.

 

Calculus is taught using graphs: the slope of the tangent to a curve being found by differentiation and the area under a curve being found by integration. Why one is the inverse of the other, I’m not sure anyone can tell you intuitively. Differential calculus allows one to grasp the concept of instantaneity, which doesn’t physically exist, but it’s an idealism that is more than useful. Likewise, it’s almost incomprehensible that an infinite number of infinitesimal strips can give you a finite area under a curve, but it works. Calculus is like magic.
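Here is a rough numerical hint (my own sketch, nothing more) that the two operations undo each other: accumulate the area under a curve into a new function, take that function’s slope, and the original curve comes back.

```python
# A numerical hint at why integration and differentiation are inverses:
# accumulate the area under f into a function F, then take the slope
# of F, and you recover f.
def f(x):
    return x**3 - 2*x                      # any smooth test function

def area(f, a, b, n=100_000):
    """Sum of many thin strips under f between a and b (midpoint rule)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def slope(g, x, h=1e-5):
    """Central-difference estimate of the slope of g at x."""
    return (g(x + h) - g(x - h)) / (2 * h)

F = lambda x: area(f, 0.0, x)              # the accumulated-area function
print(slope(F, 1.7))                       # these two agree closely:
print(f(1.7))
```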

 

But I extend this visualisation into physics, where everything is depicted in the language of mathematics.

 

I never understood Einstein’s General Theory of Relativity (GR), which is a theory of gravity, until I grasped the concept of a geodesic, which can be visualised. And I can thank Richard Feynman for explaining it relatively succinctly, including mathematical formulations, in his excellent book, Six Not-So-Easy Pieces. A geodesic is the shortest distance between 2 points, and on a sphere, it’s always a great circle. Intercontinental aircraft fly along geodesics for that very reason, though they appear curved when the map is projected onto a flat surface.

 

But here’s the thing, as pointed out by Feynman: “In a uniform gravitational field the trajectory with maximum proper time for a fixed elapsed time is a parabola.” I’ll describe what he means by ‘maximum proper time’ in a moment, because that’s the key to understanding it. But we all learned in high school physics that a projectile travels through the air following a parabolic curve, without knowing anything about GR. We did it using Newton’s equations. But Einstein gives us the same result, assuming the object is not travelling at relativistic speeds.

 

And here’s why, again quoting Feynman: “An object always moves from one place to another so that a clock carried on it gives a longer time than any other trajectory” (italics in the original). In his words, “The time measured by a moving clock is called its ‘proper time’” (τ). In free fall, the trajectory makes the proper time of an object a maximum. And that’s what’s called a geodesic in GR.
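A back-of-the-envelope numerical check of this (my own sketch, not Feynman’s): in the weak-field limit, dτ/dt ≈ 1 + (gz − v²/2)/c², so between fixed endpoints, maximising proper time amounts to maximising the integral of (gz − v²/2), the gravitational time gain minus the velocity time dilation. The free-fall parabola beats rival paths with the same endpoints:

```python
import math

# Weak-field sketch of 'maximum proper time': between fixed endpoints,
# the free-fall path maximises the integral of (g*z - v**2/2).
g, T, N = 9.8, 2.0, 20_000
dt = T / N

def time_gain(path):
    """Integral of (g*z - v**2/2) dt along z(t), with z(0) = z(T) = 0."""
    total = 0.0
    for i in range(N):
        t = i * dt
        z = path(t)
        v = (path(t + dt) - z) / dt
        total += (g * z - 0.5 * v * v) * dt
    return total

parabola = lambda t: 0.5 * g * t * (T - t)   # free-fall trajectory
straight = lambda t: 0.0                     # hovering in place
wobble = lambda t: parabola(t) + math.sin(math.pi * t / T)

print(time_gain(parabola))   # the parabola beats both rivals
print(time_gain(straight))
print(time_gain(wobble))
```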

 

And that paragraph allowed me to finally comprehend General Relativity. Any deviation of an object from free fall in a gravitational field (from its geodesic), and remember there is a gravitational field everywhere in the Universe, means its clock will slow down, which is what SR (the special theory of relativity) tells us. I’ve always believed that SR is dependent on GR and not the other way round, and Feynman indirectly confirmed this for me.

 

But visualisations can be misleading, and I think the wavefunction (Ψ) in Schrodinger’s equation is a case-in-point, because it’s not a physical wave. It exists in Hilbert space which, in principle, can have infinite dimensions. There is another way of expressing the same quantum mechanical (QM) phenomena and that is with Heisenberg’s matrix formulation. In fact, Heisenberg’s formulation preceded Schrodinger’s, but they are mathematically equivalent. And this brings me back to Dunning’s point that the language we mathematically express something in will give an intuitively different picture.

 

I recently read an article on Heisenberg’s revolutionary discoveries in Philosophy Now (Issue 172, Feb/Mar 2026, by Dr Kanan Purkayastha), which made the point that ‘Heisenberg attempted to calculate the behaviour of electrons around atoms using quantities we can observe’, so basically an epistemological approach. On the other hand, Schrodinger started with a principle postulated by De Broglie that an electron’s momentum could be formulated as a wave, similar to a photon, which I would call an ontological approach. Philip Ball in his book, Beyond Weird, made a similar point: that Heisenberg’s matrix approach is ‘epistemic’ and Schrodinger’s wave function approach is ‘ontic’ (his terms).

 

Many people originally thought that the famous Heisenberg Uncertainty Principle was an epistemological one, including Einstein, who said it was “just an expression of the limits of what can be determined by measurements. Or in philosophers’ terms, the nature of uncertainty would be an epistemic one.”

 

However, it falls out of Schrodinger’s equation by using a Fourier transform, so it is a mathematical constraint, not just a physical one. Schrodinger’s wavefunction also entails superposition and entanglement, which led Schrodinger to state that entanglement is the defining feature of quantum mechanics, meaning it’s what separates it from classical physics. The other thing about Schrodinger’s equation is that it can only give us probabilities, and following an observation, it no longer applies. This leads me to argue that the wavefunction exists in the future; as far as I know, an idea not shared by anyone else except Freeman Dyson (who is no longer with us).
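The mathematical origin of the constraint can be shown numerically. In the sketch below (my own, pure Python), a Gaussian wave packet of width σ is Fourier-transformed directly; the product of the spread in x and the spread in k comes out to 1/2 for any σ, which is the lower bound in Δx·Δp ≥ ħ/2 once momentum is identified as p = ħk. The narrower the packet in x, the wider it is in k, as a matter of mathematics, not measurement.

```python
import math

# The uncertainty principle as a property of Fourier transforms:
# for a Gaussian packet of any width sigma, (spread in x) * (spread in k)
# comes out to 1/2.
def spread_product(sigma, L=10.0, n=300):
    xs = [(-L + 2 * L * i / n) * sigma for i in range(n + 1)]
    dx = xs[1] - xs[0]
    f = [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]

    def std(values, grid, step):
        w = [v * v for v in values]          # probability density ~ |f|^2
        norm = sum(w) * step
        mean = sum(g * v for g, v in zip(grid, w)) * step / norm
        var = sum((g - mean) ** 2 * v for g, v in zip(grid, w)) * step / norm
        return math.sqrt(var)

    ks = [(-L + 2 * L * i / n) / sigma for i in range(n + 1)]
    dk = ks[1] - ks[0]
    # direct (slow) Fourier transform, magnitude only
    F = [abs(sum(fv * complex(math.cos(k * x), -math.sin(k * x))
                 for fv, x in zip(f, xs)) * dx) for k in ks]
    return std(f, xs, dx) * std(F, ks, dk)

print(spread_product(0.5))   # ~0.5
print(spread_product(2.0))   # ~0.5, independent of the width
```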

 

Probabilities were the subject of a recent post, but the thing is we only apply probabilities to things that are yet to happen. After something has happened, its probability is no longer relevant; it effectively becomes 1. And this is what happens in QM, as described above. To quote from another online article, from Phys Org:

The results showed that the photon's physical presence was distributed across both paths simultaneously, demonstrating that the particle is truly delocalized until a detector forces it into a single location.

 

This is identical to a description provided by Alain Aspect that I reported in a not-so-recent post. But, as Freeman Dyson explains, it corresponds to a change in perspective by the observer from the future to the past, which occurs at the time of ‘detection’.

 

I’d like to make a point about the fact that probabilities exist, not only in QM but also in classical physics – after all, the entire gambling industry is based on probabilities. I contend that this means the Universe is not deterministic. Simplistic, yes, but I can’t think of a better argument. It’s also my argument against claims of so-called prophecy. You either believe in free will or you believe in prophecy, but you can’t believe in both.

 

I could imagine having a discussion (argument) with a physicist on this issue, where they claim that probabilities are a statistical outcome, a consequence of what we cannot know. Therefore, the outcome of a coin toss, for example, could be deterministic, and the probability is a consequence of our ignorance, not of the event. In fact, I had this discussion (over coin tosses) with physicist Mark John Fernee (University of Queensland). Chaos theory mathematically ensures the outcome can never be known definitively, which is an epistemological argument. However, I argue that chaos occurs ontologically as well, and that the evolution of the entire universe depends on this principle.
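A minimal illustration of that epistemological limit (my own sketch, using the textbook logistic map rather than coin tosses): an initial difference far below any conceivable measurement precision is amplified until all predictive power is gone.

```python
# Sensitive dependence on initial conditions, the mathematical core of
# chaos, in the textbook example: the logistic map x -> r*x*(1-x).
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.4)
b = logistic_orbit(0.400000001)   # differs only in the 9th decimal place
print(abs(a[10] - b[10]))                              # still tiny
print(max(abs(x - y) for x, y in zip(a[30:], b[30:]))) # order 1
```

The gap roughly doubles each step, so no finite measurement precision buys more than a few extra steps of prediction.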

 

Just as in the case with Heisenberg’s Uncertainty Principle and people thinking it was a consequence of what we can't physically measure, many physicists argue that chaos theory is a consequence of our limitations of observation. However, I argue that in both cases, the limitation is built into the mathematics, which makes it a feature of the Universe.

 

So, I’ve gone way off track, but while we need a language to understand and express the mathematics we discover, nature is already determined by the rules that mathematics dictates.

 

14 March 2026

Epistemology, ontology and the mathematical connection between them

It’s been a philosophical obsession of mine to try and understand the deep connection between mathematics, sentience and the physical universe. A recent video, an online article and a New Scientist article have all contributed to my reappraisal of these apparently disparate yet seemingly interdependent phenomena. The last post I wrote also triggered a reassessment, where I brought up the inherent tension and interrelationship between ontology and epistemology. I contend (though I didn’t spell it out in that post) that there is a loop between epistemology and ontology, which hopefully will become clear during this discourse.

 

I’ll start with the New Scientist article (7 March 2026, pp.31-40), which is really a collection of articles by different writers, and elaborates on different responses to recent data from DESI (Dark Energy Spectroscopic Instrument). DESI suggests that the lambda constant (Λ), part of the ΛCDM (Lambda Cold Dark Matter) model of the Universe, may not be constant after all. Λ represents the cosmological constant, originally formulated by Einstein, then dropped by Einstein, then reinstated after his death when more accurate measurements of the Universe’s expansion, and indeed acceleration, required its insertion (as an adjunct to Einstein’s equation for General Relativity, GR). That’s a nutshell exposition, but the consequences are explained in the next paragraph.

 

If Λ does remain constant the Universe will accelerate to a point where virtually everything currently observable will disappear over the horizon (yes, there is a horizon for the entire universe). However, DESI suggests that may not happen if Λ decreases in value as the Universe ages. The jury is still out, as they say.

 

By ‘responses’ to DESI, I mean theories, which are in essence mathematical models, and that’s what I want to focus on. This is a case where measurements, therefore empirical data, have led to existing theories being put under strain, and therefore new models or theories are being formulated. For those familiar with Thomas Kuhn’s seminal tome, The Structure of Scientific Revolutions, this is arguably an example of a ‘scientific revolution’ in progress. Kuhn argued that advances in science have occurred in ‘revolutions’, not in gradual increments as commonly believed. He coined the term ‘paradigm shift’ to describe this epistemological phenomenon. What’s more, he argues this ‘shift’ inexorably arises when new data no longer agrees with an existing theory.

 

However, others might argue that the paradigm shift precedes the data confirming it. But I think it’s a combination. To give a well-known historical example: the Copernican revolution overturned the longstanding Ptolemaic model of the Universe without a massive change in known data. In fact, Stephen Hawking argued in his book, The Grand Design, that both theories fitted the observations of the day.

 

Of course, Galileo famously followed up on Copernicus at great personal risk, and one of his arguments centred around the fact that he could observe moons around Jupiter using a new-fangled device called a telescope. Then Kepler used the extensive observational data collected by Tycho Brahe to mathematically demonstrate that planets orbit in ellipses, not circles. It’s hard for us to imagine in the 21st Century just how big a revolutionary idea that was. It’s a case where mathematics provided a key role in formulating his thesis, and that has become increasingly pertinent ever since.

 

Then Newton went further, using his newly discovered (or invented) mathematical tool called calculus to show that the orbits of the planets were determined by gravity, which also kept him bound to Earth. Who would have thought that the same phenomenon that keeps you on Earth also keeps the moon in orbit and the very planets in orbit around the sun? That’s a huge leap – a ‘paradigm shift’ of enormous consequence.

 

And the story continues with Einstein, building on Newton and Maxwell, where he derived mathematical formulae to describe phenomena yet to be observed, as well as explain phenomena that had been observed yet hitherto remained inexplicable. Around the same time, Planck used empirical data to arrive at a constant (h), now called Planck’s constant, which Planck originally considered to be just a mathematical trick to get the right answer. It was Einstein who realised its true significance when he used it to explain the photo-electric effect. By the way, another constant, c (the speed of light), actually falls out of Maxwell’s equations, and it was Einstein’s genius to realise this was a ‘law’ of the Universe and not just a mathematical accident.
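That last claim can be checked with one line of arithmetic: Maxwell’s equations give the speed of electromagnetic waves as 1/√(μ₀ε₀), where μ₀ and ε₀ are the vacuum permeability and permittivity, both measurable from electricity and magnetism experiments alone. A quick sketch:

```python
import math

# c falls out of Maxwell's equations as 1/sqrt(mu0 * eps0), using two
# constants that come from electricity and magnetism, not optics.
mu0 = 4e-7 * math.pi          # vacuum permeability (classical SI value)
eps0 = 8.8541878128e-12       # vacuum permittivity
c = 1 / math.sqrt(mu0 * eps0)
print(c)                      # ~3.0e8 m/s, the measured speed of light
```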

 

So scientific discoveries, in physics specifically, require a synergistic relationship between mathematics and empirical data that goes both ways.

 

Now I want to discuss the other side of my obsession, which is the relationship between mathematics and sentience – specifically, human sentience – as we have the ability to comprehend mathematics that goes well beyond any evolutionary requirement to merely survive. I recently wrote a post about human exceptionalism, where I mention that ‘our unique grasp of mathematics has been the most salient feature in propelling our advance in knowledge and comprehension of the natural world.’

 

And this leads me to a Curt Jaimungal video I watched recently, where he interviews David Blessis, who is French (going by his accent), and who is apparently a mathematician and possibly a philosopher of mathematics, given the nature of the discussion. He makes a statement, which I found quite profound, despite its lack of esoteric language, or possibly because of it, in answer to Curt’s question, how would he define mathematics?

 

‘My definition of mathematics is imagining things and pretending they really exist.’

 

As a succinct description of mathematical Platonism, it’s hard to go past it. Though I think he was having a dig at Platonism, rather than extolling it as a viable philosophical position.

 

He goes on to call it a ‘side-effect’, after invoking what he calls the ‘logic side of mathematics’, which is how we validate its truth (my expression, not his). To quote Blessis again:

 

‘And the logic side is the core technique to produce that side-effect.’

 

So, while I quote Blessis, I have a different perspective, which I’m sure he wouldn’t agree with. My own view is that mathematics already exists in a purely abstract realm, independently of us and the Universe, which we access using logic.

 

He goes on to introduce a term, ‘meaning-making’, which is what humans do with mathematics that is not evident in its logic.

 

‘There is something about mathematics that cannot be explained by formal logic.’

 

This goes to the heart of Godel’s famous Incompleteness Theorem, though Blessis never mentions it (at least not in this video), which intrinsically differentiates ‘proof’ from ‘truth’. It’s a point that Penrose raises again and again: that humans are able to divine a mathematical truth in a way that a machine never will. And I agree, because I don’t think AI will ever actually ‘understand’ things the way we do, despite increasingly giving the impression that it does. So it would seem that Blessis, Penrose and I are on the same page, when he distinguishes ‘meaning’ from ‘logic’.

 

He goes on to provide an example when he discusses Andrew Wiles’ famous proof of Fermat’s Last Theorem. In the initial publication of his proof, a fatal flaw was found, and Wiles went away to ‘fix his proof’, as Blessis puts it. Then he asks: ‘What does it mean to fix a proof?’ The inference being that a proof is not enough. If you can ‘fix’ a proof, then is any proof valid? He doesn’t specifically ask this, but I got the impression that is what he meant.

 

There has to be ‘meaning’, according to Blessis, but again, I have a different perspective. To me, the fact that Wiles had to ‘fix’ his proof is evidence that there is an objective ‘truth’, which exists before the proof is found. I’ve asked in a much earlier post: if you haven’t solved a puzzle, does that mean there’s no solution until you have? This is consistent with my earlier point that mathematics exists independently of us; but, without logic, we can’t access it.

 

Blessis also talks about axioms, and many people would argue that because the mathematics we render is dependent on axioms, it is therefore dependent on us. He discusses set theory, which I won’t go into because I don’t know enough about it; only that it’s considered foundational to formal mathematics. And the thing is that formal mathematics is dependent on axioms, and it is formal mathematics that lies at the heart of Godel’s Incompleteness Theorem. But here’s the thing: according to Godel, we discover new mathematical truths by expanding our axioms, and that is what has happened in practice. The best example is the discovery (I’ll use that term) of non-Euclidean geometry by adopting curvature. The introduction of new axioms or ‘operations’ that were once forbidden under an existing formalism allows one to find solutions to problems that were previously considered unsolvable: the square root of -1 being the best exemplar I can think of.

 

So the relationship between humans and mathematics is that we create a language in the form of numbers and systems of numbers (base arithmetic) along with operations like addition, multiplication and their inverse functions, among more complex ones like calculus and trigonometry, which then allow us to navigate an abstract landscape that keeps revealing new secrets. But alongside that, we have developed an epistemology called physics that appears to uncover a suite of mathematical rules or laws that underpin the Universe at all levels of our comprehension.

 

I haven’t mentioned the online article (from Quanta Magazine), which is an exposition on the work of Astrid Eichhorn, a physicist at Heidelberg University in Germany, who is exploring, in her own words, ‘a conservative theory of quantum gravity’, which she calls ‘asymptotic safety’. I won’t elaborate, but its relevance to this discussion is that she’s using mathematics to explore new models of reality (my expression) that may solve existing conundrums or ones yet to be found. Specifically, she’s looking at a ‘fractal space-time’, which, as the author (Charlie Wood) says, ‘sounds pretty out there.’

 

I’m not advocating her theory or any of the ones I read about in New Scientist; I just want to point out that we implicitly believe that any theory or model of reality must be mathematical.

 

So mathematics provides us with the link between epistemology and ontology that I opened this discussion with. And implicit in this belief is another belief that it pre-exists the universe that it not only describes, but to some extent, rules.

 

As I said in my last post: A mathematical epistemology can only be verified with numbers. We need to take measurements, which is what DESI is doing, to give a current, ongoing example. But all our mathematical models of reality have limitations – there are no exceptions. I think this will always be true: in the same way that Godel’s Incompleteness Theorem ‘proved’ that our formal knowledge of mathematics can never be complete, our epistemology of the physical Universe will also remain incomplete. So in the same way that mathematics appears to have secrets that may never be revealed, so does the Universe we inhabit, at all scales.


23 February 2026

Do probabilities actually exist?

 Only a philosopher would ask this question, let alone attempt to address it. But that’s what Raymond Tallis did in a 2-page article in Philosophy Now (Issue 172, Feb / Mar 2026).
 
This is the letter I wrote in response. It’s pretty self-explanatory.
 
I always like to read Raymond Tallis because he forces you to practise philosophy, especially if you disagree with him. Such was the case when I read his thesis on The Possibility Bearing Animal, where he concludes that “probabilities are no more objective in the physical world than are possibilities, which of course exist only insofar as they are envisaged” (italics in the original). Implicit in this statement is the belief that probabilities are a function of the mind only, and without a mind to perceive them, they would have no physical manifestation. I’m confident that he would not disagree with my rewording of his core idea.
 
If you read Erwin Schrodinger’s remarkable book, What is Life? he starts by emphasising the role of statistics in physics with the statement, “…the laws of physics and chemistry are statistical throughout.” This is true even without considering quantum mechanics, for which Schrodinger is most famously known, and for which he coined the term ‘statistico-deterministic’ to describe it.
 
Schrodinger was disappointed and frustrated that his eponymous equation required Max Born’s technique of converting the wave function into probabilities to make it relevant to the physical world. But here’s the thing: that conversion to probabilities has made his equation one of the most successful and enduring in the history of physics. Yes, it has limitations, but so does all mathematical physics. (It’s the reason that physics is a neverending endeavour, no matter the field.) This, of course, goes to the heart of Tallis’s thesis.
 
What Tallis is talking about is the distinction between epistemology and ontology, though he doesn’t specifically frame his discussion in those terms. Freeman Dyson, who was a key contributor to Richard Feynman’s QED and a collaborator of his (yet missed out on a Nobel Prize), once warned about the reification of the wave function – making an abstract concept real. Dyson pointed out in a lecture (later turned into a paper) how quantum mechanics cannot describe the past, but only the future, which is why it can only deal in probabilities.
 
Probabilities were originally devised to explain events that people previously believed could only be determined by God. But this is common in the history of physics, including the movements of the planets in the solar system. So I agree with Tallis that probabilities are an epistemology, but they give us knowledge about future events that actually occur, therefore are inherently ontological.
 
The best example is radioactive decay, which we know is manifest as a half-life, and is very accurate within a specific range (the range varies for different isotopes). But here’s the thing: it’s impossible to predict the decay of an individual atom (relevant to Schrodinger’s cat thought experiment), yet it’s extraordinarily, even preternaturally, accurate holistically. My point is that the half-life happens objectively and independently of any human mind, yet it’s determined by a probabilistic phenomenon.
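To illustrate the point (my own toy simulation, not part of the letter): give every atom the same small, independent chance of decaying in each time step, and a sharp half-life emerges even though no individual decay can be predicted.

```python
import random

# Toy model of radioactive decay: every atom has the same small,
# independent chance of decaying in each time step. No individual decay
# is predictable, yet the sample's half-life is strikingly reliable.
random.seed(42)

def decay(n_atoms, p, steps):
    history = [n_atoms]
    for _ in range(steps):
        # each surviving atom independently "rolls the dice"
        history.append(sum(1 for _ in range(history[-1])
                           if random.random() > p))
    return history

h = decay(20_000, p=0.05, steps=20)
# Survival probability 0.95 per step gives a half-life of
# log(2)/log(1/0.95), about 13.5 steps.
print(h[0], h[13], h[14])   # the count halves at around step 13-14
```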
 

 Afterword:

I limited my response to under 500 words, whereas Tallis’s treatise is much longer. There is more to this than I felt could be addressed in a Letter to the Editor.

Note how I tied my last statement to my rewording of his conclusion. I knew all along that I would use radioactive half-lives as my example to demonstrate what I saw as the error in his argument. But I snuck up on it, so-to-speak. I don’t define what I mean by ‘a probabilistic phenomenon’ yet the world is full of them, and it’s the key to my response, because I obviously believe they actually exist. Whereas Tallis effectively argues they only exist in someone’s mind.
 
The thing is that probability, as a formal device (not a colloquial expression), is always a number between 0 and 1, therefore it’s inherently mathematical. That aspect of it is somewhat ignored by Tallis, yet I don’t address it in my letter either. It’s something I would introduce later if we were engaged in a philosophical debate, because, from my perspective, it underlies what this is all about.
 
Mathematical physics is an epistemology, meaning it’s all about knowledge, and since the 20th Century, it often describes an ontology we can’t directly see or experience, yet we know it’s true within specific boundaries. Probabilities are part of that epistemology, but Tallis can discount them because they deal with the future, therefore with events yet to be actualised (by definition) - a point he makes himself. But here’s the thing: they make predictions that are highly accurate – quantum mechanics being a case-in-point.
 
The point I’d make is that while there is a distinction between epistemology and ontology, there is also a connection. Without an underlying ontology that it addresses, an epistemology is meaningless. This is a point I was attempting to make in my letter without saying it out loud. So how do we know an epistemology is true (as per my assertion in the previous paragraph)? Because we can make measurements. A mathematical epistemology can only be verified with numbers. In the case of probabilities, we do this by counting.
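A toy illustration of verifying a probability by counting (my own, using the classic gambler’s question of getting at least one six in four dice rolls, whose calculated probability is 1 - (5/6)⁴, about 0.518):

```python
import random

# Verifying a probability claim "by counting": the chance of at least
# one six in four dice rolls is 1 - (5/6)**4, about 0.518.
random.seed(1)
trials = 200_000
hits = sum(any(random.randint(1, 6) == 6 for _ in range(4))
           for _ in range(trials))
print(hits / trials)          # the counted frequency...
print(1 - (5 / 6) ** 4)       # ...converges on the calculated value
```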


14 February 2026

Homer, Socrates, Gandalf and Bilbo

 A strange combination, but it all makes sense if you read the post. This is a letter that was published in Philosophy Now (Issue 172, Feb / Mar 2026) in response to an article in the previous issue (171). I’m proud to say it was published with only a couple of minor edits in the first paragraph, which I’ve adopted. Otherwise, it’s unchanged, even down to paragraphs, commas and colons.

 
I was interested in Eric Comerford’s imagined conversation between Bilbo and Gandalf on happiness and wellbeing (Philosophy Now, Issue 171). To misquote Socrates, life without challenges is not worth living. There are a couple of issues here, one of them being the role of fiction in humanity’s evolution. Fiction is not unlike dreaming in that we confront scenarios that we might not encounter in real life, yet we can learn from them. In fact, I contend that the language of stories is the language of dreams, and that, if we didn’t dream, stories wouldn’t work.
 
The overcoming of adversity is a universal theme in fiction, going back to Homer’s Odyssey, if not earlier. And of course, J.R.R. Tolkien’s The Lord of the Rings exemplifies this in multiple storylines with multiple characters.
 
All of us, when we reach a certain age, can look back at all the events in our life that ultimately formed our current selves as if we are a piece of clay moulded by life’s experiences. And the thing is that the negative events in our life are just as significant in this process as the positive ones, if not more so. It’s very important to find a purpose, but it invariably involves challenges and also failures. So to revisit Socrates: arguably, a life without failure is not worth living.


07 February 2026

Arguments for and against human exceptionalism

 This was triggered by an article I read in Philosophy Now (Issue 171, Dec 2025 / Jan 2026) by Adam Neiblum who authored Rise of the Nones: The Importance of Freedom from Religion (Hypatia Press, 2023). I don’t normally mention the publisher, but I find it interesting that they are named after the famous female Librarian of Alexandria, Hypatia (pronounced hi-pay-shia) who was infamously killed by a Christian mob in AD 415. I’ve written about her elsewhere.
 
The article was titled Evolution or Progress, and asks “what the difference is and why it matters”. Not really a question, though one is implied. Basically, he’s arguing that evolution is not teleological (though he doesn’t use that term). Instead, he discusses the erroneous tendency of most people to associate evolution with progress, which is a symptom not just of anthropocentrism but also of our religious heritage. I think these are actually 2 different things, while admitting that, for many people, they are connected.
 
I want to start by challenging his premise: I’d argue the association of evolution with progress is not as erroneous as it appears, depending on how one defines or describes progress. My dictionary has 2 definitions:
 
1: forward or onward movement towards a destination
 

2: development towards an improved or more advanced condition
 

By the first definition, I think he’s right, but not by the second. If one looks at the historical evidence, going back not just millions but billions of years, the increase in complexity and sheer diversity, from the simplest cells to animals with brains, surely applies to definition 2.
 
To emphasise my point, I’ll quote Neiblum’s essay, where he provides his own definition of progress:
 
A)    An ideal or goal – literacy, or justice, for example.
B)    A gap between this ideal and the real-world state of affairs.
C)    A process of movement – individually, collectively, or even species-wide – towards that goal or ideal.
 
We can see these are not the same ideas. Evolution is neither purposeful nor intentional, it has no ideal, aim, or end-point.

 
One can see how this aligns with my dictionary definition 1, but not definition 2.
 
To be fair to Neiblum, he does address my criticism, inasmuch as he acknowledges that evolution results in increased complexity. But he also points out that so-called primitive lifeforms (my words, not his), like insects, crocodiles and sharks (and other so-called living fossils), still thrive. But the reason they thrive is that they have become part of an ecosystem (the same with gut bacteria, for example). Evolution never applies to a species in isolation; just consider the fact that none of us could exist without plants processing the carbon dioxide we expire, as part of the extraordinary process called photosynthesis.
 
Neiblum then goes on to discuss the role of religion, and specifically the Christian religion, in distorting or exaggerating (again, my terms) our anthropocentrism. But I’ll return to that specific point later.
 
I would like to point out that humans are not the only examples of exceptionalism in the animal kingdom. To give just 2 examples: the peregrine falcon can literally fly through the air at 200mph (in a dive); and the sperm whale can dive down to 2-3km and stay underwater for up to 45 mins.
 
But human exceptionalism is unusual and unique in the sense that, to quote Paul Davies, ‘We can unravel the plot’. I admit I get annoyed when people dismiss our unique ability to comprehend the universe to the degree and extent that we’ve managed to achieve. I recently watched an excellent series titled HUMAN, presented by palaeoanthropologist Ella Al-Shamahi, which is very extensive and comprehensive for a lay audience. One of the things that stood out was how ‘breakthroughs’ (for want of a better term) in cognitive abilities seem to happen virtually simultaneously in different parts of the globe; the use of written script being a good example.
 
So our cultural evolution has tended to happen in jumps. And, in this sense, it is synonymous with progress, a point with which Neiblum would undoubtedly agree. In his next-to-last sentence, he states that evolution has endowed us with the unique capacity to progress (emphasis in the original) using “evidence, reason and science”.
 
Personally, I think it is our unique grasp of mathematics that has been the most salient feature in propelling our advance in knowledge and comprehension of the natural world. To quote Eugene Wigner:
 
It is difficult to avoid the impression that a miracle confronts us here… or the two miracles of the existence of laws of nature and of the human mind’s capacity to divine them.
 
This was from his famous essay, The Unreasonable Effectiveness of Mathematics in the Natural Sciences. And this is arguably the only reason, as Davies asserts, that ‘we can unravel the plot’.
 
In my last post, I briefly talked about language, as well as imagination. Now, I actually believe that imagination is not unique to humans. It allows us to mentally time-travel, and I suspect other creatures can do that as well, which we see in their ability to co-operate and act towards a goal. Implicit in that ability is the capacity to imagine the goal before it’s actualised. To the extent that other creatures can do this, I contend they have free will.
 
But humans take imagination to another level, because we can mentally time-travel to worlds that don’t even exist, which we do every time we read or watch a story. And this entails that other superpower we have, which is language. To quote from my last post:

…we all think in a language, which we learn from our milieu at an extremely early age, suggesting we are ‘hardwired’ genetically to do this. Without language, our ability to grasp and manipulate abstract concepts, which is arguably a unique human capability, would not be possible. Basically, I’m arguing that language for humans goes well beyond just an ability to communicate desires and wants, though that was likely its origin.
 
And this is the thing: these abstract concepts include mathematical equations, scientific theories and engineering designs (including, by the way, the theory of evolution, which is central to this discussion). But more than this, we ‘download’ this language from generation to generation at an age when these concepts are well beyond our cognitive abilities. And it’s this unique facility that has allowed us to create entire civilisations and build the scientific enterprise that we all depend upon and take for granted (if you’re reading this).
 
I’ve spent a lot of time belabouring a point, because my arguments thus far have nothing to do with religious beliefs.
 
Religion implies that there is a purpose and we are central to that purpose. I think purpose has evolved, and I’m unsure if Neiblum would agree. I’ve argued before that the Universe appears to be pseudo-teleological or quasi-teleological in that there is no end goal, yet the very mathematical laws that we have the cognitive capacity to ‘unravel’ seem to allow for a goal, even if it’s open-ended. Possibly, I’m subconsciously influenced by my ability and passion as a storyteller, because I prefer to write a story without knowing what the ending is. I’m not the only writer who does this, though there are others who won’t start a story without knowing the ending in advance.
 
I’ve always struggled with the concept of a ‘creator’ God, which is not dissimilar to the more recent belief that we live in a simulation. In a recent episode of an Australian satirical programme called The Weekly with Charlie Pickering, one of his guests, Rhys Nicholson, did a skit on this, even citing Nick Bostrom, who is an academic proponent, while also comparing it to the widely held belief that there is a God pulling the strings behind the scenes (metaphorically speaking). Paul Davies, in his book The Goldilocks Enigma (highly recommended), also argues that the ‘simulation hypothesis’ is just a variation on ID (Intelligent Design).
 
I also like to cite Jordan Ellenberg’s excellent book, How Not to Be Wrong: The Power of Mathematical Thinking, where, among many other contentious topics, he discusses the ‘Bayesian inference of the existence of God’, whereby he shows that the Universe being a computer simulation has at least the same probability as its being the result of divine intervention.
 
The thing that has struck me about all the Gods in our combined histories is that they all have cultural origins, including the Abrahamic God, and they are all anthropomorphic. I’ve long agreed with 19th Century philosopher, Ludwig Feuerbach, that ‘God is the outward projection of man’s inner nature’. God is something internal not external, though, of course, that doesn’t rule out an external source.
 
Personally, I’m attracted to the Hindu concept of Brahman (as was Schrödinger) as a collective mind that could be the end result of consciousness rather than its progenitor. I’m not proposing this as a definitive resolution, but it would provide a goal of the kind Neiblum considers anathema to science.
 
All that aside, I think there is another aspect to seeing ourselves as ‘exceptional’ in the animal kingdom here on Earth, because it gives us a special responsibility. We are effectively the guardians of spaceship Earth by default. However, it’s a double-edged sword: we have the unique capability either to destroy it or to safeguard it. Which one we do depends on all of us.