Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Saturday, 17 November 2012

Empirical data confirms climate change already happening for a century


Statistically, Australia’s temperature has risen approximately 1°C in the last 110 years and the oceans have risen 15-17 cm in the same period. Spring comes about 2 weeks earlier. If you don’t believe me then watch this special episode of Catalyst, aired last week on the ABC: scientific evidence of climate-change, not a left-wing conspiracy. And if it’s happened here then it’s happened all over the world.

Climate change is only one symptom of humanity’s unprecedented evolutionary success. The reason so many people, including numerous politicians, are in denial over this world-wide phenomenon is that it’s another consequence of infinite economic growth: the paradigm we are all addicted to, irrespective of political persuasion. Europe is currently finding out what happens when we reach the limits of consumerism, and it will eventually happen everywhere sometime in the 21st Century. At some point we can no longer rely on a burgeoning next generation to maintain non-sustainable economic growth, yet that’s the great denial; an even greater denial than the belief that climate-change is a global, scientifically promoted conspiracy.

Wednesday, 31 October 2012

This is torture and a violation of human rights


About 6 months ago I talked about the need to change cultural attitudes towards girls from so-called traditional cultures – specifically, to outlaw arranged marriages without the girl’s consent.

The practice of female genital mutilation, erroneously called female circumcision by those who practice it, is arguably even more barbaric and more confronting to Western cultural norms. Even though it is illegal in Australia, many people are reluctant to report it, such is the cultural divide between those who practice it and those who find it abhorrent.

If ever there was an argument to be made against moral relativism, this would have to be one of the most compelling examples. It also highlights how morality for most people, and most societies, is not based on objective criteria, as we like to contend, but on long-accepted social norms.

Preventing this practice requires more than legal prosecution; it requires a cultural change of attitude. Fundamentally, it needs to be recognised for what it is – the torture of a pre-adolescent or adolescent girl. As demonstrated in this video, the people who perpetrate these crimes justify their actions as fulfilling the girl’s destiny. Like most changes to social norms, this will ultimately be a generational change within the communities who practice it, not just a change in the law.

Wednesday, 10 October 2012

The genius of differential calculus


Newton and Leibniz are both credited as independent ‘inventors’ of calculus but I would argue that it was at least as much discovery as invention, because, at its heart, differential calculus delivers the seemingly impossible.

Calculus was arguably the greatest impetus to physics in the scientific world. Newton’s employment of calculus to give mathematical definition and precision to motion was arguably as significant to the future of physics as his theory of universal gravitation. Without calculus, we wouldn’t have Einstein’s Theory of Relativity and we wouldn’t have Schrödinger’s equation that lies at the heart of quantum mechanics. Engineers, the world over, routinely use calculus in the form of differential equations to design most of the technological tools and infrastructure we take for granted.

Differential calculus is best understood in its application to motion in physics and to tangents in Cartesian analytic geometry. In both cases, we have mathematics describing a vanishing entity, and this is what gives calculus its power, and also makes it difficult for people to grasp, conceptually.

Calculus can freeze motion, so that at any particular point in time, knowing an object’s acceleration (like a free-falling object under gravity, for example) we can determine its instantaneous velocity, and knowing its velocity we can determine its instantaneous position. It’s the word ‘instantaneous’ that gives the game away.

In reality, there is no ‘instantaneous’ moment of time. If you increase the shutter speed of a camera, you can ‘freeze’ virtually any motion, from a cricket ball in mid-flight (baseball for you American readers) to a bullet travelling faster than the speed of sound. But the point is that, no matter how fast the shutter speed, there is still a ‘duration’ that the shutter remains open. It’s only when one looks at the photographic record, that one is led to believe that the object has been captured at an instantaneous point in time.

Calculus does something very similar in that it takes a shorter and shorter sliver of time to give an instantaneous velocity or position.

I will take the example out of Keith Devlin’s excellent book, The Language of Mathematics: Making the Invisible Visible, of a car accelerating along a road:

x = 5t² + 3t

The above numbers are made up, but the formulation is correct for a vehicle under constant acceleration. If we want to know the velocity at a specific point in time we differentiate it with respect to time (t).

The differentiated equation becomes dx/dt, which means that we differentiate the distance (x) with respect to time (wrt t).

To get an ‘instantaneous’ velocity, we take smaller and smaller distances over smaller and smaller durations. So dx/dt is an incrementally small distance divided by an incrementally small time, so mathematically we are doing exactly the same as what the camera does.

But dx occurs between 2 positions, x₁ and x₂, where dx = x₂ − x₁

This means:  x₂ occurs a duration dt later than x₁.

Therefore  x₂ = 5(t + dt)² + 3(t + dt)

And x₁ = 5t² + 3t

Therefore  dx = x₂ − x₁ = 5(t + dt)² + 3(t + dt) − (5t² + 3t)

If we expand this we get:  5t² + 10t·dt + 5dt² + 3t + 3dt − 5t² − 3t

{Remember: (t + dt)² = t² + 2t·dt + dt²}

Therefore dx/dt = 10t·dt/dt + 5dt²/dt + 3dt/dt

Therefore dx/dt = 10t + 3 + 5dt

The sleight-of-hand that allows calculus to work is that the dt term on the RHS disappears so that dx/dt gives the instantaneous velocity at any specified time t. In other words, by making the duration virtually zero, we achieve the same result as the recorded photo, even though zero duration is physically impossible.
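
To make this concrete, here’s a small numerical sketch (my own illustration, not Devlin’s) showing the difference quotient homing in on 10t + 3 as dt shrinks:

```python
# Difference quotient for x = 5t² + 3t: as dt shrinks, dx/dt -> 10t + 3.

def x(t):
    return 5 * t**2 + 3 * t

t = 2.0  # an arbitrary moment in time
for dt in (0.1, 0.01, 0.001, 0.0001):
    print(dt, (x(t + dt) - x(t)) / dt)   # ~23.5, 23.05, 23.005, 23.0005

print("exact:", 10 * t + 3)   # 23.0; the quotient above equals 10t + 3 + 5dt
```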

This example can be generalised for any polynomial: to differentiate an equation of the form
y = axᵇ

dy/dx = baxᵇ⁻¹  which is exactly what I did above:

If y = 5x² + 3x

Then dy/dx = 10x + 3
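
For anyone who wants to check this symbolically, a couple of lines of Python will do it (a sketch of mine, assuming the sympy library is installed):

```python
# Symbolic check of the power rule applied to the worked example.
import sympy as sp

x = sp.symbols('x')
print(sp.diff(5 * x**2 + 3 * x, x))   # prints 10*x + 3
```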

The most common example given in text books (and even Devlin’s book) is the tangent of a curve, partly because one can demonstrate it graphically.

If I were to use an equation of the form y = ax² + bx + c and differentiate it, the outcome would be exactly the same as above, mathematically. But, in this case, one takes a smaller and smaller increment of x, which corresponds to a smaller and smaller increment of y or f(x). (Note that f(x) = y; f(x) and y are synonymous in this context.) The slope of the tangent is dy/dx for smaller and smaller increments dx. But at the point where the tangent’s slope is calculated, dx becomes infinitesimal. In other words, dx ultimately disappears, just like dt disappeared in the above worked example.

Devlin also demonstrates how integration (integral calculus), which in Cartesian analytic geometry calculates the area under a curve f(x), is the inverse function of differential calculus. In other words, for a polynomial, one just does the reverse procedure. If one differentiates an equation and then integrates it one simply gets the original equation back, and, obviously, vice versa.
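
Again, a minimal sketch with sympy (assuming it’s installed) shows the round trip:

```python
# Integrate, then differentiate: you get the original polynomial back
# (sympy omits the constant of integration).
import sympy as sp

x = sp.symbols('x')
f = 5 * x**2 + 3 * x
F = sp.integrate(f, x)    # 5*x**3/3 + 3*x**2/2
print(sp.diff(F, x))      # prints 5*x**2 + 3*x, the original f
```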

Saturday, 29 September 2012

2 different views on physics and reality


Back in July I reviewed Jim Holt’s book, Why Does the World Exist? (2012), where he interviews various intellects, including David Deutsch, who wrote The Fabric of Reality (1997), a specific reference point in Holt’s interview. I’ve since read Deutsch’s book myself and reviewed it on Amazon UK. I gave it a favourable review, as it’s truly thought-provoking, which is not to say I agree with his ideas.

I followed up Deutsch’s book with John D. Barrow’s  New Theories of Everything (originally published 1990, 2nd edition published 2007) with ‘New’ being added to the title of the 2nd edition. The 2 books cover very similar territory, yet could hardly be more different. In particular, Deutsch’s book contains a radical vision of reality based on the multiple-worlds interpretation of quantum mechanics, and becomes totally fantastical in its closing chapter, where he envisages a world of infinite subjective time in the closing moments of the universe that, to all intents and purposes, represents heaven.

He took this ‘vision’ from Frank J. Tipler, who, as it turns out, co-wrote a book with Barrow called The Anthropic Cosmological Principle (1986). Barrow also references Tipler in New Theories of Everything, not only in regard to the possibility of life forms, or ‘information processing systems’, existing in the final stages of the Universe, but in relation to everything in the Universe possibly being simulated in a computer. As Barrow points out, there is a problem with this, however, as not everything is computable by a Turing machine.

Leaving aside the final chapter, Deutsch’s book is a stimulating read, and whilst he failed to convince me of his world-view, I wouldn’t ridicule him – he’s not a crank. Deutsch likes to challenge conventional wisdom, even turn it on its head. For example, he criticises the view that there is a hierarchy of ‘truth’ from mathematics to science to philosophy. To support his iconoclastic view, he provides a ‘proof’ that solipsism is false: it’s impossible for more than one person to be solipsistic in a given world. Bertrand Russell gave the anecdote of a woman philosopher writing to him and claiming she was a solipsist, then complaining she’d met no others. Deutsch uses a different example, but the contradictory outcome is the same – there can only be one solipsist in a solipsistic philosophy. He claims that the proof against solipsism is more definitive than any scientific theory. However, solipsism does occur in dreams, which we all experience, so there is one environment where solipsism is ‘true’.

In another part of the book, he points to Godel’s Incompleteness Theorem as evidence that mathematical ‘truths’ are contingent, which undermines the conventional epistemological hierarchy. Interestingly, Barrow also discusses Godel’s famous Theorem in depth, albeit in a different context, whereby he muses on what impact it has on scientific theories. Barrow concludes, if I interpret him correctly, that the bases of mathematical truths and scientific truths, though related via mathematical ‘laws of nature’, are different. Scientific truths are ultimately dependent on evidence, whereas mathematical truths are ultimately dependent on logical proofs from axioms. Godel’s Theorem prescribes limits to the proofs from the axioms, but, contrary to Deutsch’s claim, mathematical ‘truths’ have a universality and dependability that scientific ‘truths’ have never attained thus far, and are unlikely to in the foreseeable future.

One suspects that Deutsch’s desire to overturn the epistemological hierarchy, even if only in certain cases, is to give greater authority to his many-worlds interpretation of quantum mechanics, as he presents this view as if it’s unassailable to rational thought. For Deutsch, this is the ‘reality’ and Einstein’s space-time is merely an approximation to reality on a large scale. It has to be said that the many-worlds interpretation of quantum mechanics is becoming more popular, but it’s not definitive and the ‘evidence’ of interference between these worlds, manifest in quantum experiments, is not evidence of the worlds themselves. At the end of the day, it’s evidence that determines scientific ‘truth’.

Deutsch begins his book with a discussion on Popper’s philosophy of epistemology and how it differs from induction. Induction, according to Deutsch, simply examines what has happened in the past and forecasts it into the future. In other words, past experimental results predict future experimental results. However, Deutsch argues, quite compellingly, that the explanatory power of a theory has more authority and more weight than just induction. Kepler’s mathematical formulation of planetary orbits gives us a mechanism of induction but Newton’s Theory of Gravity gives us an explanation. It’s obvious that Deutsch believes that Hugh Everett’s many-worlds interpretation of quantum mechanics is a better explanation than any other rival interpretation. My contention is that quantum rival ‘theories’ are more philosophically based than science-based, so they are not theories per se, as there are no experiments that can separate them.

It was towards the end of his book, before he took off in a flight of speculative fancy, that it occurred to me that Deutsch had managed to weave all aspects of the universe – space-time, knowledge, human free will, chaotic and quantum phenomena, human and machine computation – into an explanatory model with quantum multiple-worlds at its heart. He had encompassed this world-view so completely with his ‘4 strands of reality’ – quantum mechanics, epistemology, evolution, computation – that he’s convinced there can be no other explanation; therefore the quantum multiple-worlds must be ‘reality’.

In fact, Deutsch believes that his thesis is so all-encompassing that even chaotic phenomena can be explained as classical manifestations of quantum mechanics, even though the mathematics of chaos theory doesn’t support this. In all my reading, I’ve never come across another physicist who claims that chaotic phenomena have quantum mechanical origins.

Despite his emphasis on explanatory power, Deutsch makes no reference to Heisenberg's Uncertainty Principle or Planck’s constant, h. Considering how fundamental they are to quantum mechanics, a theory that fails to mention them, let alone incorporate them in its explanation, would appear to short-change us.

Deutsch does however explain the probabilities that are part and parcel of quantum calculations and predictions. They are simply the result of the ratio of universes giving one result over another. This implies that we are discussing a finite number of universes for every quantum interaction, though Deutsch doesn’t explicitly state this. Mathematically, I believe this could be the Achilles heel of his thesis: the quantum multiverse cannot be infinite yet its finiteness appears open-ended, not to mention indeterminable.

Quantum computing is an area where I believe Deutsch has some expertise, and it’s here that he provides one compelling argument for multiple worlds. To quote:

When a quantum factorization engine is factorizing a 250-digit number, the number of interfering universes will be of the order of 10⁵⁰⁰

Deutsch issues the challenge: how can this be done without multiple universes working in parallel? He explains that these 10⁵⁰⁰ universes are effectively identical except that each one is doing a different part of the calculation. There are also 10⁵⁰⁰ identical persons each getting the correct answer. So quantum computers, when they become standard tools, will be creating multiple universes complete with multiple human populations along with the infrastructure, worlds, galaxies and independent futures, all simultaneously calculating the same algorithm. In response to Deutsch’s challenge, I admit I don’t know, but I find his resolution incredible in the extreme (refer Addendum 2 below).

Those who have read my post on Holt’s book, will remember that he interviewed Roger Penrose as well as Deutsch (along with many other intellectual luminaries). Interestingly, Holt seemed to find Penrose’s Platonic mathematical philosophy more bizarre than Deutsch’s but based on what I’ve read of them both, I’d have to disagree. Deutsch also mentions Penrose and delineates where he agrees and disagrees. To quote again:

[Penrose] envisages a comprehensible world, rejects the supernatural, recognizes creativity as being central to mathematics, ascribes objective reality both to the physical world and to abstract entities, and involves an integration of the foundations of mathematics and physics. In all these respects I am on his side.

Where Deutsch specifically disagrees with Penrose is in Penrose’s belief that the human brain cannot be reduced to algorithms. In other words, it disobeys Turing’s universal principle (as interpreted by Deutsch) that everything in the universe can be simulated by a universal quantum Turing machine. (Deutsch, by the way, believes the brain is effectively a classical computer, not a quantum computer.) Deutsch points out that Penrose’s position is at odds with most physicists, yet I agree with Penrose on this salient point. I don’t believe the brain (human or otherwise) runs on algorithms. Deutsch sees this as a problem with Penrose’s world-view, as he’s unable to explain human thinking. However, I see it as a problem with Deutsch’s world-view, because, if Penrose is right, then Deutsch is the one who can’t explain it.

Barrow is a cosmologist and logically his book reflects this perspective. Compared to Deutsch’s book, it’s more science, less philosophy. But there is another fundamental difference, in tone if not content. Right from his opening words, Deutsch stakes his position in the belief that we can encompass more and more knowledge in fewer and fewer theories, so it is possible for one person to ‘understand’ everything, at least in principle. He readily acknowledges, however, that we will probably never ‘know’ everything. On the other hand, Barrow brings the reader down-to-earth with a lengthy discussion on the initial conditions of the universe, and how they are completely up for grabs based on what we currently know.

Barrow ends his particular chapter on cosmological initial conditions with an in-depth discussion on the evolution of cosmology from Newton to Einstein to Wheeler-De Witt, which leads to the Hartle-Hawking ‘no-boundary condition’ model of the universe. He points out that this is a radical theory, ‘proposed by James Hartle and Stephen Hawking for aesthetic reasons’, but it overcomes the divide between initial conditions and the laws of nature. Compared to Deutsch’s radical theses, it’s almost prosaic. It has the added advantage of overcoming theological-based initial conditions, allowing ‘…a Universe which tunnels into existence out of nothing.’

Logically, a book on ‘theories of everything’ must include string theory or M theory, yet it’s not Barrow’s strong suit. Earlier this year, I read Lee Smolin’s The Trouble With Physics, which gives a detailed history and critique of string theory, but I won’t discuss it here. Of course, it’s another version of ‘reality’ where ‘theory’ is yet to be given credence by evidence.

As I alluded to above, what separates Barrow from Deutsch is his cosmologist’s perspective. Even if we can finally grasp all the laws of Nature in some ‘Theory of Everything’, the outcomes are based on chance, which was once considered the sole province of gods, and, as Barrow argues, is the reason that the mathematics of chance and probability were not investigated earlier in our scientific endeavours. To quote Barrow:

…it is possible for a Universe like ours to be governed by a very small number of simple laws and yet display an unlimited number of complex states and structures, including you and me.

Of all the improbabilities, the most fundamental and consequential to our existence is the asymmetry between matter and antimatter of one billion and one to one billion. We know this, because the ratio of photons to protons in the Universe is two billion to one (the annihilation of a proton with an anti-proton creates 2 photons). It is sobering to consider that a billion to one asymmetry in the birth pangs of the Universe is the basis of our very existence.
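
The arithmetic behind those ratios is easy to verify (a sketch of my own, using Barrow’s round numbers):

```python
# For every 10⁹ + 1 matter particles there were 10⁹ antimatter particles;
# each matter/antimatter annihilation creates 2 photons.
matter, antimatter = 10**9 + 1, 10**9
photons = 2 * antimatter          # 2 billion photons
protons = matter - antimatter     # the 1 proton that survives
print(photons, ':', protons)      # 2000000000 : 1
```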

The final chapter in Barrow’s book is called Is pi really in the sky? This is an obvious allusion to mathematical Platonism and the entire chapter is a lengthy and in-depth discussion on the topic of mathematics and its relationship to reality. (Barrow has also authored a book called Pi in the Sky, which I haven’t read.) According to Barrow, Plato and Aristotle were the first to represent the dichotomy we still find today as to whether mathematics is discovered or invented. In other words, is it solely a product of the human mind or does it have an abstract existence independently of us and possibly the Universe? What we do know is that mathematics is the fundamental epistemological bridge between reality and us, especially when it comes to understanding Nature’s deepest secrets.

In regard to Platonism, Barrow has this to say:

It elevates mathematics close to the status of God... just alter the word ‘God’ to ‘mathematics’ wherever it appears and it makes pretty good sense. Mathematics is part of the world, and yet transcends it. It must exist before and after the Universe.

In the next paragraph he says:

Most scientists and mathematicians operate as if Platonism is true, regardless of whether they believe that it is. That is, they work as though there were an unknown realm of truth to be discovered.

Neither of these statements is definitive, and it should be pointed out that Barrow discusses all aspects of mathematical philosophies in depth.

I think that consciousness will never be reduced to mathematics, yet it is consciousness that makes mathematics manifest. Obviously, some argue that it is consciousness alone that creates mathematics, and that Platonism is a remnant of numerology and mysticism. Whichever point of view one takes, it is mathematics that makes the Universe comprehensible. I’m a Platonist for both of the reasons given above. I don’t think the Universe is a giant computer, but I do think that mathematics determines, to a large extent, what realities we can have.

Despite my criticisms and disagreements, I concede that Deutsch is much cleverer than me. His book is certainly provocative, but I think it’s philosophically flawed in all the areas I discuss above. On the other hand, the more I read of Barrow, the more I find myself aligning with his cosmological world-view; in particular, his apparent attraction to the Anthropic Principle. He makes the point that the improbability of Nature’s critical constants taking the values they do is less important than the necessity of those values in providing conditions for observers to evolve. This does not invoke teleology – as he’s quick to point out – it’s just a necessary condition if intelligent life is to evolve.

You’ve no doubt noticed that I don’t really address the question in my heading. Deutsch’s multiverse and String Theory are two prevalent, if also extreme, versions of reality. String Theory claims that the Universe actually has 10 dimensions (9 of space plus 1 of time) rather than the 3 of space we experience, and predicts 10⁵⁰⁰ possible universes, not to be confused with the quantum multiverse. 20th Century physics has revealed, through quantum mechanics and Einstein’s theories of relativity, that ‘reality’ is more ‘strange’ than we imagine. I often think that Kant was prescient, in ways he could not have anticipated, when he said that we may never know the ‘thing-in-itself’.

It is therefore apposite to leave you with Barrow’s last paragraph in his book:

There is no formula that can deliver all truth, all harmony, all simplicity. No Theory of Everything can ever provide total insight. For, to see through everything, would leave us seeing nothing.

Barrow loves to fill his books with quotable snippets, but I like this one in particular:

Mathematics is the part of science you could continue to do if you woke up tomorrow and discovered the universe was gone. Dave Rusin.

Addendum 1: I've since read John Barrow's book, Pi in the Sky, and cover it here

Addendum 2: I've since read Philip Ball's book, Beyond Weird, where he challenges Deutsch's assertion that it requires multiple worlds to explain quantum computers. Quantum computers are dependent on entangled particles, which is not the same thing. Multiple entities in quantum mechanics don't really exist (according to Ball), just multiple probabilities, only one of which is ever observed. In Deutsch's theory, that 'one' is in the universe you happen to inhabit, whereas all the others exist in other universes that you are not consciously aware of.

Sunday, 2 September 2012

This one is for the climate-change sceptics


Notice I use the English spelling and not the American (skeptic) for those who may think I can’t spell (although I’m not infallible).

Not so long ago, North Carolina passed a bill to ‘bar state agencies from considering accelerated sea level rise in decision-making until 1 July 2016’. Apparently, this is a watered-down version of the original bill, which I believe didn’t have the 4-year moratorium. I learnt about it because it was reported in the Letters section of New Scientist. What worries me is the mentality behind this: the belief that we can legislate against nature. In other words, if scientists start making predictions about sea-level rise, it’s forbidden. The legislation doesn’t state that sea-level rise can’t happen, but that any science-based predictions must be ignored.

This mentality also exists in Australia, where there seems to be a tacit belief that we can vote against climate-change politically. There is a serious disconnect here: nature doesn’t belong to any political party; it’s not a constituency. The current leader of the opposition in Australian Federal politics, Tony Abbott, won his position (in the party room) over the incumbent on this very issue. The incumbent leader, Malcolm Turnbull, felt so strongly about the moral issue of human-induced climate-change that he put his leadership on the line and lost, by 1 vote (in 2009).

This interview with Climate Central's chief climatologist, Heidi Cullen, from Princeton University, helps to put this issue into perspective. We don’t live at the poles where evidence of climate change is most apparent. The signs are there and we need to trust the people who can read the signs, whom we call scientists. Malcolm Turnbull, who lost his job over this, made the point that there is something wrong with a society when we can't trust our scientists – they are our brains trust.

In Australia, the sceptics argue that this is a global conspiracy by climatologists to keep themselves in jobs and maintain an influx of funding. In other words, as long as they keep maintaining that there is a problem, governments will keep giving them money, whereas, if they tell the ‘truth’, the funding will stop. This is so ludicrous one can’t waste words on it. Australian scientists working on climate-change have been sent death-threats, which demonstrates the mentality of the people who oppose it. Again, there is an irrationally held belief that if only scientists would write the right reports telling us climate-change is a furphy, then it won’t happen – the problem will simply go away.

Addendum: I learnt today (8 Sep 2012) that the NSW government has done something similar: revoked local council laws indicating coastal properties which may be subject to sea-level rise based on IPCC predictions.


Saturday, 18 August 2012

The Riemann Hypothesis: the most famous unsolved problem in mathematics


I’ve read 3 books on this topic: The Music of the Primes by Marcus du Sautoy, Prime Obsession by John Derbyshire and Stalking the Riemann Hypothesis by Dan Rockmore (and I originally read them in that order). They are all worthy of recommendation, but only John Derbyshire makes a truly valiant attempt to explain the mathematics behind the ‘Hypothesis’ (for laypeople) so it’s his book that I studied most closely.

Now it’s impossible for me to provide an explanation, for 2 reasons: one, I’m not mathematically equipped to do it; and two, this is a blog and not a book. So my intention is to try and instill some of the wonder that Riemann’s extraordinary, gravity-defying intuitive leap imparts to those who can faintly grasp its mathematical ramifications (like myself).

In 1859 (the same year that Darwin published On the Origin of Species), a young Bernhard Riemann (aged 32) presented a paper to the Berlin Academy as part of his acceptance as a ‘corresponding member’, titled “On the Number of Prime Numbers Less Than a Given Quantity”. The paper contains a formula that provides a definitive number called π (not to be confused with pi, the well-known transcendental number). In fact, I noticed that Derbyshire uses π(x) as a function in an attempt to make a distinction. As Derbyshire points out, it’s a demonstration of the limitations arising from the use of the Greek alphabet to provide mathematical symbols – they double up. So π(x) is the number of primes to be found below any positive Real number. Real numbers include rational numbers, irrationals and transcendental numbers, as well as integers. The formula is complex and its explication requires a convoluted journey into the realm of complex algebra, logarithms and calculus.
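
If you just want to see π(x) in action, Python’s sympy library provides it as primepi (an illustration of mine, assuming sympy is installed):

```python
# π(x), the prime-counting function: how many primes lie at or below x.
from sympy import primepi

print(primepi(100))     # 25
print(primepi(10**6))   # 78498
```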

Eratosthenes was one of the librarians at the famous Alexandria Library, around 230 BC and roughly 70 years after Euclid. He famously measured the circumference of the Earth to within 2% of its current figure (see Wikipedia) using the sun and some basic geometry. But he also came up with the first recorded method for finding primes, known as Eratosthenes’ Sieve. It’s so simple that it’s obvious once explained: leaving aside the number 1, take the first natural number (or integer), which is 2, then delete all numbers that are multiples of 2, which are all the even numbers. Then take the next number, 3, and delete all its multiples. The next number left standing is 5, and one just repeats the process over whatever group of numbers one is examining (like 100, for example) until you are left with all the primes less than 100. With truly gigantic numbers there are other methods, especially now we have computers that can grind out algorithms, but Eratosthenes demonstrates that scholars were fascinated by primes even in antiquity.
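
The sieve translates into a few lines of code (my sketch, not from any of the books mentioned):

```python
# Eratosthenes' Sieve: delete the multiples; whatever is left standing is prime.
def sieve(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False           # 0 and 1 are not prime
    for n in range(2, int(limit**0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False      # cross out every multiple of n
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve(100))   # the 25 primes up to 100, ending ..., 89, 97
```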

Euclid famously came up with a simple proof to show that there are an infinite number of primes, which, on the surface, seems a remarkable feat, considering it’s impossible to count to infinity. But it’s so simple that Stephen Fry was even able to explain it on his TV programme, QI. Assume you have found the biggest prime, then take all the primes up to and including that prime and multiply them all together. Then add 1. Obviously none of the primes you know can be factors of this number as they would all give a remainder of 1. Therefore the number is either a prime or can be factored by a prime that is higher than the ones you already know. Either way, there will always be a higher prime, no matter which one you select, so there must be an infinite number of primes.
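
Euclid’s argument is easy to illustrate numerically (a sketch of mine; note that the product-plus-one needn’t itself be prime):

```python
# Multiply the primes you 'know', add 1: none of them divides the result.
from math import prod

known = [2, 3, 5, 7, 11, 13]
n = prod(known) + 1
print(n)                         # 30031
print([n % p for p in known])    # [1, 1, 1, 1, 1, 1] – every remainder is 1
# 30031 = 59 × 509: not prime itself, but its factors are primes beyond 13.
```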

The thing about primes that has fascinated mathematicians for eons is that there appears to be no rhyme or reason to their distribution, except that they thin out – getting further apart as one goes to higher numbers. But even this is not strictly correct, because there appears to be an infinite number of twin primes: 2 primes separated by a single non-prime (which must be even for obvious reasons).

Back to Riemann’s paper and its 150-year-old legacy. Entailed in his formula is a formulation of the Zeta function. Richard Elwes provides a relatively succinct exposition in his encyclopaedic MATHS 1001, and I’m not even going to attempt to write it down here. The point is that the Zeta function gives complex roots to infinity. Most people know what a quadratic root is from high school maths. If you take the graph of a parabola like y = ax² + bx + c, then it crosses the x axis where y = 0. It can cross the x axis in 2 places, or just touch it in 1 place, or not cross it at all. The values of x that give us a 0 value of y are called the roots of the equation. As a polynomial goes up in degree, so does its number of roots. So a quadratic equation gives us 2 roots maximum, but a polynomial of degree 3 (including an x³ term) will give us 3 roots, and so on. Going back to the parabola, in the case where we don’t get any roots at all, it’s because we are trying to find square roots of negative numbers. However, if we use i (√−1), we get complex roots in the form of a + ib. (For a basic explanation see my Apr.12 post on imaginary numbers.) A trigonometric equation like sinθ can give us an infinite number of zeros, and so can the Zeta function.
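
As a quick aside (a sketch of mine, assuming the numpy library is available), a parabola that never crosses the x axis still has roots; they’re just complex:

```python
# The parabola y = x² + 1 never touches the x axis, yet has 2 complex roots.
import numpy as np

print(np.roots([1, 0, 1]))   # the two roots ±i, i.e. of the form a + ib
```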

If you didn’t follow that, don’t worry, the important point is that Riemann’s Hypothesis says that all the complex zeros of the Zeta function (to infinity) have Real part ½. So they are all of the form ½ + ib. Riemann wasn’t able to provide a proof for this and neither have the best mathematical minds since. The critical point is that if his Hypothesis is correct then so is his formula for finding an exact number of primes to any given number.
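
The zeros themselves can be inspected numerically with Python’s mpmath library (an illustration of mine, assuming mpmath is installed):

```python
# The first few non-trivial zeros of the Zeta function: each has Real part ½.
from mpmath import zetazero

for k in range(1, 4):
    print(zetazero(k))
# (0.5 + 14.134725...j), (0.5 + 21.022040...j), (0.5 + 25.010858...j)
```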

In the 150 years since, Riemann’s Hypothesis has found its way into many fields of mathematics, including Hermitian matrices, which have implications for quantum mechanics. The Zeta function is a formidable mathematical beast to the uninitiated, and its relationship to the distribution of the primes was first intimated by Euler. Riemann’s genius was to introduce complex numbers, then make the convoluted mental journey to demonstrate their pivotal role in providing an exact result. Even then, his fundamental conjecture was effectively based on a hunch. At the time he presented his paper, he had only calculated the first 3 non-trivial zeros (non-trivial means complex in this context) and computers have calculated them in the trillions since, yet we still have no proof. It’s known that they become chaotic at extremely high numbers (beyond the number of atoms in the universe), so it’s by no means certain that Riemann’s hypothesis is correct.

It would be a huge disappointment to most mathematicians if the Hypothesis were either falsified by proof or an exception were found through brute computation. Riemann gave us a formula that gives an accurate count of the primes (Derbyshire gives a worked example up to 1 million), but it’s dependent on the Hypothesis being correct to the specified values. It’s hard to imagine that this formula suddenly fails at some extremely high number that’s currently beyond our ken, yet it can’t be ruled out.

Marcus du Sautoy, in The Music of the Primes, contemplates the Riemann hypothesis in the context of Godel’s Incompleteness Theorem, which is germane to the entire edifice of mathematics. The primes have a history of providing hard-to-prove conjectures. Along with Riemann’s hypothesis, there is the twin prime conjecture I mentioned earlier and Goldbach’s conjecture, which states that every even number greater than 2 is the sum of 2 primes. These conjectures are also practical demonstrations of Turing’s halting problem concerning computers. If they are correct, a computer algorithm set to finding a counterexample will never stop, yet we can’t determine in advance whether it will or not; otherwise we’d know in advance whether the conjecture was true.
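
To make the halting-problem connection concrete, here’s a sketch of mine (assuming sympy is installed) of a program hunting for a Goldbach counterexample; if the conjecture is true, it never halts, and we can’t know that in advance:

```python
# Search for an even number > 2 that is NOT the sum of 2 primes.
from itertools import count
from sympy import isprime

for n in count(4, 2):   # every even number from 4 upwards
    if not any(isprime(p) and isprime(n - p) for p in range(2, n // 2 + 1)):
        print(n, "is a counterexample to Goldbach!")   # never reached so far
        break
```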

As du Sautoy points out, a corollary to Godel’s theorem is that there are limits to the proofs from any axioms we know at any time. In essence, there may be mathematical truths that the axioms cannot cover. The solution is to expand the axioms. In other words, we need to expand the foundations of our mathematics to extend our knowledge at its stratospheric limits. Du Sautoy speculates that the Riemann Hypothesis, along with these other examples, may be Godel’s Incompleteness Theorem in action.

Exploring the Riemann Hypothesis, even at the rudimentary level that I can manage, reinforces my philosophical Platonist view of mathematics. These truths exist independently of our investigations. There are an infinity of these Zeta zeros (we know that much), just as there are an infinity of primes, which means there will always exist mathematical entities that we can’t possibly know. But aside from that obvious fact, the relationship that exists between apparently obscure objects like Zeta zeros and the distribution of prime numbers is a wonder. Godel’s Theorem implies that no matter how much we learn, there will always be mathematical wonders beyond our ken.

Addendum: This is a reasonably easy-to-follow description of Riemann's famous Zeta function, plus lots more.