Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Wednesday, 10 October 2012

The genius of differential calculus


Newton and Leibniz are both credited as independent ‘inventors’ of calculus but I would argue that it was at least as much discovery as invention, because, at its heart, differential calculus delivers the seemingly impossible.

Calculus was arguably the greatest impetus to physics in the scientific world. Newton’s employment of calculus to give mathematical definition and precision to motion was as significant to the future of physics as his formulation of the law of universal gravitation. Without calculus, we wouldn’t have Einstein’s Theory of Relativity and we wouldn’t have Schrodinger’s equation that lies at the heart of quantum mechanics. Engineers, the world over, routinely use calculus in the form of differential equations to design most of the technological tools and infrastructure we take for granted.

Differential calculus is best understood in its application to motion in physics and to tangents in Cartesian analytic geometry. In both cases, we have mathematics describing a vanishing entity, and this is what gives calculus its power, and also makes it difficult for people to grasp, conceptually.

Calculus can freeze motion, so that at any particular point in time, knowing an object’s acceleration (like a free-falling object under gravity, for example) we can determine its instantaneous velocity, and knowing its velocity we can determine its instantaneous position. It’s the word ‘instantaneous’ that gives the game away.

In reality, there is no ‘instantaneous’ moment of time. If you increase the shutter speed of a camera, you can ‘freeze’ virtually any motion, from a cricket ball in mid-flight (baseball for you American readers) to a bullet travelling faster than the speed of sound. But the point is that, no matter how fast the shutter speed, there is still a ‘duration’ that the shutter remains open. It’s only when one looks at the photographic record, that one is led to believe that the object has been captured at an instantaneous point in time.

Calculus does something very similar in that it takes a shorter and shorter sliver of time to give an instantaneous velocity or position.

I will take the example, from Keith Devlin’s excellent book, The Language of Mathematics: Making the Invisible Visible, of a car accelerating along a road:

x = 5t² + 3t

The above numbers are made up, but the formulation is correct for a vehicle under constant acceleration. If we want to know the velocity at a specific point in time we differentiate it with respect to time (t).

The derivative is written dx/dt, which means that we differentiate the distance (x) with respect to time (t).

To get an ‘instantaneous’ velocity, we take smaller and smaller distances over smaller and smaller durations. So dx/dt is an incrementally small distance divided by an incrementally small time, so mathematically we are doing exactly the same as what the camera does.

But dx occurs between 2 positions, x₁ and x₂, where dx = x₂ − x₁

This means that x₂ occurs a duration dt later than x₁.

Therefore  x₂ = 5(t + dt)² + 3(t + dt)

And x₁ = 5t² + 3t

Therefore  dx = x₂ − x₁ = 5(t + dt)² + 3(t + dt) − (5t² + 3t)

If we expand this we get:  5t² + 10t dt + 5dt² + 3t + 3dt − 5t² − 3t

{Remember: (t + dt)² = t² + 2t dt + dt²}

The 5t² and 3t terms cancel, leaving dx = 10t dt + 5dt² + 3dt

Therefore dx/dt = 10t dt/dt + 5dt²/dt + 3 dt/dt

Therefore dx/dt = 10t + 3 + 5dt

The sleight-of-hand that allows calculus to work is that the remaining dt term on the RHS vanishes as dt shrinks towards zero, so that dx/dt gives the instantaneous velocity at any specified time t. In other words, by making the duration virtually zero, we achieve the same result as the recorded photo, even though zero duration is physically impossible.
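For anyone who wants to see this limiting process at work numerically, here is a minimal sketch (in Python, purely my choice for illustration) that evaluates the difference quotient for the same equation with smaller and smaller values of dt; the answer creeps towards 10t + 3, exactly as the algebra predicts.

```python
# Difference quotient for x = 5t^2 + 3t: as dt shrinks, dx/dt approaches 10t + 3.

def x(t):
    return 5 * t**2 + 3 * t      # distance as a function of time

t = 2.0                          # pick any moment in time
for dt in (0.1, 0.01, 0.001, 0.0001):
    velocity = (x(t + dt) - x(t)) / dt
    print(f"dt = {dt:<7} dx/dt ~ {velocity:.4f}")   # 23.5, 23.05, 23.005, ...

print("exact instantaneous velocity:", 10 * t + 3)  # 23.0 at t = 2
```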

This example can be generalised for any polynomial: to differentiate an equation of the form, 
y = axᵇ

dy/dx = b·a·xᵇ⁻¹  which is exactly what I did above:

If y = 5x² + 3x

Then dy/dx = 10x + 3

The most common example given in textbooks (and even Devlin’s book) is the tangent to a curve, partly because one can demonstrate it graphically.

If I were to use an equation of the form y = ax² + bx + c, and differentiate it, the outcome would be exactly the same as above, mathematically. But, in this case, one takes a smaller and smaller increment of x, which corresponds to a smaller and smaller increment of y or f(x). (Note that f(x) = y, or f(x) and y are synonymous in this context). The slope of the tangent is dy/dx for smaller and smaller increments of dx. But at the point where the tangent’s slope is calculated, dx becomes infinitesimal. In other words, dx ultimately disappears, just like dt disappeared in the above worked example.

Devlin also demonstrates how integration (integral calculus), which in Cartesian analytic geometry calculates the area under a curve f(x), is the inverse operation of differentiation. In other words, for a polynomial, one just does the reverse procedure. If one differentiates an equation and then integrates it, one simply gets the original equation back (give or take a constant of integration), and, obviously, vice versa.
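As a quick check of that inverse relationship, here is a short sketch using the sympy library (my assumption; any computer algebra system would do) to differentiate the example polynomial and then integrate the result back.

```python
# Differentiate x = 5t^2 + 3t, then integrate the result: we recover the
# original polynomial (sympy drops the arbitrary constant of integration).
import sympy as sp

t = sp.symbols('t')
x = 5 * t**2 + 3 * t

dxdt = sp.diff(x, t)            # 10*t + 3
back = sp.integrate(dxdt, t)    # 5*t**2 + 3*t

print(dxdt, back, sp.simplify(back - x) == 0)   # prints: 10*t + 3 5*t**2 + 3*t True
```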

Saturday, 29 September 2012

2 different views on physics and reality


Back in July I reviewed Jim Holt’s book, Why Does the World Exist? (2012), where he interviews various intellects, including David Deutsch, who wrote The Fabric of Reality (1997), a specific reference point in Holt’s interview. I’ve since read Deutsch’s book myself and reviewed it on Amazon UK. I gave it a favourable review, as it’s truly thought-provoking, which is not to say I agree with his ideas.

I followed up Deutsch’s book with John D. Barrow’s  New Theories of Everything (originally published 1990, 2nd edition published 2007) with ‘New’ being added to the title of the 2nd edition. The 2 books cover very similar territory, yet could hardly be more different. In particular, Deutsch’s book contains a radical vision of reality based on the multiple-worlds interpretation of quantum mechanics, and becomes totally fantastical in its closing chapter, where he envisages a world of infinite subjective time in the closing moments of the universe that, to all intents and purposes, represents heaven.

He took this ‘vision’ from Frank J. Tipler, who, as it turns out, co-wrote a book with Barrow called, The Anthropic Cosmological Principle (1986). Barrow also references Tipler in New Theories of Everything, not only in regard to the possibility of life forms, or ‘information processing systems’, existing in the final stages of the Universe, but in relation to everything in the Universe being possibly simulated in a computer. As Barrow points out there is a problem with this, however, as not everything is computable by a Turing machine.

Leaving aside the final chapter, Deutsch’s book is a stimulating read, and whilst he failed to convince me of his world-view, I wouldn’t ridicule him – he’s not a crank. Deutsch likes to challenge conventional wisdom, even turn it on its head. For example, he criticises the view that there is a hierarchy of ‘truth’ from mathematics to science to philosophy. To support his iconoclastic view, he provides a ‘proof’ that solipsism is false: it’s impossible for more than one person to be solipsistic in a given world. Bertrand Russell gave the anecdote of a woman philosopher writing to him and claiming she was a solipsist, then complaining she’d met no others. Deutsch uses a different example, but the contradictory outcome is the same – there can only be one solipsist in a solipsistic philosophy. He claims that the proof against solipsism is more definitive than any scientific theory. However, solipsism does occur in dreams, which we all experience, so there is one environment where solipsism is ‘true’.

In another part of the book, he points to Godel’s Incompleteness Theorem as evidence that mathematical ‘truths’ are contingent, which undermines the conventional epistemological hierarchy. Interestingly, Barrow also discusses Godel’s famous Theorem in depth, albeit in a different context, whereby he muses on what impact it has on scientific theories. Barrow concludes, if I interpret him correctly, that the basis of mathematical truths and scientific truths, though related via mathematical ‘laws of nature’, are different. Scientific truths are ultimately dependent on evidence, whereas mathematical truths are ultimately dependent on logical proofs from axioms. Godel’s Theorem prescribes limits to the proofs from the axioms, but, contrary to Deutsch’s claim, mathematical ‘truths’ have a universality and dependability that scientific ‘truths’ have never attained thus far, and are unlikely to in the foreseeable future.

One suspects that Deutsch’s desire to overturn the epistemological hierarchy, even if only in certain cases, is to give greater authority to his many-worlds interpretation of quantum mechanics, as he presents this view as if it’s unassailable to rational thought. For Deutsch, this is the ‘reality’ and Einstein’s space-time is merely an approximation to reality on a large scale. It has to be said that the many-worlds interpretation of quantum mechanics is becoming more popular, but it’s not definitive and the ‘evidence’ of interference between these worlds, manifest in quantum experiments, is not evidence of the worlds themselves. At the end of the day, it’s evidence that determines scientific ‘truth’.

Deutsch begins his book with a discussion on Popper’s philosophy of epistemology and how it differs from induction. Induction, according to Deutsch, simply examines what has happened in the past and forecasts it into the future. In other words, past experimental results predict future experimental results. However, Deutsch argues, quite compellingly, that the explanatory power of a theory has more authority and more weight than just induction. Kepler’s mathematical formulation of planetary orbits gives us a mechanism of induction but Newton’s Theory of Gravity gives us an explanation. It’s obvious that Deutsch believes that Hugh Everett’s many-worlds interpretation of quantum mechanics is a better explanation than any other rival interpretation. My contention is that quantum rival ‘theories’ are more philosophically based than science-based, so they are not theories per se, as there are no experiments that can separate them.

It was towards the end of his book, before he took off in a flight of speculative fancy, that it occurred to me that Deutsch had managed to bring all aspects of the universe – space-time, knowledge, human free will, chaotic and quantum phenomena, human and machine computation – into an explanatory model with quantum multiple-worlds at its heart. He had encompassed this world-view so completely with his ‘4 strands of reality’ – quantum mechanics, epistemology, evolution, computation – that he’s convinced that there can be no other explanation, therefore the quantum multiple-worlds must be ‘reality’.

In fact, Deutsch believes that his thesis is so all-encompassing that even chaotic phenomena can be explained as classical manifestations of quantum mechanics, even though the mathematics of chaos theory doesn’t support this. In all my reading, I’ve never come across another physicist who claims that chaotic phenomena have quantum mechanical origins.

Despite his emphasis on explanatory power, Deutsch makes no reference to Heisenberg's Uncertainty Principle or Planck’s constant, h. Considering how fundamental they are to quantum mechanics, a theory that fails to mention them, let alone incorporate them in its explanation, would appear to short-change us.

Deutsch does however explain the probabilities that are part and parcel of quantum calculations and predictions. They are simply the result of the ratio of universes giving one result over another. This implies that we are discussing a finite number of universes for every quantum interaction, though Deutsch doesn’t explicitly state this. Mathematically, I believe this could be the Achilles heel of his thesis: the quantum multiverse cannot be infinite yet its finiteness appears open-ended, not to mention indeterminable.

Quantum computing is an area where I believe Deutsch has some expertise, and it’s here that he provides one compelling argument for multiple worlds. To quote:

When a quantum factorization engine is factorizing a 250-digit number, the number of interfering universes will be of the order of 10⁵⁰⁰

Deutsch issues the challenge: how can this be done without multiple universes working in parallel? He explains that these 10⁵⁰⁰ universes are effectively identical except that each one is doing a different part of the calculation. There are also 10⁵⁰⁰ identical persons each getting the correct answer. So quantum computers, when they become standard tools, will be creating multiple universes complete with multiple human populations along with the infrastructure, worlds, galaxies and independent futures, all simultaneously calculating the same algorithm. In response to Deutsch’s challenge, I admit I don’t know, but I find his resolution implausible in the extreme (refer Addendum 2 below).

Those who have read my post on Holt’s book, will remember that he interviewed Roger Penrose as well as Deutsch (along with many other intellectual luminaries). Interestingly, Holt seemed to find Penrose’s Platonic mathematical philosophy more bizarre than Deutsch’s but based on what I’ve read of them both, I’d have to disagree. Deutsch also mentions Penrose and delineates where he agrees and disagrees. To quote again:

[Penrose] envisages a comprehensible world, rejects the supernatural, recognizes creativity as being central to mathematics, ascribes objective reality both to the physical world and to abstract entities, and involves an integration of the foundations of mathematics and physics. In all these respects I am on his side.

Where Deutsch specifically disagrees with Penrose is in Penrose’s belief that the human brain cannot be reduced to algorithms. In other words, it disobeys Turing’s universal principle (as interpreted by Deutsch) that everything in the universe can be simulated by a universal quantum Turing machine. (Deutsch, by the way, believes the brain is effectively a classical computer, not a quantum computer.) Deutsch points out that Penrose’s position is at odds with most physicists, yet I agree with Penrose on this salient point. I don’t believe the brain (human or otherwise) runs on algorithms. Deutsch sees this as a problem with Penrose’s world-view, as it leaves Penrose unable to explain human thinking. However, I see it as a problem with Deutsch’s world-view, because, if Penrose is right, then Deutsch is the one who can’t explain it.

Barrow is a cosmologist and logically his book reflects this perspective. Compared to Deutsch’s book, it’s more science, less philosophy. But there is another fundamental difference, in tone if not content. Right from his opening words, Deutsch stakes his position in the belief that we can encompass more and more knowledge in fewer and fewer theories, so it is possible for one person to ‘understand’ everything, at least in principle. He readily acknowledges, however, that we will probably never ‘know’ everything. On the other hand, Barrow brings the reader down-to-earth with a lengthy discussion on the initial conditions of the universe, and how they are completely up for grabs based on what we currently know.

Barrow ends his particular chapter on cosmological initial conditions with an in-depth discussion on the evolution of cosmology from Newton to Einstein to Wheeler-De Witt, which leads to the Hartle-Hawking ‘no-boundary condition’ model of the universe. He points out that this is a radical theory, ‘proposed by James Hartle and Stephen Hawking for aesthetic reasons’, but it overcomes the divide between initial conditions and the laws of nature. Compared to Deutsch’s radical theses, it’s almost prosaic. It has the added advantage of overcoming theological-based initial conditions, allowing ‘…a Universe which tunnels into existence out of nothing.’

Logically, a book on ‘theories of everything’ must include string theory or M theory, yet it’s not Barrow’s strong suit. Earlier this year, I read Lee Smolin’s The Trouble With Physics, which gives a detailed history and critique of string theory, but I won’t discuss it here. Of course, it’s another version of ‘reality’ where ‘theory’ is yet to be given credence by evidence.

As I alluded to above, what separates Barrow from Deutsch is his cosmologist’s perspective. Even if we can finally grasp all the laws of Nature in some ‘Theory of Everything’, the outcomes are based on chance, which was once considered the sole province of gods, and, as Barrow argues, is the reason that the mathematics of chance and probability were not investigated earlier in our scientific endeavours. To quote Barrow:

…it is possible for a Universe like ours to be governed by a very small number of simple laws and yet display an unlimited number of complex states and structures, including you and me.

Of all the improbabilities, the most fundamental and consequential to our existence is the asymmetry between matter and antimatter of one billion and one to one billion. We know this, because the ratio of photons to protons in the Universe is two billion to one (the annihilation of a proton with an anti-proton creates 2 photons). It is sobering to consider that a billion to one asymmetry in the birth pangs of the Universe is the basis of our very existence.

The final chapter in Barrow’s book is called Is pi really in the sky? This is an obvious allusion to mathematical Platonism and the entire chapter is a lengthy and in-depth discussion on the topic of mathematics and its relationship to reality. (Barrow has also authored a book called Pi in the Sky, which I haven’t read.) According to Barrow, Plato and Aristotle were the first to represent the dichotomy we still find today as to whether mathematics is discovered or invented. In other words, is it solely a product of the human mind or does it have an abstract existence independently of us and possibly the Universe? What we do know is that mathematics is the fundamental epistemological bridge between reality and us, especially when it comes to understanding Nature’s deepest secrets.

In regard to Platonism, Barrow has this to say:

It elevates mathematics close to the status of God... just alter the word ‘God’ to ‘mathematics’ wherever it appears and it makes pretty good sense. Mathematics is part of the world, and yet transcends it. It must exist before and after the Universe.

In the next paragraph he says:

Most scientists and mathematicians operate as if Platonism is true, regardless of whether they believe that it is. That is, they work as though there were an unknown realm of truth to be discovered.

Neither of these statements is definitive, and it should be pointed out that Barrow discusses all aspects of mathematical philosophies in depth.

I think that consciousness will never be reduced to mathematics, yet it is consciousness that makes mathematics manifest. Obviously, some argue that it is consciousness alone that creates mathematics at all, and that Platonism is a remnant of numerology and mysticism. Whichever point of view one takes, it is mathematics that makes the Universe comprehensible. I’m a Platonist for both of the reasons given above. I don’t think the Universe is a giant computer, but I do think that mathematics determines, to a large extent, what realities we can have.

Despite my criticisms and disagreements, I concede that Deutsch is much cleverer than me. His book is certainly provocative, but I think it’s philosophically flawed in all the areas I discuss above. On the other hand, the more I read of Barrow, the more I find myself aligning to his cosmological world-view; in particular, his apparent attraction to the Anthropic Principle. He makes the point that the improbability of Nature’s critical constants taking the values they do is less important than the necessity of those values to provide conditions for observers to evolve. This does not invoke teleology – as he’s quick to point out – it’s just a necessary condition if intelligent life is to evolve.

You’ve no doubt noticed that I don’t really address the question in my heading. Deutsch’s multiverse and String Theory are two prevalent, if also extreme, versions of reality. String Theory claims that the Universe actually has 9 dimensions of space rather than 3 and predicts 10⁵⁰⁰ possible universes, not to be confused with the quantum multiverse. 20th Century physics has revealed, through quantum mechanics and Einstein’s theories of relativity, that ‘reality’ is more ‘strange’ than we imagine. I often think that Kant was prescient, in ways he could not have anticipated, when he said that we may never know the ‘thing-in-itself’.

It is therefore apposite to leave you with Barrow’s last paragraph in his book:

There is no formula that can deliver all truth, all harmony, all simplicity. No Theory of Everything can ever provide total insight. For, to see through everything, would leave us seeing nothing.

Barrow loves to fill his books with quotable snippets, but I like this one in particular:

Mathematics is the part of science you could continue to do if you woke up tomorrow and discovered the universe was gone. Dave Rusin.

Addendum 1: I've since read John Barrow's book, Pi in the Sky, and cover it here

Addendum 2: I've since read Philip Ball's book, Beyond Weird, where he challenges Deutsch's assertion that it requires multiple worlds to explain quantum computers. Quantum computers are dependent on entangled particles, which is not the same thing. Multiple entities in quantum mechanics don't really exist (according to Ball) just multiple probabilities, only one of which is ever observed. In Deutsch's theory that 'one' is in the universe that you happen to inhabit, whereas all the others exist in other universes that you are not consciously aware of.

Sunday, 2 September 2012

This one is for the climate-change sceptics


Notice I use the English spelling and not the American (skeptic) for those who may think I can’t spell (although I’m not infallible).

Not so long ago, North Carolina passed a bill to ‘bar state agencies from considering accelerated sea level rise in decision-making until 1 July 2016’. Apparently, this is a watered-down version of the original bill, which I believe didn’t have the 4 year moratorium. I learnt about it because it was reported in the Letters section of New Scientist. What worries me is the mentality behind this: the belief that we can legislate against nature.  In other words, if scientists start making predictions about sea-level rise, it’s forbidden. The legislation doesn’t state that sea level rise can’t happen but that any science-based predictions must be ignored.

This mentality also exists in Australia where there seems to be an unspoken yet tacit belief that we can vote against climate-change politically. There is a serious disconnect here: nature doesn’t belong to any political party; it’s not a constituency. The current leader of the opposition in Australian Federal politics, Tony Abbott, won his position (within the Party or Cabinet room) over the incumbent, on this very issue. The incumbent leader, Malcolm Turnbull, felt so strongly over the moral issue of human-induced climate-change he put his leadership on the line and lost, by 1 vote (in 2009).

This interview with Climate Central's chief climatologist, Heidi Cullen, from Princeton University, helps to put this issue into perspective. We don’t live at the poles where evidence of climate change is most apparent. The signs are there and we need to trust the people who can read the signs, whom we call scientists. Malcolm Turnbull, who lost his job over this, made the point that there is something wrong with a society when we can't trust our scientists – they are our brains trust.

In Australia, the sceptics argue that this is a global conspiracy by climatologists to keep themselves in jobs and maintain an influx of funding. In other words, as long as they keep maintaining that there is a problem, governments will keep giving them money, whereas, if they tell the ‘truth’, the funding will stop. This is so ludicrous one can’t waste words on it. In Australia, scientists working on climate-change were sent death-threats, which demonstrates the mentality of the people who oppose it. Again, there is an irrationally-held belief that if only scientists would write the right reports that tell us climate-change is a furphy, then it won’t happen – the problem will simply go away.

Addendum: I learnt today (8 Sep 2012) that the NSW government has done something similar: revoked local council laws indicating coastal properties which may be subject to sea-level rise based on IPCC predictions.


Saturday, 18 August 2012

The Riemann Hypothesis; the most famous unsolved problem in mathematics


I’ve read 3 books on this topic: The Music of the Primes by Marcus du Sautoy, Prime Obsession by John Derbyshire and Stalking the Riemann Hypothesis by Dan Rockmore (and I originally read them in that order). They are all worthy of recommendation, but only John Derbyshire makes a truly valiant attempt to explain the mathematics behind the ‘Hypothesis’ (for laypeople) so it’s his book that I studied most closely.

Now it’s impossible for me to provide an explanation for 2 reasons: one, I’m not mathematically equipped to do it; and two, this is a blog and not a book. So my intention is to try and instill some of the wonder that Riemann’s extraordinary gravity-defying intuitive leap passes on to those who can faintly grasp its mathematical ramifications (like myself).

In 1859 (the same year that Darwin published On the Origin of Species), a young Bernhard Riemann (aged 32) presented a paper to the Berlin Academy as part of his acceptance as a ‘corresponding member’, titled “On the Number of Prime Numbers Less Than a Given Quantity”. The paper contains a formula that provides a definitive number called π (not to be confused with pi, the well-known transcendental number). In fact, I noticed that Derbyshire uses π(x) as a function in an attempt to make a distinction. As Derbyshire points out, it’s a demonstration of the limitations arising from the use of the Greek alphabet to provide mathematical symbols – they double-up. So π(x) is the number of primes to be found below any positive Real number. Real numbers include rational numbers, irrationals and transcendental numbers, as well as integers. The formula is complex and its explication requires a convoluted journey into the realm of complex algebra, logarithms and calculus.

Eratosthenes was one of the librarians at the famous Alexandria Library, around 230 BC and roughly 70 years after Euclid. He famously measured the circumference of the Earth to within 2% of its current figure (see Wikipedia) using the sun and some basic geometry. But he also came up with the first recorded method for finding primes known as Eratosthenes’ Sieve. It’s so simple that it’s obvious once explained: leaving the number 1, take the first natural number (or integer) which is 2, then delete all numbers that are multiples of 2, which are all the even numbers. Then take the next number, 3, and delete all its multiples. The next number left standing is 5, and one just repeats the process over whatever group of numbers one is examining (like 100, for example) until you are left with all the primes less than 100. With truly gigantic numbers there are other methods, especially now we have computers that can grind out algorithms, but Eratosthenes demonstrates that scholars were fascinated by primes even in antiquity.
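For the curious, Eratosthenes’ procedure translates almost line for line into code. Here is a minimal sketch in Python (the language is my choice, not anything taken from the books mentioned above):

```python
# Eratosthenes' Sieve: cross out the multiples of each surviving number;
# whatever is left standing is prime.
def sieve(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False           # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False      # delete all multiples of n
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve(100))   # the 25 primes up to 100: [2, 3, 5, 7, 11, ...]
```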

Euclid famously came up with a simple proof to show that there are an infinite number of primes, which, on the surface, seems a remarkable feat, considering it’s impossible to count to infinity. But it’s so simple that Stephen Fry was even able to explain it on his TV programme, QI. Assume you have found the biggest prime, then take all the primes up to and including that prime and multiply them all together. Then add 1. Obviously none of the primes you know can be factors of this number as they would all give a remainder of 1. Therefore the number is either a prime or can be factored by a prime that is higher than the ones you already know. Either way, there will always be a higher prime, no matter which one you select, so there must be an infinite number of primes.
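Euclid’s construction can also be played with directly. The snippet below (again just an illustrative sketch; it uses the sympy library, which is an assumption on my part) multiplies the primes up to 13, adds 1, and factorises the result to show that new primes always turn up:

```python
# Euclid's construction: the product of the known primes plus 1 is either
# prime itself or divisible by a prime not in the original list.
from sympy import factorint, primerange

known = list(primerange(2, 14))   # [2, 3, 5, 7, 11, 13]
n = 1
for p in known:
    n *= p
n += 1                            # 2*3*5*7*11*13 + 1 = 30031

print(n, factorint(n))            # 30031 {59: 1, 509: 1} -- two new primes
```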

The thing about primes, that has fascinated mathematicians for eons, is that there appears to be no rhyme or reason to their distribution, except that they get thinner – further apart – as one goes to higher numbers. But even this is not strictly correct because there appears to be an infinite number of twin primes: 2 primes separated by a single non-prime (which must be even for obvious reasons).

Back to Riemann’s paper and its 150-year-old legacy. Entailed in his formula is a formulation of the Zeta function. Richard Elwes provides a relatively succinct exposition in his encyclopaedic MATHS 1001, and I’m not even going to attempt to write it down here. The point is that the Zeta function gives complex roots to infinity. Most people know what a quadratic root is from high school maths. If you take the graph of a parabola like y = ax² + bx + c, then it crosses the x axis where y = 0. It can cross the x axis in 2 places, or just touch it in 1 place or not cross it at all. The values of x that give us a 0 value of y are called the roots of the equation. As a polynomial goes up in degree so does its number of roots. So a quadratic equation gives us 2 roots maximum but a polynomial with degree 3 (includes x³) will give us 3 roots and so on. Going back to the parabola, in the case where we don’t get any roots at all, it’s because we are trying to find square roots of negative numbers. However, if we use i (√−1), we get complex roots in the form of a + ib. (For a basic explanation see my Apr.12 post on imaginary numbers.) A trigonometric equation like sinθ can give us an infinite number of zeros and so can the Zeta function.
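To make the root-counting concrete, here is a tiny sketch (Python again, purely my own illustration) that solves the parabola via the quadratic formula; using complex square roots means the ‘doesn’t cross the x axis’ case returns a pair of roots of the form a + ib:

```python
# Roots of y = ax^2 + bx + c via the quadratic formula. Using cmath.sqrt
# means a negative discriminant yields complex roots instead of an error.
import cmath

def quadratic_roots(a, b, c):
    d = cmath.sqrt(b * b - 4 * a * c)        # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -3, 2))   # crosses the x axis twice: (2+0j), (1+0j)
print(quadratic_roots(1, -2, 1))   # just touches it: both roots are (1+0j)
print(quadratic_roots(1, 0, 1))    # never crosses: 1j and -1j, i.e. 0 +/- i
```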

If you didn’t follow that, don’t worry; the important point is that Riemann’s Hypothesis says that all the complex zeros of the Zeta function (to infinity) have Real part ½. So they are all of the form ½ + ib. Riemann wasn’t able to provide a proof for this and neither have the best mathematical minds since. The critical point is that if his Hypothesis is correct then so is his formula for finding the exact number of primes up to any given number.
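You can at least inspect the first few of these zeros yourself. The sketch below is my own illustration and assumes the mpmath library, whose zetazero() routine locates the nth non-trivial zero; every one it returns has Real part ½:

```python
# The first few non-trivial zeros of the Riemann Zeta function.
# Each is of the form 1/2 + ib, consistent with Riemann's Hypothesis.
from mpmath import zetazero

for n in range(1, 6):
    print(n, zetazero(n))   # e.g. (0.5 + 14.134725...j) for n = 1
```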

In the 150 years since, Riemann’s Hypothesis has found its way into many fields of mathematics, including Hermitian matrices, which have implications for quantum mechanics. The Zeta function is a formidable mathematical beast to the uninitiated, and its relationship to the distribution of the primes was first intimated by Euler. Riemann’s genius was to introduce complex numbers, then make the convoluted mental journey to demonstrate their pivotal role in providing an exact result. Even then, his fundamental conjecture was effectively based on a hunch. At the time he presented his paper, he had only calculated the first 3 non-trivial zeros (non-trivial means complex in this context) and computers have calculated them in the trillions since, yet we still have no proof. It’s known that they become chaotic at extremely high numbers (beyond the number of atoms in the universe) so it’s by no means certain that Riemann’s hypothesis is correct.

It would be a huge disappointment to most mathematicians if either a proof was found to falsify it or an exception was found through brute computation. Riemann gave us a formula that gives us an accurate count of the primes (Derbyshire gives a worked example up to 1 million) that’s dependent on the Hypothesis being correct to specified values. It’s hard to imagine that this formula suddenly fails at some extremely high number that’s currently beyond our ken, yet it can’t be ruled out.
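As a very rough flavour of what ‘an accurate count of the primes’ means, here is a small sketch (using the sympy and mpmath libraries, which is my assumption, not Derbyshire’s method) that counts the primes below one million and compares the tally with the logarithmic-integral estimate that Riemann’s formula refines:

```python
# Count the primes below one million and compare with the logarithmic
# integral li(x), the smooth estimate that Riemann's formula corrects
# using the Zeta zeros.
from sympy import primepi
from mpmath import li

x = 1_000_000
print("actual count, pi(x):", primepi(x))   # 78498
print("estimate, li(x):    ", li(x))        # roughly 78627.5 -- a slight overcount
```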

Marcus du Sautoy, in The Music of the Primes, contemplates the Riemann hypothesis in the context of Godel’s Incompleteness Theorem, which is germane to the entire edifice of mathematics. The primes have a history of providing hard-to-prove conjectures. Along with Riemann’s hypothesis, there is the twin prime conjecture I mentioned earlier and Goldbach’s conjecture, which states that every even number greater than 2 is the sum of 2 primes. These conjectures are also practical demonstrations of Turing’s halting problem concerning computers. If they are correct, a computer algorithm set the task of finding a counterexample will never stop, yet we can’t determine in advance whether it will halt or not; otherwise we’d know in advance whether the conjecture was true.

As du Sautoy points out, a corollary to Godel’s theorem is that there are limits to the proofs from any axioms we know at any time. In essence, there may be mathematical truths that the axioms cannot cover. The solution is to expand the axioms. In other words, we need to expand the foundations of our mathematics to extend our knowledge at its stratospheric limits. Du Sautoy speculates that the Riemann Hypothesis, along with these other examples, may be Godel’s Incompleteness Theorem in action.

Exploring the Riemann Hypothesis, even at the rudimentary level that I can manage, reinforces my philosophical Platonist view of mathematics. These truths exist independently of our investigations. There are an infinity of these Zeta zeros (we know that much), the same as there are an infinity of primes, which means there will always exist mathematical entities that we can’t possibly know. But aside from that obvious fact, the relationship that exists between apparently obscure objects like Zeta zeros and the distribution of prime numbers is a wonder. Godel’s Theorem implies that no matter how much we learn, there will always be mathematical wonders beyond our ken.

Addendum: This is a reasonably easy-to-follow description of Riemann's famous Zeta function, plus lots more.

Thursday, 16 August 2012

Sex, Lies and Julian Assange, according to the ABC


With Assange’s status again in the spotlight, and the British government threatening to revoke Ecuador’s diplomatic asylum status, using force if necessary, which would be unprecedented in the modern world, it is worth looking at what all the fuss is about.

Almost a month ago, ABC’s 4 Corners aired its own investigations of the allegations against Assange initiated in Sweden. What the programme demonstrates is just how farcical this entire episode is.

Considering he was allowed to leave Sweden by Sweden’s public prosecutor, you would have to wonder, what changed? Is it a coincidence that Sweden’s change of mind - complete reversal in fact - came about when Assange elevated his whistle-blowing campaign against America?

Going by the rhetoric coming out of London, it’s fairly obvious, no matter what decision Ecuador comes to, Assange will be extradited to Sweden, and then we will find out if America will finally reveal its hand.

Saturday, 21 July 2012

Why is there something rather than nothing?


Jim Holt has written an entire book on this subject, titled Why Does the World Exist? An Existential Detective Story. Holt is a philosopher and frequent contributor to The New Yorker, the New York Times and the London Review of Books, according to the blurb on the inner title page. He’s also very knowledgeable in mathematics and physics, and has the intellectual credentials to gain access to some of the world’s most eminent thinkers, like David Deutsch, Richard Swinburne, Steven Weinberg, Roger Penrose and the late John Updike, amongst others. I’m stating the obvious when I say that he is both cleverer and better read than me.

The above-referenced, often-quoted existential question is generally attributed to Gottfried Leibniz, in the early 18th Century and towards the end of his life, in his treatise on the “Principle of Sufficient Reason”, which, according to Holt, ‘…says, in essence, that there is an explanation for every fact, an answer to every question.’ Given the time in which he lived, it’s not surprising that Leibniz’s answer was ‘God’.  Whilst Leibniz acknowledged the physical world is contingent, God, on the other hand, is a ‘necessary being’.

For some people (like Richard Swinburne), this is still the only relevant and pertinent answer, but considering Holt makes this point on page 21 of a 280 page book, it’s obviously an historical starting point and not a conclusion. He goes on to discuss Hume’s and Kant’s responses but I’ll digress. In Feb. 2011, I wrote a post on metaphysics, where I point out that there is no reason for God to exist if we didn’t exist, so I think the logic is back to front. As I’ve argued elsewhere (March 2012), the argument for a God existing independently of humanity is a non sequitur. This is not something I’ll dwell on – I’m just putting the argument for God into perspective and don’t intend to reference it again.

Sorry, I’ll take that back. In Nov 2011, I got into an argument with Emanuel Rutten on his blog, after he claimed that he had proven that God ‘necessarily exists’ using modal logic. Interestingly, Holt, who understands modal logic better than me, raises this same issue. Holt references Alvin Plantinga’s argument, which he describes as ‘dauntingly technical’. In a nutshell: because of God’s ‘maximal greatness’, if one concedes he can exist in one possible world, he must necessarily exist in all possible worlds because ‘maximal greatness’ must exist in all possible worlds. Apparently, this was the basis of Godel’s argument (by logic) for the existence of God. But Holt contends that the argument can just as easily be reversed by claiming that there exists a possible world where ‘maximal greatness’ is absent. And ‘if God is absent from any possible world, he is absent from all possible worlds…’ (italics in the original). Rutten, by the way, tried to have it both ways: a personal God necessarily exists, but a non-personal God must necessarily not exist. If you don’t believe me, check out the argument thread on his own blog which I link from my own post, Trying to define God (Nov. 2011).

Holt starts off with a brief history lesson, and just when you think: what else can he possibly say on the subject? he takes us on a globe-trotting journey, engaging some truly Olympian intellects. As the book progressed I found the topic more engaging and more thought-provoking. At the very least, Holt makes you think, as all good philosophy should. Holt acknowledges an influence and respect for Thomas Nagel, whom he didn’t speak with, but ‘…a philosopher I have always revered for his originality, depth and integrity.’

I found the most interesting person Holt interviewed to be David Deutsch, who is best known as an advocate for Hugh Everett’s ‘many worlds’ interpretation of quantum mechanics. Holt had expected a frosty response from Deutsch, based on a review he’d written on Deutsch’s book, The Fabric of Reality, for the Wall Street Journal where he’d used the famous description given to Lord Byron: “mad, bad and dangerous to know”. But he left Deutsch’s company with quite a different impression, where ‘…he had revealed a real sweetness of character and intellectual generosity.’

I didn’t know this, but Deutsch had extended Turing’s proof of a universal computer to a quantum version, whereby ‘…in principle, it could simulate any physically possible environment. It was the ultimate “virtual reality” machine.’ In fact, Deutsch had presented his proof to Richard Feynman shortly before Feynman’s death in 1988; Feynman got up, as Deutsch was writing it on a blackboard, took the chalk off him and finished it off. Holt found out, from his conversation with Deutsch, that he didn’t believe we live in a ‘quantum computer simulation’.

Deutsch outlined his philosophy in The Fabric of Reality, according to Holt (I haven’t read it):

Life and thought, [Deutsch] declared, determine the very warp and woof of the quantum multiverse… knowledge-bearing structures – embodied in physical minds – arise from evolutionary processes that ensure they are nearly identical across different universes. From the perspective of the quantum multiverse as a whole, mind is a pervasive ordering principle, like a giant crystal.

When Holt asked Deutsch ‘Why is there a “fabric of reality” at all?’ he said “[it] could only be answered by finding a more encompassing fabric of which the physical multiverse was a part. But there is no ultimate answer.” He said “I would start with the principle of comprehensibility.”

He gave the example of a quasar in the universe and a model of the quasar in someone’s brain “…yet they embody the same mathematical relationships.” For Deutsch, it’s the comprehensibility of the universe (in particular, its mathematical comprehensibility) that provides a basis for the ‘fabric of reality’. I’ll return to this point later.

The most insightful aspect of Holt’s discourse with Deutsch was his differentiation between explanation by laws and explanation of specifics. For example, Newton’s theory of gravitation gave laws to explain what Kepler could only explain by specifics: the orbits of planets in the solar system. Likewise, Darwin and Wallace’s theory of natural selection gave a law for evolutionary speciation rather than an explanation for every individual species. Despite his affinity for ‘comprehensibility’, Deutsch also claimed: “No, none of the laws of physics can possibly answer the question of why the multiverse is there.”

It needs to be pointed out that Deutsch’s quantum multiverse is not the same as the multiverse propagated by an ‘eternally-inflating universe’. Apparently, Leonard Susskind has argued that “the two may really be the same thing”, but Steven Weinberg, in conversation with Holt, thinks they’re “completely perpendicular”.

Holt’s conversation with Penrose held few surprises for me. In particular, Penrose described his 3 worlds philosophy: the Platonic (mathematical) world, the physical world and the mental world. I’ve expounded on this in previous posts, including the one on metaphysics I mentioned earlier but also when I reviewed Mario Livio’s book, Is God a Mathematician? (March 2009).

Penrose argues that mathematics is part of our mental world (in fact, the most complex and advanced part) whilst our mental world is produced by the most advanced and complex part of the physical world (our brains). But Penrose is a mathematical Platonist, and conjectures that the universe is effectively a product of the Platonic world, which creates an existential circle when you contemplate all three. Holt found Penrose’s ideas too ‘mystical’ and suggests that he was perhaps more Pythagorean than Platonist. However, I couldn’t help but see a connection with Deutsch’s ‘comprehensibility’ philosophy. The mathematical model in the brain (of a quasar, for example) having the same ‘mathematical relationships’ as the quasar itself. Epistemologically, mathematics is the bridge between our comprehensibility and the machinations of the universe.

One thing that struck me right from the start of Holt’s book, yet he doesn’t address till the very end, is the fact that without consciousness there might as well be nothing. Nothingness is what happens when we die, and what existed before we were born. It’s consciousness that determines the difference between ‘something’ and ‘nothing’. Schrodinger, in What is Life? made the observation that consciousness exists in a continuous present. Possibly, it’s the only thing that does. After all, we know that photons don’t. As Raymond Tallis keeps reminding us, without consciousness, there is no past, present or future. It also means that without memory we would not experience consciousness. So some states of unconsciousness could simply mean that we are not creating any memories.

Another interesting personality in Holt’s engagements was Derek Parfit, who contemplated a hypothetical ‘selector’ to choose a universe. Both Holt and Parfit concluded, through pure logic, using ‘simplicity’ as the criterion, that there would be no selector and ‘lots of generic possibilities’ which would lead to a ‘thoroughly mediocre universe’. I’ve short-circuited the argument for brevity, but, contrary to Holt’s and Parfit’s conclusion, I would contend that it doesn’t fit the evidence. Our universe is far from mediocre if it’s produced life and consciousness. The ‘selector’, it should be pointed out, could be a condition like ‘goodness’ or ‘fullness’. But, after reading their discussion, I concluded that the logical ‘selector’ is the anthropic principle, because that’s what we’ve got: a universe that’s comprehensible containing conscious entities that comprehend it.

P.S. I wrote a post on The Anthropic Principle last month.


Addendum 1: In reference to the anthropic principle, the abovementioned post specifies a ‘weak’ version and a ‘strong’ version, but it’s perhaps best understood as a ‘passive’ version and an ‘active’ version. To combine both posts, I would argue that the fundamental ontological question in my title, raises an obvious, fundamental ontological fact that I expound upon in the second last paragraph: ‘without consciousness, there might as well be nothing.’ This leads me to be an advocate for the ‘strong’ version of the anthropic principle. I’m not saying that something can’t exist without consciousness, as it obviously can and has, but, without consciousness, it’s irrelevant.


Addendum 2 (18 Nov. 2012): Four months ago I wrote a comment in response to someone recommending Robert Amneus's book, The Origin of the Universe; Case Closed (only available as an e-book, apparently).

In particular, Amneus is correct in asserting that if you have an infinitely large universe with infinite time, then anything that could happen will happen an infinite number of times, which explains how the most improbable events can become, not only possible, but actual. So mathematically, given enough space and time, anything that can happen will happen. I would contend that this is as good an answer to the question in my heading as you are likely to get.