Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Saturday, 14 April 2012

i, the magic number that transformed mathematics and physics

You might wonder why I bother to beleaguer people with such esoteric topics as complex algebra and Schrodinger’s equation (May 2011, refer link below). The reason is that I’ve struggled with these mathematical milestones myself, but, having found some limited understanding, I attempt to pass on my revelations.

Firstly, I contend that calling i an imaginary number is a misnomer; it’s really an imaginary dimension. And if it were called that, it would dispel much of the confusion that surrounds it. We define i as:

i = √-1

But it’s more intuitive to give the inverse relationship:

i² = -1

Because, when we square an imaginary number, we transfer it from the imaginary plane to the Real plane. Graphically, i rotates a complex number by 90° in the anti-clockwise direction on the complex plane (or Argand diagram). Or, to be more precise, multiplying any complex number (which has both an imaginary and a Real component) by i will rotate its entire graphical representation through 90°. In fact, complex algebra is a lot easier to comprehend when it is demonstrated graphically via an Argand diagram. An Argand diagram is similar to a Cartesian diagram, except that the x axis represents the Real numbers and the y axis is replaced by the i axis, hence representing the i dimension, not the number i.
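
If you want to see this rotation in action, here’s a minimal sketch using Python’s built-in complex type (my own illustration, not from any textbook; Python writes i as 1j):

```python
import cmath

z = 3 + 2j          # an arbitrary complex number: 3 on the Real axis, 2 on the i axis
rotated = z * 1j    # multiplying by i

print(rotated)                                # (-2+3j)
print(abs(z), abs(rotated))                   # the modulus (length) is unchanged
print(cmath.phase(rotated) - cmath.phase(z))  # ~1.5708 radians, i.e. 90 degrees anti-clockwise
```

Multiply by i four times and you’re back where you started, which is just another way of saying i⁴ = 1.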

It’s not unusual to have mathematical dimensions that are not intuitively perceived. Any dimension above 3 is impossible for us to visualise. And we even have fractional dimensions, called fractals (Davies, The Cosmic Blueprint, 1987). So an imaginary dimension is not such a leap of imagination (excuse the pun) in this context, whereas calling i an imaginary number is nonsensical, since it quantifies nothing.

In an equation, i appears to be a number, and to all intents and purposes is treated like one, but it’s more appropriate to treat it as an operator. It converts numbers from Real to imaginary and back to Real again.

In quantum mechanics, Schrodinger’s wave function is defined by a complex differential equation, which of itself tells us nothing about the particle it’s describing in the physical world. It’s only by squaring the modulus of the wave function (actually, multiplying it by its complex conjugate, to be technically correct) that we get a Real number, which gives the probability of finding the particle in the physical world.
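
As a toy illustration only (not the full Schrodinger formalism), here’s how a complex amplitude only yields a Real probability once it’s multiplied by its conjugate; the amplitudes below are made-up numbers:

```python
# Made-up, normalised amplitudes for finding a particle at two locations.
amplitudes = [0.6 + 0.0j, 0.0 + 0.8j]

# Multiplying each amplitude by its conjugate gives a Real probability.
probabilities = [(a * a.conjugate()).real for a in amplitudes]

print(probabilities)       # [0.36, 0.64]
print(sum(probabilities))  # 1.0 -- the particle must be found somewhere
```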

Without complex algebra (and therefore i) we would not have a mathematical representation of quantum mechanics at all, which is a sobering thought. We have long passed the point in our epistemology of the physical universe where our comprehension is limited by our mathematical abilities and knowledge.

There are 2 ways to represent a complex number, and we need to thank Leonhard Euler for pointing this out. In 1748 he discovered the mathematical relationship that bears his name, and it has arguably become the most famous equation in mathematics.

Exponential and trigonometric functions can be expressed as infinite power series. In fact, the exponential function is defined by the power series:

e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + …

Where n! (called n factorial) is defined as: n! = n × (n-1) × (n-2) × … × 2 × 1
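
You can check the power series numerically with a few lines of Python (a rough sketch of my own; 20 terms is more than enough for small x):

```python
import math

def exp_series(x, terms=20):
    """Partial sum of 1 + x + x^2/2! + x^3/3! + ..."""
    return sum(x**n / math.factorial(n) for n in range(terms))

print(exp_series(1.0))  # 2.718281828... (e)
print(math.exp(1.0))    # the library value, for comparison
```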

But the common trig functions, sin x and cos x, can also be expressed as infinite power series (Taylor’s theorem):

sin x = x – x³/3! + x⁵/5! – x⁷/7! + …

cos x = 1 – x²/2! + x⁴/4! – x⁶/6! + …

Euler’s simple manipulation of these series by invoking i was a stroke of genius.

e^(ix) = 1 + ix – x²/2! – ix³/3! + x⁴/4! + ix⁵/5! – x⁶/6! – ix⁷/7! + …

i sin x = ix – ix³/3! + ix⁵/5! – ix⁷/7! + …

I’ll let the reader demonstrate for themselves that if they add the power series for cos x and i sin x they’ll get the power series for e^(ix).

Therefore:   e^(ix) = cos x + i sin x
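
If you’d rather let a computer do the adding, here’s a sketch that sums the two series and compares the result with e^(ix) (my own check, using Python’s cmath module):

```python
import cmath, math

def cos_series(x, terms=20):
    return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(terms))

def sin_series(x, terms=20):
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1) for n in range(terms))

x = 0.75                                  # any angle in radians
lhs = cmath.exp(1j * x)                   # e^(ix)
rhs = cos_series(x) + 1j * sin_series(x)  # cos x + i sin x, built from the series

print(lhs, rhs)                 # the two agree to machine precision
print(cmath.isclose(lhs, rhs))  # True
```

Setting x = π in the same script reproduces the identity derived below, give or take a rounding error of about 10⁻¹⁶.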

But there is more: x in this equation is obviously an angle, and if you make x = π, which is the same as 180°, you get:

sin 180° = sin 0 = 0

cos 180° = -cos 0 = -1

Therefore:  e^(iπ) = -1

This is more commonly expressed thus:

e^(iπ) + 1 = 0

And is known as Euler’s identity. Richard Feynman, who discovered it for himself just before his 15th birthday, called it “The most remarkable formula in math”.

It brings together the 2 most fundamental integers, 1 and 0 (the only digits you need for binary arithmetic), the 2 most commonly known transcendental numbers, e and π, and the operator i.

What I find remarkable is that by adding 2 infinite power series we get one of the simplest and most profound relationships in mathematics.


But Euler’s equation (Euler’s identity is a special case): e^(iθ) = cos θ + i sin θ
gives us 2 ways of expressing a complex number, one in polar co-ordinates and one in Cartesian co-ordinates.

We use z by convention to express a complex number, as opposed to x or y.

So  z = x + iy (Cartesian co-ordinates)

And z = re^(iθ)  (polar co-ordinates)

Where r is called the modulus (radius) and θ is the argument (angle).

If one looks at an Argand diagram, one can see from Pythagoras’s theorem that:

r² = x² + y²

But the same can be derived by multiplying the complex number by its conjugate, x – iy

So  (x + iy)(x – iy) = x² + y² = r²

(I’ll let the reader expand the equation for themselves to demonstrate the result)
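
Or let Python expand it for you (a quick check with x = 3, y = 4):

```python
z = 3 + 4j                    # x = 3, y = 4
product = z * z.conjugate()   # (x + iy)(x - iy): the imaginary parts cancel

print(product)                # (25+0j)
print(z.real**2 + z.imag**2)  # 25.0 = x^2 + y^2
print(abs(z)**2)              # 25.0 = r^2, so r = 5
```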

But also from the Argand diagram, using basic trigonometry, we can see:

x = r cos θ  and y = r sin θ (from cos θ = x/r and sin θ = y/r)

So  x + iy  becomes  r cos θ + i r sin θ

There is an advantage in using the polar co-ordinate version of complex numbers when it comes to multiplication, because you multiply the moduli and add the arguments.

So, if:    z₁ = r₁e^(iθ₁)   and   z₂ = r₂e^(iθ₂)

Then:   z₁ × z₂ = r₁e^(iθ₁) × r₂e^(iθ₂) = r₁r₂e^(i(θ₁ + θ₂))

And, obviously, you can do this graphically on an Argand diagram (complex plane), by multiplying the moduli (radii) and adding the arguments (angles).
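
Here is a short sketch of this rule, using the cmath module’s conversions between the two co-ordinate systems (the moduli and arguments are arbitrary values of my choosing):

```python
import cmath

z1 = cmath.rect(2.0, 0.5)  # modulus 2, argument 0.5 radians
z2 = cmath.rect(3.0, 1.0)  # modulus 3, argument 1.0 radian

r, theta = cmath.polar(z1 * z2)

print(r)      # ~6.0 -> the moduli multiplied (2 x 3)
print(theta)  # ~1.5 -> the arguments added (0.5 + 1.0)
```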


Addendum 1: Given its role in quantum mechanics, I think i should be called the 'invisible dimension'.

Addendum 2: I've been re-reading Paul J. Nahin's very comprehensive book on this subject, An Imaginary Tale: The Story of √-1, and he reminds me of something pretty basic, even obvious once you've seen it.

tan θ = sin θ/cos θ or y/x (refer the Argand diagram)

So θ = tan⁻¹(y/x), where this represents the inverse function of tan (you can calculate the angle from the ratio of y over x, or the imaginary component over the Real component).

You can find this function on any scientific calculator usually by pressing an 'inverse' button and then the 'tan' button.

The point is that you can go from Cartesian co-ordinates to polar co-ordinates without using e. According to Nahin, Caspar Wessel discovered this without knowing about Euler's earlier discovery. But Wessel, apparently, was the first to appreciate that you sum angles when multiplying complex numbers, and invented the imaginary axis when he realised that multiplying by i rotated everything by 90° anticlockwise.
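
In Python this is just the atan2 function (the same quadrant-aware inverse tan that scientific calculators hide behind the ‘inverse’ and ‘tan’ buttons), plus Pythagoras for the modulus; no e in sight (a sketch of my own, not Nahin’s):

```python
import math

x, y = 1.0, 1.0              # the Real and imaginary components

r = math.hypot(x, y)         # modulus, from Pythagoras: sqrt(x^2 + y^2)
theta = math.atan2(y, x)     # argument: inverse tan of y/x, quadrant-aware

print(r)                     # 1.414... = sqrt(2)
print(math.degrees(theta))   # 45.0 degrees
```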

Saturday, 29 November 2014

A book for geeks

Matt Parker is a mathematical entertainer, as oxymoronic as that sounds, because apparently, in the UK, he does stand-up comedic mathematics and mathematical-based magic tricks with cards. Originally a school teacher from Oz, he has the official title of Public Engagement in Mathematics Fellow at Queen Mary University of London.

He’s written a very accessible book called Things to Make and Do in the Fourth Dimension, where he attempts to introduce the reader to more obscure areas of mathematics by wooing them with games and little-known intriguing mathematical facts.

For example: if you square any prime number greater than 3 and take off 1, you’ll find it’s divisible by 24.  As he says: ‘That sentence can freak out even the most balanced mathematician.’ In a section at the back, called The Answers in the Back of the Book, he provides an easy-to-follow proof that shows this applies to any number that is not divisible by 2 or 3 – so not just prime numbers. Obviously, any prime number greater than 3 fits that category as well. So the converse is not true: a multiple of 24, plus 1, is not necessarily the square of a prime. Otherwise, as Parker points out, we would have a very easy, ready-made method of finding all primes, which we don’t.
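
For the sceptical (or the lazy), here’s a brute-force check in Python of the more general claim, for every number up to 1,000 with no factor of 2 or 3:

```python
for n in range(5, 1001):
    if n % 2 != 0 and n % 3 != 0:        # no factor of 2 or 3 (includes every prime > 3)
        assert (n * n - 1) % 24 == 0, n  # n^2 - 1 is always divisible by 24

print("Checked every such n up to 1000")
```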

Basically, he is a mathematical enthusiast and he wants to share his enthusiasm. As anyone who reads my blog would know, I’m familiar with a fair sample of mathematical concepts and esoterica, so I don’t believe I’m the audience that Parker is seeking. Having said that, he managed to augment my knowledge considerably, as in the previous paragraph. Another example is his description of how to build binary logic gates out of nothing but dominoes, gates that can actually perform a calculation. In fact, he and a team of mathematicians spent 6 hours setting up a 10,000-domino ‘computer’ that took 48 seconds to compute 6 + 4 = 10, performed at the Manchester Science Festival in October 2012.
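
Parker’s domino computer is, in essence, a physical binary adder. Here’s a rough software sketch of the same idea (mine, not Parker’s): one-bit full adders made of XOR, AND and OR gates, chained together to add 6 and 4:

```python
def full_adder(a, b, carry_in):
    """One-bit full adder built from XOR, AND and OR gates."""
    total = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return total, carry_out

def add(x, y, bits=4):
    """Ripple-carry addition: feed each pair of bits, plus the carry, into a full adder."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result | (carry << bits)

print(add(6, 4))  # 10 -- the sum the 10,000-domino computer took 48 seconds to produce
```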

The title of this post is apt: geeks would love this book; yet Parker’s objective, one feels, is to make mathematics attractive to a wider audience, in particular those who were turned off maths in their high school years, if not before. One of the virtues I found in this book is his selective use of visual representation, even of the simplest kind. I’m not just talking about graphs of exotic equations like Zeta functions and perspective drawings of Platonic solids or even 2D renderings of tesseracts (4D cubes), but rough hand-drawn sketches and sometimes just a list of numbers to demonstrate a series or sequence. I found these most helpful in understanding a tricky concept.

We are visual creatures because sight is our prime medium for comprehending the world. It should be no surprise that visualising an abstract concept, mathematical or otherwise, is the shortest way to understanding it. I work a lot with engineers and when they want to explain something they invariably draw a picture.

The problem with maths in education is that it’s a cumulative subject. More esoteric topics are dependent on lesser ones. If a student falls behind, the gap between what they’re expected to know and what they can actually achieve grows over the years of schooling.

Books like Parker’s attempt to short-circuit this process. He tries to introduce the reader to the more ‘sexy’ aspects of mathematics without grinding them into the ground with mind-bending exercises. His Answers in the Back of the Book section allows the more adventurous and less intimidated reader to understand a topic more fully, whilst not burdening a less experienced reader with further exercises. It is possible to read this book and come away with both a sense of awe at mathematics’ magisterial wonder and an appreciation of how maths literally drives our digital world, without having to do a lot of mental gymnastics. In other words, Parker lets you into some of the secrets of the priesthood without making you feel you need a PhD.

Although it is divided chapter by chapter into separate topics, this is a book that should be read in the order it is presented. Parker often references material already covered, partly to demonstrate how the mathematical world is so interconnected. To give an example, he sneaks up on the famous Zeta function in a way that makes it appear less intimidating than it really is, yet still manages to explain its relationship to Riemann’s famous hypothesis and the distribution of primes. I was disappointed that he didn’t explain that the non-trivial zeros, which are both the core mystery and ultimate unsolved puzzle, are in fact complex numbers involving the imaginary axis. However, he explains this in a later chapter when he introduces the reader to imaginary numbers and the ‘complex plane’.

Pythagoras famously said (or so we are led to believe, as he never wrote anything down) that everything is numbers. In the digital world this is literally true, and one of Parker’s most illuminating chapters explains how everything you do on your smart-phone, from pictures to texting to music, is rendered in 0s and 1s.

Parker is very clever in that he discusses highly esoteric mathematical topics like the Zeta function (already mentioned), quaternions (imaginary numbers in 4D), the so-called Monster or Friendly Giant in 196,883 dimensions, computer-generated self-correcting algorithms using binary arithmetic, multiple infinities, knot theory’s relevance to DNA not getting tangled, and Klein bottles (4D bottles rendered in 3D), without discussing more fundamental topics like logarithms, trigonometry or calculus. He doesn’t even explain the fundamental relationship between polar co-ordinates and Cartesian co-ordinates that makes imaginary numbers such a widely used tool.

He doesn’t get philosophical until the very end of the book, when he discusses the relevance of Godel’s Incompleteness Theorem to the study of mathematics forever (quite literally). As I’m sure I’ve mentioned in previous posts, implicit in Godel’s Theorem is the fact that mathematics is never-ending, and therefore a human activity that will never stop. Parker also points out that there could be other universes with different dimensions from ours, but any hypothetical residents (he calls them ‘hypertheticals’) would still discover the same mathematics as us, assuming they have the intellect to do so.

Friday, 30 December 2011

The Quantum Universe by Brian Cox and Jeff Forshaw

I’ve recently read this tome, subtitled Everything that can happen does happen, a phrase they reiterate throughout the book. Cox is best known as a TV science presenter for the BBC. His series on the universe comes highly recommended. His youthful and conversational delivery, combined with an erudite knowledge of physics, makes him ideal for television. The same style comes across in the book, despite the inherent difficulty of the topic.

In the last chapter, an epilogue, he mentions writing in September 2011, so this book really is hot off the press. Whilst the book is meant to cater for people with a non-scientific background, I’m unsure if it succeeds at that level and I’m not in a position to judge it on that basis. I’m fairly well read in this area, and I mainly bought it to see if they could add anything new to my knowledge and to compare their approach to other physics writers I’ve read.

They reference Richard Feynman (along with many other contributors to quantum theory) quite a lot, and, in particular, they borrow the same method of exposition that Feynman used in his book, QED. In fact, I’d recommend that this book be read in conjunction with Feynman’s book, even though they overlap. Feynman introduced the notion of a one-handed clock to represent the phase, amplitude and frequency of the wave function that lies at the heart of quantum mechanics (refer my post on Schrodinger’s equation, May 2011).

Cox and Forshaw use this same analogy very effectively throughout the book, but they never tell the reader explicitly that the clock represents the wave function, as I assume it does. In fact, in one part of the book they refer to clocks and wave functions independently in the same passage, which could lead the reader to believe they are different things. If they are different things, then I’ve misconstrued their meaning.

Early in their description of clocks, they mention that the number of turns is dependent on the particle’s mass, and thus its energy. This is a direct consequence of Planck’s equation, which relates energy to frequency, yet they don’t explain this. Later in the book, when they introduce Planck’s equation, they write it in terms of wavelength, not frequency, as it is normally expressed. These are minor quibbles, some might say petty, yet I believe addressing them would help to relate the use of Feynman’s clocks to what the reader might already know of the subject.

One of the significant facts I learnt from their book was how Feynman exploited the ‘least action principle’ in quantum mechanics. (For a brief exposition of the least action principle, refer my post on The Laws of Nature, Mar. 2008.) Feynman also describes its significance in gravity in Six Not-So-Easy Pieces: the principle dictates the path of a body in a gravitational field. In effect, the ‘action’ is the difference between the kinetic and potential energy of the body, accumulated over its path. Nature contrives that it will always be a minimum, hence the description, ‘principle of least action’.

Now, I already knew that Feynman had applied it to quantum mechanics, but Cox and Forshaw provide us with the story behind it. Dirac had written a paper in 1933 entitled ‘The Lagrangian in Quantum Mechanics’ (the Lagrangian is the mathematical formulation of least action). In 1941, Herbert Jehle, a European physicist visiting Princeton, told Feynman about Dirac’s paper. The next day, Feynman found the paper in the Princeton library, and with Jehle looking on, derived Schrodinger’s equation in one afternoon using the least action principle. Feynman later told Dirac about his discovery, and was surprised to learn that Dirac had not made the connection himself.

But the other interesting point is that the units for ‘action’ in physics are mx²/t (mass × length²/time), which are the same units as Planck’s constant, h. In other words, the fundamental unit of quantum mechanics is an ‘action’ unit. Now, units are important concepts in physics because only entities with the same type of units can be added and subtracted in an equation. Physicists talk about dimensions, because quantities must have the same dimensions to be combined or deducted. The dimensions for ‘action’, for instance, are 1 of mass, 2 of length and -1 of time. To give a more common example, the dimensions for velocity are 0 of mass, 1 of length and -1 of time. You can add and subtract areas, for example (2 dimensions of length), but you can’t add a length to an area or deduct an area from a volume (3 dimensions of length). Obviously, multiplication and calculus allow one to transform dimensions.
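
To make the bookkeeping concrete, here’s a small sketch (my own, not from the book) that tracks dimensions as exponents of (mass, length, time) and confirms that ‘action’ and Planck’s constant share the same dimensions:

```python
MASS, LENGTH, TIME = (1, 0, 0), (0, 1, 0), (0, 0, 1)
DIMENSIONLESS = (0, 0, 0)

def multiply(a, b):
    """Multiplying quantities adds their dimension exponents."""
    return tuple(p + q for p, q in zip(a, b))

def divide(a, b):
    """Dividing quantities subtracts their dimension exponents."""
    return tuple(p - q for p, q in zip(a, b))

velocity  = divide(LENGTH, TIME)                          # (0, 1, -1)
energy    = multiply(MASS, multiply(velocity, velocity))  # (1, 2, -2)
action    = multiply(energy, TIME)                        # energy x time -> (1, 2, -1)
frequency = divide(DIMENSIONLESS, TIME)                   # (0, 0, -1)
plancks_h = divide(energy, frequency)                     # from E = hf -> h = E/f

print(action, plancks_h)    # both (1, 2, -1): 1 of mass, 2 of length, -1 of time
print(action == plancks_h)  # True
```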

One of the concepts that Cox and Forshaw emphasise throughout the book is the universality of quantum mechanics and how literally everything is interconnected. They point out that no 2 electrons can have exactly the same energy, not only in the same atom but in the same universe (the Pauli Exclusion Principle). Also, individual photons can never be tracked. In fact, they mention the little-known fact that Planck’s law is incompatible with the notion of tracking individual photons, a discovery made by Ladislas Natanson as far back as 1911. No, I’d never heard of him either, or his remarkable insight.

Cox and Forshaw do a brilliant job of explaining Wolfgang Pauli’s famous principle that makes individual atoms, and therefore matter, stable. They also expound on Freeman Dyson’s and Andrew Lenard’s 1967 paper demonstrating that it’s the Pauli Exclusion Principle that stops you from falling through the floor. Dyson described the proof as ‘extraordinarily complicated, difficult and opaque’, which may help to explain why it took so long for someone to derive it.

They also do an excellent job of explaining how quantum mechanics allows transistors to work, which is arguably the most significant invention of the 20th Century. In fact, it’s probably the best exposition I’ve come across outside a text book.

But what comes across throughout their book is that the quantum world obeys specific ‘rules’, and once you understand those rules, no matter how bizarre they may seem to our common-sense view of the world, you can make accurate and consistent predictions. The catch is that probability plays a key role and deterministic interpretations are not compatible with the quantum universe. In fact, Cox and Forshaw point out that quantum mechanics exhibits true ‘randomness’, unlike the ‘chaotic’ randomness that is dependent on ultra-sensitive initial conditions. In a recent issue of New Scientist, I came across someone discussing free will, or the lack of it (in a book review on the topic), and espousing the view that everything is deterministic from the Big Bang onwards. Personally, I find it very difficult to hold such a philosophical position when the bedrock of the entire physical universe insists on chance.

Cox and Forshaw don’t have much to say about the philosophical implications of quantum mechanics except in one brief passage where they reveal a preference for the 'many worlds' interpretation because it does away with the so-called ‘collapse’ or ‘decoherence’ of the wave function. In fact, they make no reference to ‘collapse’ or ‘decoherence’ at all. They prefer the idea that there is an uninterrupted history of the quantum wave function, even if it implies that its future lies in another universe or a multitude of universes. But they also give tacit acknowledgement to Feynman’s dictum: ‘…the position taken by the “shut up and calculate” school of physics, which deftly dismisses any attempt to talk about the reality of things.’

In the epilogue, Cox and Forshaw get into some serious physics, where they explain how quantum mechanics gives us the famous Chandrasekhar limit, developed by Subrahmanyan Chandrasekhar in 1930, which determines how massive a star’s burnt-out core (a white dwarf) can be before it collapses further into a neutron star or a black hole. The answer is 1.4 solar masses (1.4 times the mass of our sun). Mind you, the star has to go through a whole series of phases in between, and that’s what Cox and Forshaw explain, using some fundamental algebra along with some generous assumptions to make the exposition digestible for laypeople. But the purpose of the exercise is to demonstrate that quantum phenomena can determine limits on a stellar scale that have been verified by observation. It also gives a good demonstration of the scientific method in practice, as they point out.

This is a good book for introducing people to the mysteries of quantum mechanics, with no attempt to side-step the inherent weirdness and no attempt to provide simplistic answers. They do their best to follow the Feynman tradition of telling it exactly as it is and eschew the magic that mysteries tend to induce. Nature doesn’t provide loopholes for specious reasoning. Quantum mechanics is the latest in a long line of nature’s secret workings, mathematically cogent and reliable, but deeply counter-intuitive.