Paul P. Mealing


Wednesday 26 August 2020

Did the Universe see us coming?

I recently read The Grand Design (2010) by Stephen Hawking, co-authored by Leonard Mlodinow, who gets ‘second billing’ (in much smaller font) on the cover, so one is unsure what his contribution was. Having said that, other titles listed by Mlodinow (Euclid’s Window and Feynman’s Rainbow) make me want to seek him out. But the prose style does appear to be quintessential Hawking, with liberal lashings of the one-liners we’ve come to know him for. Also, I think one can confidently assume that everything in the book has Hawking’s imprimatur.

 

I found this book so thought-provoking that, on finishing it, I went back to the beginning, so I could re-read his earlier chapters in the context of his later ones. On the very first page he says, rather provocatively, ‘philosophy is dead’. He then spends the rest of the book giving his account of ‘life, the universe and everything’ (which, in one of his early quips, ‘is not 42’). He ends the first chapter (introduction, really) with 3 questions:

 

1) Why is there something rather than nothing?

2) Why do we exist?

3) Why this particular set of laws and not some other?

It’s hard to get more philosophical than this.

 

I haven’t read everything he’s written, but I’m familiar with his ideas and achievements, as well as some of his philosophy and personal prejudices. ‘Prejudice’ is a word that is usually used pejoratively, but I use it in the same sense I use it on myself, regarding my ‘pet’ theories or beliefs. For example, one of my prejudices (contrary to accepted philosophical wisdom) is that AI will not achieve consciousness.

 

Nevertheless, Hawking expresses some ideas that I would not have expected of him. His chapter titled What is Reality? is where he first challenges the accepted wisdom of the general populace. He argues, rather convincingly, that there are only ‘models of reality’, including the ones we all create inside our heads. He doesn’t say there is no objective reality, but he does say that, if we have 2 or more ‘models of reality’ that agree with the evidence, then one cannot say that one is ‘more true’ than another.

 

For example, he says, ‘although it is not uncommon for people to say that Copernicus proved Ptolemy wrong, that is not true’. He elaborates: ‘one can use either picture as a model of the universe, for our observations of the heavens can be explained by assuming either the earth or the sun is at rest’.

 

However, as I’ve pointed out in other posts, either the Sun goes around the Earth or the Earth goes around the Sun. It has to be one or the other, so one of those models is wrong.

 

He argues that we only ‘believe’ there is an ‘objective reality’ because it’s the easiest model to live with. For example, we don’t know whether an object disappears or not when we go into another room; nevertheless, he cites Hume, ‘who wrote that although we have no rational grounds for believing in an objective reality, we also have no choice but to act as if it’s true’.

 

I’ve written about this before. It’s a well-known conundrum (in philosophy) that you don’t know if you’re a ‘brain-in-a-vat’, but I don’t know of a single philosopher who thinks that they are. The proof is in dreams: we all have dreams that we can’t distinguish from reality until we wake up. Hawking also referenced dreams as an example of a ‘reality’ that doesn’t exist objectively. Dreams are completely solipsistic, to the extent that all our senses will play along, including taste.

 

Considering Hawking’s confessed aversion to philosophy, this is all very Kantian. We can never know the thing-in-itself. Kant even argued that time and space are a priori constructs of the mind. And if we return to the ‘model of reality’ that exists in your mind: if it didn’t accurately reflect the external objective reality outside your mind, the consequences would be fatal. To me, this is evidence that there is an objective reality independent of one’s mind - it can kill you. However, if you die in a dream, you just wake up.

 

Of course, this all leads to subatomic physics, where the only models of reality are mathematical. But even in this realm, we rely on predictions made by these models to determine if they reflect an objective reality that we can’t see. To return to Kant, the thing-in-itself is dependent on the scale at which we ‘observe’ it. So, at the subatomic scale, our observations may be tracks of particles captured in images, not what we see with the naked eye. The same can be said on the cosmic scale, where observations depend on instruments that may not even be stationed on Earth.

 

To get a different perspective, I recently read an article on ‘reality’ written by Roger Penrose (New Scientist, 16 May 2020) which was updated from one he wrote in 2006. Penrose has no problem with an ‘objective independent reality’, and he goes to some lengths (with examples) to show the extraordinary agreement between our mathematical models and physical reality. 

 

Our mathematical models of physical reality are far from complete, but they provide us with schemes that model reality with great precision – a precision enormously exceeding that of any description free of mathematics.

 

(It should be pointed out that Penrose and Hawking won a joint prize in physics for their work in cosmology.)

 

But Penrose gets to the nub of the issue when he says, ‘...the “reality” that quantum theory seems to be telling us to believe in is so far removed from what we are used to that many quantum theorists would tell us to abandon the very notion of reality’. But then he says in the spirit of an internal dialogue, ‘Where does quantum non-reality leave off and the physical reality that we actually experience begin to take over? Present day quantum theory has no satisfactory answer to this question’. (I try to answer this below.)

 

Hawking spends an entire chapter on this subject, called Alternative Histories. For me, this was the most revealing chapter in his book. He discusses at length Richard Feynman’s ‘sum over histories’ methodology, called QED or quantum electrodynamics. I say methodology instead of theory, because it’s a mathematical method that has proved extraordinarily accurate, in concordance with Penrose’s claim above. Feynman compared its accuracy to measuring the distance between New York and Seattle (from memory) to within the width of a human hair.

 

Basically, as Hawking expounds, in Feynman’s theory a quantum particle can take every path imaginable (in the famous double-slit experiment, say) and then he adds them all together, but because they’re waves, most of them cancel each other out. This leads to the principle of superposition, where a particle can be in 2 places or 2 states at once. However, as soon as it’s ‘observed’ or ‘measured’ it becomes one particle in one state. In fact, according to standard quantum theory, it’s possible for a single photon to be split into 2 paths and be ‘observed’ to interfere with itself, as described in this video. (I’ve edited this after Wes Hansen from Quora challenged it.) I’ve added a couple of Wes’s comments in an addendum below. Personally, I believe ‘superposition’ is part of the QM description of the future, as alluded to by Freeman Dyson (see below). So I don’t think superposition really occurs.
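For the mathematically inclined, Feynman’s sum can be written compactly (this is the standard textbook form, not a quote from Hawking’s book): the amplitude for a particle to go from A to B is the sum, over every conceivable path, of a complex phase set by that path’s action S:

$$\text{Amplitude}(A \to B) = \sum_{\text{paths}} e^{\,iS[\text{path}]/\hbar}$$

Paths whose phases differ wildly cancel out – the ‘cancelling’ referred to above – while paths near the classical trajectory reinforce one another.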

 

Hawking contends that the ‘alternative histories’ inherent in Feynman’s mathematical method, not only affect the future but also the past. What he is implying is that when an observation is made it determines the past as well as the future. He talks about a ‘top down’ history in lieu of a ‘bottom up’ history, which is the traditional way of looking at things. In other words, cosmological history is one of many ‘alternative histories’ (his terminology) that evolve from QM.

 

This leads to a radically different view of cosmology, and the relation between cause and effect. The histories that contribute to the Feynman sum don’t have an independent existence, but depend on what is being measured. We create history by our observation, rather than history creating us (my emphasis).

 

As it happens, John Wheeler made exactly the same contention, and proposed that it could happen on a cosmic scale when we observed light from a distant quasar being ‘gravitationally lensed’ by an intervening galaxy or black hole (refer to the Davies paper, linked below). Hawking makes specific reference to Wheeler’s conjecture at the end of his chapter. It should be pointed out that Wheeler was a mentor to Feynman, and Feynman even referenced Wheeler’s influence in his Nobel Prize acceptance speech.

 

A contemporary champion of Wheeler’s ideas is Paul Davies, and he even dedicates his book, The Goldilocks Enigma, to Wheeler.

 

Davies wrote a paper which is available on-line, where he describes Wheeler’s idea as the “…participatory universe” in which observers—minds, if you like—are inextricably tied to the concretization of the physical universe emerging from quantum fuzziness over cosmological durations.

 

In the same paper, Davies references and attaches an essay by Freeman Dyson, about which Davies says, “Dyson concludes that a quantum description cannot be applied to past events.”

 

And this leads me back to Penrose’s question: how do we get the ‘reality’ we are familiar with from the mathematically modelled quantum world that strains our credulity? If Dyson is correct, and the past can only be described by classical physics, then QM only describes the future. So how does one reconcile this with Hawking’s alternative histories?

 

I’ve argued elsewhere that the path, out of the infinitely many paths of Feynman’s theory, is only revealed when an ‘observation’ is made, which is consistent with Hawking’s point, quoted above. But it’s worth quoting Dyson as well, because Dyson argues that the observer is not the trigger.

 

... the “role of the observer” in quantum mechanics is solely to make the distinction between past and future...

 

What really happens is that the quantum-mechanical description of an event ceases to be meaningful as the observer changes the point of reference from before the event to after it. We do not need a human observer to make quantum mechanics work. All we need is a point of reference, to separate past from future, to separate what has happened from what may happen, to separate facts from probabilities.

 

But, as I’ve pointed out in other posts, consciousness exists in a constant present. The time for ‘us’ is always ‘now’, so the ‘point of reference’, that is key to Dyson’s argument, correlates with the ‘now’ of a conscious observer.

 

We know that ‘decoherence’ is not necessarily dependent on an observer, but dependent on the wave function interacting with ‘classical physics’ objects, like a laboratory apparatus or any ‘macro’ object. Dyson’s distinction between past and future makes sense in this context. Having said that, the interaction could still determine the ‘history’ of the quantum event (like a photon), even if it traversed the entire Universe, as in the cosmic background radiation (for example).

 

In Hawking’s subsequent chapters, including one titled Choosing Our Universe, he invokes the anthropic principle. In fact, there are 2 anthropic principles, called the ‘weak’ and the ‘strong’. As Hawking points out, the weak anthropic principle is trivial because, as I’ve argued previously, it’s a tautology: Only universes that produce observers can be observed.

 

On the other hand, the strong anthropic principle (which Hawking invokes) effectively says, Only universes that produce observers can ‘exist’. One can see that this is consistent with Davies’ ‘participatory universe’.

 

Hawking doesn’t say anything about a ‘participatory universe’, but goes into some detail about the fine-tuning of our universe for life, in particular the ‘miracle’ of how carbon can exist (predicted by Fred Hoyle). There are many such ‘flukes’ in our universe, including the cosmological constant, which Hawking also discusses at some length.

 

Hawking also explains how an entire universe could come into being out of ‘nothing’, because the ‘negative’ gravitational energy cancels all the ‘positive’ matter and radiation energy that we observe (I assume this also includes dark energy and dark matter). Dark energy is really the cosmological constant. Its effect increases with the age of the Universe because, as the Universe expands, gravitational attraction over cosmological distances decreases while ‘dark energy’ (which is repulsive) doesn’t. Dark matter explains the stable rotation of galaxies, without which they’d fly apart.
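As a rough heuristic (my own gloss, using the standard Newtonian back-of-the-envelope rather than any calculation from the book), the claim is that the positive rest energy of all the matter is offset by its negative gravitational potential energy, so that for a universe of mass M and radius R the books can roughly balance:

$$E_{\text{total}} \approx Mc^{2} - \frac{GM^{2}}{R} \approx 0$$

The proper statement in general relativity is more subtle, but this conveys how a universe full of matter can still have zero net energy.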

 

Hawking also describes the Hartle-Hawking model of cosmology (without mentioning James Hartle) whereby he argues that in a QM only universe (at its birth), time was actually a 4th spatial dimension. He calls this the ‘no-boundary’ universe, because, as John Barrow once quipped, ‘Once upon a time, there was no time’. I admit that this ‘model’ appeals to me, because in quantum cosmology, time disappears mathematically.

 

Hawking’s philosophical view is the orthodox one that, if there is a multiverse, then the anthropic principle (weak or strong) ensures that there must be a universe where we can exist. I think there are very good arguments for the multiverse (the cosmological variety, not the QM multiple worlds variety) but I have a prejudice against an infinity of them because then there would be an infinity of me.

 

Hawking is a well-known atheist, so, not surprisingly, he provides good arguments against the God hypothesis. There could be a demiurge, but if there is, there is no reason to believe it coincides with any of the Gods of mythology. Every God I know of has cultural ties, and that includes the Abrahamic God.

 

For someone who claims that ‘philosophy is dead’, Hawking’s book is surprisingly philosophical and thought-provoking, as all good philosophy should be. In his conclusions, he argues strongly for ‘M theory’, believing it will provide the theory (or theories) of everything that physicists strive for. M theory, as Hawking acknowledges, requires ‘supersymmetry’, and from what I know and have read, there is little or no evidence of it thus far. But I agree with Socrates that every mystery resolved only uncovers more mysteries, which history, thus far, has confirmed over and over.

 

My views have evolved and, along with the ‘strong anthropic principle’, I’m becoming increasingly attracted to Wheeler’s ‘participatory universe’, because the more of its secrets we learn, the more it appears as if ‘the Universe saw us coming’, to paraphrase Freeman Dyson.



Addendum (23Apr2021): Wes Hansen, whom I met on Quora, and who has strong views on this topic, told me outright that he's not a fan of Hawking or Feynman. Not surprisingly, he challenged some of my views and I'm not in a position to say if he's right or wrong. Here are some of his comments:


You know, I would add, the problem with the whole “we create history by observation” thing is, it takes a whole lot of history for light to travel to us from distant galaxies, so it leads to a logical fallacy. Consider:

Suppose we create the past with our observations, then prior to observation the galaxies in the Hubble Deep Fields did not exist. Then where does the light come from? You see, we are actually seeing those galaxies as they existed long ago, some over 10 billion years ago.

We have never observed a single photon interfering with itself, quite the opposite actually: Ian Miller's answer to Can a particle really be in several places at the same time in the subatomic world, or is this just modern mysticism?. This is precisely why I cannot tolerate Hawking or Feynman, it’s absolute nonsense!

Regarding his last point, I think Ian Miller has a point. I don't always agree with Miller, but he has more knowledge on this topic than me. I argue that the superposition, which we infer from the interference pattern, is in the future. The idea of a single photon taking 2 paths and interfering with itself is deduced solely from the interference pattern (see linked video in main text). My view is that superposition doesn't really happen - it's part of the QM description of the future. I admit that I effectively contradicted myself, and I've made an edit to the original post to correct that.


 

Friday 10 July 2020

Not losing the plot, and even how to find it

As I’ve pointed out in previous posts, the most difficult part of writing for me is plotting. The characters come relatively easy, though there is always the danger that they can be too much alike. I’ve noticed from my own reading that some authors produce a limited range of characters, not unlike some actors. Whether I fall into that category is for others to judge. 

But my characters do vary in age and gender and include AI entities (like androids). Ideally, a character reveals more of themselves as the story unfolds and even changes or grows. One should not do this deliberately – it’s best to just let it happen; trying not to interfere is the intention, if not always the result.

I’ve also pointed out previously that whether to outline or not is a personal preference, and sometimes a contentious one. As I keep saying, you need to find what works for you, and for me it took trial and error.

In my last post on this topic, I compared plotting to planning a project, because that is what I did professionally. On a project you have milestones that become ‘goals’ and there is invariably a suite of often diverse activities required to come together at the right time. In effect, making sure everything aligns was what my job was all about.

When it comes to plotting, we have ‘plot points’, which are analogous to milestones but not really the same thing. And this is relevant to whether one ‘outlines’ or not. A very good example is given in the movie Their Finest (an excellent movie), which is a film within a film and has a screenwriter as the protagonist. The writers have a board where they pin up the plot points and then join them up with scenes, which is what they write.

On the other hand, a lot of highly successful writers will tell you that they never outline at all, and there is a good reason for that. Spontaneity is what all artists strive for – it’s the very essence of creativity. I’ve remarked myself that the best motivation to write a specific scene is the same as the reader’s: to find out what happens next. As a writer, you know that if you are surprised, then so will your readers be.

Logically, if you don’t have an outline, you axiomatically don’t know what happens next, and the spontaneity that you strive for is all but guaranteed. So what do I do? I do something in between. I learned early on that I need a plot point to aim at, and whether I know what lies beyond that plot point is not essential.

I found a method that works for me, and any writer needs to find a method that works for them. I keep a notebook, where I’ll ‘sketch’ what-ifs, which I’ll often do when I don’t know what the next plot point is. But once I’ve found it, and I always recognise it when I see it, I know I can go back to my story-in-progress. But that particular plot point should be far enough in the future that I can extemporise, and other plot points will occur spontaneously in the interim.

Backstory is often an important part of plot development. J.K. Rowling created a very complex backstory that was only revealed in the last 2 books of her Harry Potter series. George Lucas created such an extensive backstory for Star Wars, he was able to make 3 prequels out of it.

So, whether you outline or not may be dependent on how much you already know about your characters before you start.

Friday 3 July 2020

Road safety starts with the driver, not the vehicle

There was recently (pre-COVID-19) a road-safety ad shown in some cinemas in Australia (and possibly on TV) aimed at motorcyclists. It shows a motorcyclist on a winding road, which I guess is on the other side of Healesville, with a voiceover of his thoughts. He sees a branch on the road to avoid, he sees a curve coming up, he consciously thinks through changing gears, including clutch manipulation, and he sees a van ahead, which he overtakes. The point is that there is this continuous internal dialogue based on what he observes while he’s riding.

What I find intriguing is that this ad is obviously targeted at motorcyclists, yet I fail to see why it doesn’t equally apply to car drivers. I learned to drive (decades ago) from riding motorcycles, not only on winding roads but in city and suburban traffic. I used to do a daily commute along one of the busiest arterial roads from East Sydney to Western Sydney and back, which I’d still claim to be the most dangerous stretch of driving I ever did in my life. 

I had at least one close call and one accident when a panel van turned left into a side road from the middle lane while I was in the left lane (vehicles travel on the left side, a la Britain, in Australia). I not only went over the top of my bike but the van started to drag the bike over me while I was trapped in the gutter, and then he stopped. I was very young and unhurt and he was older and managed to convince me that it was my fault. My biggest concern was not whether I had sustained injuries (I hadn’t) but that the bike was unrideable.

Watching the ad on the screen, which is clearly aimed at a younger version of myself, I thought that’s how I drive all the time, and I learned that from riding bikes, even though I haven’t ridden a bike in more than 3 decades. It occurred to me that most people probably don’t – they put their cars on cruise-control, now ‘adaptive’, and think about something else entirely, possibly having a conversation with someone who is not even in the vehicle.

In Australia, speed limits get lower and lower every year, so that drivers don’t have to think about what they’re doing. The biggest cause of accidents now, I understand, is driver distraction. We are transitioning (for want of a better word) to fully autonomous vehicles. In the interim, it seems that since we don’t have automaton cars, we need automaton drivers. Humans actually don’t make good robots. The road-safety ad aimed at motorcyclists is the exact opposite of this thinking.

I’m anomalous in that I still drive a manual and actually enjoy it. I’ve found others of my generation, including women, who feel that driving a manual forces them to think about what they’re doing in a way that an auto doesn’t. In a manual, you are constantly anticipating what gear you need, whether it be for traffic or for a corner, to slow down or to speed up (just like the rider in the ad). It becomes an integral part of driving. I have a 6-speed, which is the same as I had on my first 2 motorbikes, and I use the gears in exactly the same way. We are taught to get into top gear as quickly as possible and stay there. But, riding a bike, you soon learn that this is nonsense. In my car, you ideally need to be doing 100 km/h (roughly 60 mph) to change into top gear.

We have cars that do their best to take the driving out of driving, and I’m not convinced that makes us safer, though most people seem to think it does.


Addendum: I acknowledge I’m a fossil like the car I drive. I do drive autos, and it doesn’t change the way I drive, but I don’t think I’ve ever enjoyed the experience. I accept that, in the future, cars probably won’t be enjoyable to drive at all, because they will have no 'feeling'. The Tesla represents the future of motoring, whether autonomous or not.

Tuesday 9 June 2020

Is liberalism under siege?

Like most so-called liberal-minded individuals, I read liberal-minded media, like The New Yorker, but I also acquire The Weekend Australian, religiously, every weekend (a Murdoch broadsheet newspaper). Like most weekend papers, it has ‘sections’ and pull-out segments, including a Weekend Australian Magazine and Weekend Australian Review. These pull-out segments often include profiles of people from all walks of life, coverage of arts and culture, as well as commentaries on topical issues.

There is a curious dichotomy in that the main body of the paper has opinion pieces that are predominantly and overtly conservative, whereas the ‘pull-out’ sections (mentioned above) have far more liberal content. Having said that, this weekend, there was virtually a full-page article called Voice, Treaty, Truth: Heart, which was an extract from a book called Treaty by George Williams and Harry Hobbs (who are, respectively, Dean and lecturer in the faculty of law at the University of NSW). It gives a potted history of the treaty process in Australia for indigenous people, with well written arguments on why it’s a necessary process for all Australians. The idea has long been opposed by conservative voices in Australia, so it says a lot that it finds expression in a conservative newspaper.

I only reference the article to give contrast to other feature articles dealing with the current ‘black lives matter’ crisis occurring in the US and spilling over into Australia on the same weekend. In particular, there are 3 opinion pieces, by Paul Kelly (Editor at Large), Greg Sheridan (Foreign Editor) and Chris Kenny (Associate Editor), that provide different yet distinctly conservative views on the divisive issue. None of them are apologists for Trump, yet Sheridan and Kenny, in particular, are critical, to the point of ridicule, of the backlash against Trump, and downplay the racial schism that has become a running sore over the past week.

But I wish to focus on Paul Kelly’s commentary, The Uncivil War Killing Liberalism, because his arguments are more measured and he takes a much wider view. Kelly has been critical of Trump in the past – in particular, his incompetent handling of the COVID-19 pandemic right from the outset.

Kelly effectively argues that liberalism is under attack from both sides, with the political desertion of the ‘centre’ all over the Western world. I’ve made the same point myself, but, even though I’d guess we’re of a similar vintage, we have different perspectives and biases.

Kelly provides a broad definition, which I’ll quote out of context:

...liberalism means equality before the law regardless of race, equal access to health care and education on the principle of universalism.

This is an ideal that is far from fulfilled in virtually every democracy in the modern world, and is manifest in faultlines, particularly in the US, which is the main focus of Kelly’s essay. He more or less says as much in the next paragraph:

Yet the US today is engulfed in a series of social crises, with life expectancy falling for three successive years since 2015.

Kelly sees Trump as a symptom, or a ‘product’, of a ‘decline into cultural decadence’ (quoting conservative New York Times journalist, Ross Douthat, from his book, The Decadent Society). Kelly clearly agrees with Douthat when he quotes him: ‘Trump exploits the decline of liberalism while being an agent of that decline.’

But, like many conservative commentators, Kelly lays at least part of the blame with what he and others call ‘the Elites’. He quotes another American author, Christopher Lasch, from his 1995 book, The Revolt of the Elites:

The new elites are in revolt against ‘Middle America’ as they imagine it: a nation technologically backward, politically reactionary, repressive in its sexual morality, middlebrow in its tastes and complacent, dull and dowdy.

There is a social dynamic occurring here that I have seen before, and so I believe has Kelly. I’m thinking of the 1960s when there was a revolt against postwar conservative values that was arguably augmented by the introduction of oral contraception. It included a rejection of the dominance of the Church in both legislative and family politics, as well as shifts in feminist politics, the effects of which are still being experienced a couple of generations later. Were the ‘radicals’ advocating those ideals the ‘elites’ of their generation?

One of the major differences between American and Australian cultures is obvious to Australians and a surprise to many Americans. In Australia, religious belief is rarely an issue, and certainly not in politics. This wasn’t always the case. When I was growing up there was a divide between protestants and Catholics that even affected the small country town where I lived and was educated. The dissolution of that division was one of the more providential casualties of the 1960s. These days, most Australians are apathetic about religion, which renders it mostly a non-issue.

The reason I raise this is because militant atheism is most aggressive in countries where fundamentalist religion is most political (like the US). In other words, when you get extreme views becoming mainstream, you will get a reaction from the polar opposite extreme. And this is what is happening in politics pretty well worldwide.

So Kelly is right when he contends that Trump is the manifestation of a reaction to left-wing ideologies, but he leaves a lot out. If one goes back to the ‘definition’ of liberalism, scribed by Kelly himself, the word ‘equality’ tends to stick in one’s craw. Inequality is arguably the biggest issue in the US, and it has been exacerbated by recent events. Even in the pandemic, which one assumes is indiscriminate, Black deaths have outnumbered White deaths, which suggests that health care is not equitable.

It would seem that people (well, conservative political commentators at least) have already forgotten both the cause and the consequences of the GFC (Global Financial Crisis). The GFC hit middle America hard, and it is their hardship that Trump exploited. So, the so-called ‘decadence of liberalism’ is a straw man that hides the discontent caused by the sheer greed of the people whom Trump and his ‘Tea Party’ allies really represent.

Kelly argues that ‘aggressive progressivism’ is one, if not ‘the’ cause of the ‘assault on liberalism’, to use his own words. He doesn’t say, but one assumes by ‘aggressive progressivism’, he’s talking about the strong push for renewable energy sources in response to what he calls ‘climate change alarmists’. Curiously, it’s been reported in the last week that industry leaders (you know, the ones who vote for conservative governments) are pushing for more investment in renewable resources. So we have industry groups attempting to lead the (conservative) Australian government, following the paralysis of the last decade by consecutive governments on both sides.

Kelly also argues that ‘individualism’ is one of the factors, along with ‘multiculturalism’, which he denigrates. In Australia, I’ve witnessed at least 3 waves of immigration, all of which have brought out the best and the worst in people. But generally people have got along fine, because we tend to live and let live. As long as people from all backgrounds have the same access to health care, education and job opportunities, there is very little of the societal dislocation that the xenophobes warn us about. There is inequality, especially among indigenous Australians, and I think that is why the recent protests in America have resonated here. Equality, I believe, starts with education. There is an elitism around education here and it is a political minefield. But the ideals of liberalism, expressed so succinctly by Kelly, surely start with education.

If one takes a broad historical perspective, it’s generally the ideas and ideals of people on the Left of politics that develop into social norms, even for conservatives of later generations. This is arguably how liberalism has evolved and will continue to evolve. Importantly, it’s dynamic, not static.

Saturday 30 May 2020

How do we understand each other?


This is the latest Question of the Month from Philosophy Now (Issue 137 April/May 2020), so answers will appear in Issue 139 (Aug/Sep 2020). It just occurred to me that I may have misread the question and the question I've answered is: How CAN we understand each other? Whatever, it's still worthy of a post, and below is what I wrote: definitely philosophical with psychological underpinnings and political overtones. There’s a thinly veiled reference to my not-so-recent post on Plato, and the conclusion was unexpected.


This is possibly the most difficult question I’ve encountered on Question of the Month, and I’m not sure I have the answer. If there is one characteristic that defines humans, it’s that we are tribal to the extent that it can define us. In almost every facet of our lives we create ingroups and outgroups, and it starts in childhood. If one watches the so-called debates that occur in parliament (at least in Australia), it can remind one of one’s childhood experiences at school. In current political discourse, if someone proposes an action or a policy, it is reflexively countered by the opposition, irrespective of its merit.

But I’ve also observed this in the workplace, working on complex engineering projects, where contractual relationships can create similar divisions; where differences of opinion and perspective can escalate to irrational opposition that invariably leads to paralysis.

We’ve observed worldwide (at least in the West) divisions becoming stronger, reinforced by social media that is increasingly being used as a political weapon. We have situations where groups holding extreme yet strongly opposing views will both resist and subvert a compromise position proposed by the middle, which logically results in stalemate.

Staying with Australia (where I’ve lived since birth), we observed this stalemate in energy policy for over a decade. Every time a compromise was about to be reached, either someone from the left side or someone from the right side would scuttle it, because they would not accept a compromise on principle.

But recently, two events occurred in Australia that changed the physical, social and political landscape. In the summer of 2019/2020, we witnessed the worst bushfire season, not only in my lifetime, but in recorded history since European settlement. And although there was some political sniping and blame-calling, all the governments, both Federal and State, deferred to the experts in wildfire and forestry management. What’s more, the whole community came together and helped out irrespective of political and cultural differences. And then the same thing happened with the COVID-19 crisis. There was broad bipartisan agreement on formulating a response, and the medical experts were not only allowed to do their job but to dictate policy.

Plato was critical of democracies and argued for a ‘philosopher-king’. We don’t have philosopher-kings, but we have non-ideological institutions with diverse scientific and technical expertise. I would contend that ‘understanding each other’ starts with acknowledging one’s own ignorance. 


Saturday 23 May 2020

Quantum mechanics, entanglement, gravity and time

I wrote a post on Louisa Gilder’s well researched book, The Age of Entanglement, 10 years ago, when I acquired it (copyright 2008). I started rereading it after someone on Quora, with more knowledge than me, challenged the veracity of Bell’s theorem, also known as Bell’s Inequality, which really changed our perception of quantum phenomena at its foundations. Gilder’s book is really all about Bell’s theorem and its consequences, whilst covering the history of quantum mechanics over most of the 20th Century, from Bohr through to Feynman and beyond.

Gilder is not a physicist, from what I can tell, yet the book is very well researched, with copious notes and references, and she garnered accolades from science publications as well as literary reviewers. Her exposition of Bell’s theorem, which she provides very early in the book, is technically correct to the best of my knowledge.

She goes to some length to explain that the resolution of Bell’s theorem is not the obvious intuitive answer: that entangled particles are like a pair of shoes separated in space and time, so that if you find the right-handed shoe you automatically know that the other one must be left-handed. This is what my interlocutor on Quora was effectively claiming. No, according to Gilder, and everything else I’ve read on this subject, Bell’s theorem is akin to finding more coincidences than one would expect to find by chance. The inequality means that if results are found on one side of the inequality, then the intuitive scenario is correct, and if they are on the other side, then the QM world obeys rules not found in classical physics.
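To make the ‘two sides of the inequality’ concrete, here is the CHSH form of Bell’s inequality (a standard formulation, not quoted from Gilder). With correlations E measured at detector settings a, a′ on one side and b, b′ on the other:

$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b')$$

Any local, ‘pair of shoes’ explanation requires |S| ≤ 2, whereas quantum mechanics predicts values up to 2√2 ≈ 2.83 for entangled particles. Experiments consistently land on the quantum side.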

The result is called ‘non-local’, which is the opposite of ‘local’, a term with a specific meaning in QM. Local means that objects are only affected by ‘signals’ that travel at the speed of light. Non-local means that objects show a connectivity that is not dependent on lightspeed communication or linkage.

It was Schrodinger who coined the term ‘entanglement’, claiming that it was the defining characteristic of QM.

I would not call that ‘one’ but rather ‘the’ characteristic trait of quantum mechanics. The one that enforces its entire departure from classical lines of thought.

I’ve also recently read an e-book called An Intuitive Approach to Quantum Field Theory by Toni Semantana (only available as an e-book, 2019), so it’s very recent. It’s very good in that Semantana obviously knows what he’s talking about, but, even though it has minimal mathematical formulae, it’s not easy to follow. Nevertheless, he covers esoteric topics like the Higgs field, gauge theories, Noether’s theorem (very erudite) and Feynman diagrams. It made me realise how little I know. Its relevance to this topic is that he doesn’t discuss entanglement at all.

Back to Gilder, and it’s obvious that you can’t discuss entanglement and locality (or non-locality) without talking about time. If I can digress, someone else on Quora provided a link to an essay by J.C.N. Smith called Time – Illusion and Reality. Smith said you won’t find a definition of time that doesn’t include clocks or things that move. In fact, I’ve come across a few people who claim that, without motion, time has no reality. 

However, I have a definition that involves light. Basically, time is the separation between events as measured by light. This stems from the startling yet obvious fact that if lightspeed were not finite (i.e. instantaneous), then everything would happen at once. And, because lightspeed is the same for all observers, it determines the time difference between events, even though the time measured may differ for different observers, as per Einstein’s special theory of relativity. (The spacetime interval between events is the same for all observers.)
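That parenthetical point can be put as an equation (standard special relativity, not from Smith’s essay): two observers may disagree about the time Δt and the distance Δx between two events, but they always agree on the interval

$$(\Delta s)^2 = (c\,\Delta t)^2 - (\Delta x)^2$$

For light itself, cΔt = Δx, so the interval is zero, which is partly why light can serve as the universal yardstick for separating events.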

When I was in primary school at the impressionable age of 10 or 11, I was introduced to relativity theory, without being told that is what it was. Nevertheless, it had such an impact on my still-developing brain that I’ve never forgotten it. I can’t remember the context, but the teacher (Mr Hinton) told us that if you travel fast enough clocks will slow down and so will your heart. I distinctly remember trying to mentally grasp the concept and I found that I could if time was a dimension and as you sped up the seconds, or whatever time was measured in, they became more frequent between each heartbeat, so, by comparison, your heart slowed down. One of the other students made the comment that ‘if a plane could fly fast enough it would come back to land before it took off’. I’m unsure if that was a product of his imagination or if he’d come across it somewhere else, which was the impression he gave at the time. Then, thinking aloud, I said, It’s impossible to go faster than time, as if time and speed were interdependent. And someone near me turned, in a light-bulb moment, and said, You’re right.

My attempt at grasping the concept was flawed, but my comment was prescient. You can’t travel faster than time because you can’t travel faster than light. For a photon of light, time is zero. The link between time and light is an intrinsic feature of the Universe, and was a major revelation of Einstein’s theory of relativity.
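The claim that time is zero for a photon follows from the textbook time-dilation formula (again, standard relativity rather than anything quoted above): a clock moving at speed v ticks off proper time

$$\Delta\tau = \Delta t\,\sqrt{1 - v^{2}/c^{2}}$$

which shrinks to zero as v approaches c. You can’t travel faster than time because nothing with mass can reach c, let alone exceed it.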

J.C.N. Smith argues in his essay that we have the wrong definition of time by referring to local events like the rotation of the planet or its orbit about the sun, or, even more locally, the motions of a pendulum or an atomic clock. He argues that the definition of time should be the configuration of the entire universe, because at any point in time it has a unique configuration, and, even though we can’t observe it completely, it must exist. 

There is a serious problem with this, because every observer of that configuration would see something completely different, even without relativistic effects. If you take the Magellanic Clouds (there are 2 of them), which you can see in the southern hemisphere with the naked eye on a cloudless, moonless night, you are looking 150,000 to 190,000 years into the past, which is roughly when Homo sapiens emerged from Africa. So an observer on a world in the Magellanic Clouds, looking at the Milky Way galaxy, would see us 150,000 to 190,000 years in the past. In other words, no observer in the Universe could possibly see the same thing at the same time if they are far enough apart.

However, Smith is right in the sense that the age of the Universe implies that there is a universal ‘now’, which is the edge of the Big Bang (because it’s still in progress). The Cosmic Microwave Background Radiation is the earliest light we can see (from 380,000 years after the Big Bang), yet our observation of it is part of our ‘now’.

This has implications for entanglement if it’s non-local. If Freeman Dyson is correct that QM describes the future and classical physics describes the past, then the collapse or decoherence of the wave function represents ‘now’. So ‘now’ for an observer is when a photon hits your retina and you immediately see into the past, whether the photon is part of a reflection in a mirror or it comes from the Cosmic Background Radiation. It’s also the point when an entangled quantum particle (which could be a photon or something else) ‘fixes’ the outcome of its entangled partner wherever in the Universe it may be.

If entangled particles are in the future until one of them is observed, then they imply a universal now. Or does it mean that it creates a link back in time across the Universe?

John Wheeler believed that there was a possibility of a connection between an observer and the distant past across the Universe, but he wasn’t thinking of entanglement. He proposed a thought experiment involving the famous double-slit experiment, whereby one makes an observation after the particle (electron or photon) has passed through the slit but before it hits the target (where we observe the outcome). He predicted that this would change the pattern from a wave going through both slits to a particle going through one. He was later vindicated (after his death). Wheeler argued that this would imply that there is a ‘backwards-in-time’ signal or acausal connection to the source. He argued that this could equally apply to photons from a distant quasar, gravitationally lensed by an intervening galaxy.

Wheeler’s thought experiment makes sense if the wave function of the particle exists in the future until it is detected, meaning before it interacts with a classical physics object. Entanglement also becomes ‘known’ after one of the entangled particles interacts with a classical physics object. Signals into the so-called past are not so mysterious if everything is happening in the future of the ‘observer’. Even microwaves from the Cosmic Background Radiation exist in our future until we ‘detect’ them.

Einstein’s special theory of relativity tells us that simultaneity can’t be determined, which seems to contradict the non-locality of entanglement according to Bell’s theorem. According to Einstein, ‘now’ is subjective, dependent on the observer’s frame of reference. This implies that someone’s future could be another person’s past, but this has implications for causality. No matter where an observer is in the Universe, everywhere they look is in their past. Now, as I explained earlier, their past may be different to your past but, because all observations are dependent on electromagnetic radiation, everything they ‘see’ has already happened.

The exception is the event horizon of a black hole. According to Viktor T Toth (a regular contributor to Quora), the event horizon is always in your future. This creates a paradox, because it is believed you could cross an event horizon without knowing it. On the other hand, an external observer would see you frozen in time. Kip Thorne argues there is no matter in a black hole, only warped spacetime. Most significantly, once you pass the event horizon, space effectively becomes uni-directional like time – you can’t go backwards the way you came.

As Toth has pointed out a number of times, Einstein’s theory of gravity (the general theory of relativity) is mathematically a geometrical theory. Toth also points out that ‘We can do quantum field theory just fine on the curved spacetime background of general relativity.’ Another contributor, Terry Bollinger, explains why general relativity is not quantum:

GR is a purely geometric theory, which in turn means that the gravity force that it describes is also specified purely in terms of geometry. There are no particles in gravity itself, and in fact nothing even slightly quantum.

In effect, Bollinger argues that quantum phenomena ‘sit’ on top of general relativity. I contend that gravity ultimately determines the rate of time, and QM uses a ‘clock’ that exists outside of Hilbert space where QM ‘sits’ (according to Roger Penrose, as well as Anil Ananthaswamy, who writes for New Scientist). 

So what happens inside a black hole, which requires a theory of quantum gravity? As Freeman Dyson observed, no one can get inside a black hole to report or perform an experiment. But, if it’s always in one’s future, then maybe quantum gravity has no time. John Wheeler and Bryce DeWitt famously attempted to formulate Einstein’s theory of general relativity (gravity) in the same form as electromagnetism, and time (denoted as t) simply disappeared. And as Paul Davies pointed out in The Goldilocks Enigma, in quantum cosmology (as per the Wheeler-DeWitt equation), time vanishes. But, if quantum cosmology is attempting to describe the future, then maybe one should expect time to disappear.



Another thought experiment: if you take one of a pair of entangled particles to the other side of the visible universe (which would take something like the age of the Universe), the two still ‘link’ or ‘connect’ instantly and non-locally, but it required less than lightspeed to separate them. So you won’t achieve instantaneous transmission, even in principle, because you have to wait until the entangled ‘partner’ arrives at its destination. Or, as explained in the video below, the ‘correlation’ can only be checked in classical physics.

Addendum: This is the best explanation of QM entanglement and Bell’s Theorem (for laypeople) that I’ve seen: [video embedded in the original post]




Monday 18 May 2020

An android of the seminal android storyteller

I just read a very interesting true story about an android built in the early 2000s based on the renowned sci-fi author Philip K Dick, both in personality and physical appearance. It was displayed in public at a few prominent events, where it interacted with the public in 2005, then was lost on a flight between Dallas and Las Vegas in 2006, and has never been seen since. The book is called Lost In Transit: The Strange Story of the Philip K Dick Android by David F Duffy.

You have to read the back cover to know it’s non-fiction published by Melbourne University Press in 2011, so surprisingly a local publication. I bought it from my local bookstore at a 30% discount price as they were closing down for good. They were planning to close by Good Friday but the COVID-19 pandemic forced them to close a good 2 weeks earlier and I acquired it at the 11th hour, looking for anything I might find interesting.

To quote the back cover:

David F Duffy was a postdoctoral fellow at the University of Memphis at the time the android was being developed... David completed a psychology degree with honours at the University of Newcastle [Australia] and a PhD in psychology at Macquarie University, before his fellowship at the University of Memphis, Tennessee. He returned to Australia in 2007 and lives in Canberra with his wife and son.

The book is written chronologically and is based on extensive interviews with the team of scientists involved, as well as Duffy’s own personal interaction with the android. He had an insider’s perspective as a cognitive psychologist who had access to members of the team while the project was active. Like everyone else involved, he is a bit of a sci-fi nerd with a particular affinity and knowledge of the works of Philip K Dick.

My specific interest is in the technical development of the android and how its creators attempted to simulate human intelligence. As a cognitive psychologist, with professionally respected access to the team, Duffy is well placed to provide some esoteric knowledge to an interested bystander like myself.

There were effectively 2 people responsible (or 2 team leaders), David Hanson and Andrew Olney, who were brought together by Professor Art Graesser, head of the Institute of Intelligent Systems, a research lab in the psychology building at the University of Memphis (hence the connection with the author).

Hanson is actually an artist, and his specialty was building ‘heads’ with humanlike features and humanlike abilities to express facial emotions. His heads included mini-motors that pulled on a ‘skin’, which could mimic a range of facial movements, including talking.

Olney developed the ‘brains’ of the android that actually resided on a laptop and was connected by wires going into the back of the android’s head. Hanson’s objective was to make an android head that was so humanlike that people would interact with it on an emotional and intellectual level. For him, the goal was to achieve ‘empathy’. He had made at least 2 heads before the Philip K Dick project.

Even though the project got the ‘blessing’ of Dick’s daughters, Laura and Isa, and access to an inordinate amount of material, including transcripts of extensive interviews, they had mixed feelings about the end result, and, tellingly, they were ‘relieved’ when the head disappeared. It suggests that it’s not the way they wanted him to be remembered.

In a chapter called Life Inside a Laptop, Duffy gives a potted history of AI, specifically in relation to the Turing test, which challenges someone to distinguish an AI from a human. He also explains the 3 levels of processing that were used to create the android’s ‘brain’. The first level was what Olney called ‘canned’ answers, which were pre-recorded answers to obvious questions and interactions, like ‘Hi’, ‘What’s your name?’, ‘What are you?’ and so on. Another level was ‘Latent Semantic Analysis’ (LSA), which was originally developed in a lab in Colorado, with close ties to Graesser’s lab in Memphis, and was the basis of Graesser’s pet project, ‘AutoTutor’, with Olney as its ‘chief programmer’. AutoTutor was an AI designed to answer technical questions as a ‘tutor’ for students in subjects like physics.

To create the Philip K Dick database, Olney downloaded all of Dick’s opus, plus a vast collection of transcribed interviews from later in his life. The author conjectures that ‘There is probably more dialogue in print of interviews with Philip K Dick than any other person, alive or dead.’

The third layer ‘broke the input (the interlocutor’s side of the dialogue) into sections and looked for fragments in the dialogue database that seemed relevant’ (to paraphrase Duffy). Duffy gives a cursory explanation of how LSA works – a mathematical matrix using vector algebra – that’s probably a little too esoteric for the content of this post.
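Duffy’s description is enough to sketch the general mechanism for the curious. The following is my own toy reconstruction of LSA-style retrieval using off-the-shelf tools – it is not Olney’s code, and the corpus and query strings are illustrative stand-ins:

```python
# A minimal sketch of LSA-style retrieval, as described above -- not Olney's
# actual code. Corpus and query strings are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for the database built from Dick's novels and interview transcripts.
corpus = [
    "Reality is that which, when you stop believing in it, does not go away.",
    "The android wondered whether its memories had been implanted.",
    "In the interview he talked about time, identity and simulation.",
]

# Build a term-document matrix, then project it into a low-dimensional
# 'semantic' space with truncated SVD -- the vector algebra Duffy alludes to.
vectoriser = TfidfVectorizer(stop_words="english")
term_doc = vectoriser.fit_transform(corpus)
svd = TruncatedSVD(n_components=2)  # real systems use hundreds of dimensions
doc_vectors = svd.fit_transform(term_doc)

def most_relevant_fragment(question: str) -> str:
    """Map the interlocutor's question into the same semantic space and
    return the nearest fragment, which the android would then riff on."""
    query_vector = svd.transform(vectoriser.transform([question]))
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    return corpus[int(scores.argmax())]

print(most_relevant_fragment("What is reality?"))
```

The point of the dimensional reduction is that a question and a fragment can match on meaning without sharing exact words – which also suggests why the android could riff plausibly while, as argued below, having no idea what it was talking about.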

In practice, this search and synthesise approach could create a self-referencing loop, where the android would endlessly riff on a subject, going off on tangents, that sounded cogent but never stopped. To overcome this, Olney developed a ‘kill switch’ that removed the ‘buffer’ he could see building up on his laptop. At one display at ComicCon (July 2005) as part of the promotion for A Scanner Darkly (a rotoscope movie by Richard Linklater, starring Keanu Reeves), Hanson had to present the android without Olney, and he couldn’t get the kill switch to work, so Hanson stopped the audio with the mouth still working and asked for the next question. The android simply continued with its monolithic monologue which had no relevance to any question at all. I think it was its last public appearance before it was lost. Dick’s daughters, Laura and Isa, were in the audience and they were not impressed.

It’s a very informative and insightful book, presented like a documentary without video, capturing a very quirky, unique and intellectually curious project. There is a lot of discussion about whether we can produce an AI that can truly mimic human intelligence. For me, the pertinent word in that phrase is ‘mimic’, because I believe that’s the best we can do, as opposed to having an AI that actually ‘thinks’ like a human. 

In many parts of the book, Duffy compares what Graesser’s team is trying to do with LSA with how we learn language as children, where we create a memory store of words, phrases and stock responses, based on our interaction with others and the world at large. It’s a personal prejudice of mine, but I think that words and phrases have a ‘meaning’ to us that an AI can never capture.

I’ve contended before that language for humans is like ‘software’, in that it is ‘downloaded’ from generation to generation. I believe that this is unique to the human species, and it goes further than communication, which is its obvious genesis. It’s what we literally think in. The human brain can connect and manipulate concepts in all sorts of contexts that go far beyond the simple need to tell someone what you want them to do in a given situation, or to ask what they did with their time the day before or last year or whenever. We can relate concepts that have a spiritual connection, or are mathematical, or are stories. In other words, we can converse on topics that relate not just to physical objects, but are products of pure imagination.

Any android follows a set of algorithms that are designed to respond to human generated dialogue, but, despite appearances, the android has no idea what it’s talking about. Some of the sample dialogue that Duffy presented in his book, drifted into gibberish as far as I could tell, and that didn’t surprise me.

I’ve explored the idea of a very advanced AI in my own fiction, where ‘he’ became a prominent character in the narrative. But he (yes, I gave him a gender) was often restrained by rules. He can converse on virtually any topic because he has a Google-like database and he makes logical sense of someone’s vocalisations. If they are not logical, he’s quick to point it out. I play cognitive games with him and his main interlocutor because they have a symbiotic relationship. They spend so much time together that they develop a psychological interdependence that’s central to the narrative. It’s fiction, but even in my fiction I see a subtle difference: he thinks and talks so well, he almost passes for human, but he is a piece of software that can make logical deductions based on inputs and past experiences. Of course, we do that as well, and we do it so well it separates us from other species. But we also have empathy, not only with other humans, but other species. Even in my fiction, the AI doesn’t display empathy, though he’s been programmed to be ‘loyal’.

Duffy also talks about the ‘uncanny valley’, which I’ve discussed before. Apparently, Hanson believed it was a ‘myth’ and that there was no scientific data to support it. Duffy appears to agree. But according to a New Scientist article I read in Jan 2013 (by Joe Kloc, a New York correspondent), MRI studies tell another story. Neuroscientists believe the symptom is real and is caused by a cognitive dissonance between 3 types of empathy: cognitive, motor and emotional. Apparently, it’s emotional empathy that breaks the spell of suspended disbelief.

Hanson claims that he never saw evidence of the ‘uncanny valley’ with any of his androids. On YouTube you can watch a celebrity android called Sophia, and I didn’t see any evidence of the phenomenon with her either. But I think the reason is that none of these androids appear human enough to evoke the response. The uncanny valley is a sense of unease and disassociation we would feel because it’s unnatural; similar to seeing a ghost - a human in all respects except actually being flesh and blood.

I expect that, as androids like the Philip K. Dick simulation and Sophia become more commonplace, the sense of ‘unnaturalness’ will dissipate - a natural consequence of habituation. Androids in movies don’t have this effect, but then a story is already a medium of suspended disbelief.

Sunday 10 May 2020

Logic, analysis and creativity

I’ve talked before about the apparent divide between arts and humanities, and science and technology. Someone once called me a polymath, but I don’t think I’m expert enough in any field to qualify. However, I will admit that, for most of my life, I’ve had a foot in both camps, to use a well-worn metaphor. At the risk of being self-indulgent, I’m going to discuss this dichotomy in reference to my own experiences.

I’ve worked in the engineering/construction industry most of my adult life, yet I have no technical expertise there either. Mostly, I worked as a planning and cost control engineer, which is a niche activity that I found I was good at. It also meant I got to work with accountants and lawyers as well as engineers of all disciplines, along with architects. 

The reason I bring this up is because planning is all about logic – in fact, that’s really all it is. At its most basic, it’s a series of steps, some of which are sequential and some in parallel. I started doing this before computers did a lot of the work for you. But even with computers, you have to provide the logic; so if you can’t do that, you can’t do professional planning. I make that distinction because it was literally my profession.
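
As an illustration of the kind of logic planning software encodes (the tasks, durations and dependencies below are invented for the example, not from any real project), the earliest finish of a task is its own duration plus the latest finish among its predecessors; tasks with no dependency between them can proceed in parallel:

    # Toy schedule logic: durations in days, predecessors define the sequence.
    # All names and numbers are hypothetical.
    from functools import lru_cache

    durations = {"design": 5, "procure": 10, "build": 15, "test": 4}
    predecessors = {"design": [], "procure": ["design"],
                    "build": ["design"], "test": ["procure", "build"]}

    @lru_cache(maxsize=None)
    def earliest_finish(task: str) -> int:
        # A task finishes after its slowest predecessor, plus its own duration.
        return durations[task] + max(
            (earliest_finish(p) for p in predecessors[task]), default=0)

    print(earliest_finish("test"))  # 24: 'procure' and 'build' run in parallel

The same few lines scale to hundreds of tasks, which is essentially what scheduling software automates once you’ve supplied the logic.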

In my leisure time, I write stories and that also requires a certain amount of planning, and I’ve found there are similarities, especially when you have multiple plot lines that interweave throughout the story. For me, plotting is the hardest part of storytelling; it’s a sequential process of solving puzzles. And science is also about solving puzzles, all of which are beyond my abilities, yet I love to try and understand them, especially the ones that defy our intuitive sense of logic. But science is on a different level to both my professional activities and my storytelling. I dabble at the fringes, taking ideas from people much cleverer than me and creating a philosophical pastiche.

Someone on Quora (a student) commented once that studying physics exercised his analytical skills, which he then adapted to other areas of his life. It occurred to me that I have an analytical mind and that is why I took an interest in physics rather than the other way round. Certainly, my work required an analytical approach and I believe I also take an analytical approach to philosophy. In fact, I’ve argued previously that analysis is what separates philosophy from dogma. Anyway, I don’t think it’s unusual for us, as individuals, to take a skill set from one activity and apply it to another apparently unrelated one.

I once wrote a post about the 3 essential writing skills: character development, evoking emotion and creating narrative tension. The key to all of these is character and, if one were to distil out the most essential skill of all, it would be to write believable dialogue, as if it were spontaneous, meaning unpremeditated, yet not boring or irrelevant to the story. I’m not at all sure it can be taught. Don Burrows once said that jazz can’t be taught, because it’s improvisation by its very nature, and I’d argue the same applies to writing dialogue. I’ve always felt that writing fiction has more in common with musical composition than with writing non-fiction. In both cases, ideas can come unbidden into one’s mind, sometimes when one is asleep, and they’re both essentially emotive mediums.

But science too has its moments of creativity, indeed sheer genius: a combination of sometimes arduous analysis and inspired intuition.

Wednesday 8 April 2020

Secret heroes

A writer can get attached to characters, and it tends to sneak up on you (speaking for myself); they are not necessarily the characters you expect to affect you.

All writers who get past the ego phase will tell you the characters feel like they exist separately from them. By the ego phase, I mean you’ve learned how to keep yourself out of the story, though you may suffer lapses – the best fiction is definitely not about you.

People will tell you to base characters and events on your own experience, or otherwise on people you know. I expect some writers might do that, and I’ve even seen advice, if writing a screenplay, to imagine an actor you’ve seen playing the role. If I find myself doing that, then I know I’ve lost the plot, literally rather than figuratively.

I borrow names from people I’ve known, but the characters don’t resemble them at all, except in ethnicity. For example, if I have an Indian character, I will use the name of an Indian I knew. A name is not unique; most of us know more than one John, for example, and they may have nothing in common.

I once worked with someone who had a very unusual name, Essayas Alfa, and I used both of his names in the same story. Neither character was anything like the guy I knew, except that the character called Essayas was African, as was my co-worker; one was a sociopath and the other a really nice bloke. A lot of names I make up, including all the Kiri names, and even Elvene. I was surprised to learn it was a real name; at least I got the gender right.

The first female character I ever created, when I was learning my craft, was based on someone I knew, though they had little in common except their age. It was like I was using her as an actor for the role; I’ve never done that since. A lot of my main characters are female, which is a bit of a gamble, I admit. Creating Elvene was liberating, and I’ve never looked back.

If you can have dreams occupied by strangers, then characters in fiction are no different. You can’t explain it if you haven’t experienced it. So how can you get attached to a character who is a figment of your mind? Well, not necessarily in the way you think – it’s not an infatuation. I can’t imagine falling in love with a character I created, though I can imagine falling in love with an actor playing that character, because she’s no longer mine (assuming the character is female).

And I’ve got attached to male characters as well. These are the characters who have surprised me. They’ve risen above themselves and achieved something I never expected of them. They weren’t meant to be the hero of the story, yet they excel themselves, often by making a sacrifice. They go outside their comfort zone, as we like to say, and become courageous, not by overcoming an adversary but by overcoming a fear. And then I feel like I owe them, as illogical as that sounds, because of what I put them through. They are the secret heroes of my stories.


Tuesday 31 March 2020

Plato’s 2400 year legacy

I’ve said this before, but it’s worth repeating: no one totally agrees with everything said by someone else. In fact, each of us changes our views as we learn and progress and are exposed to new ideas. It’s okay to cherry-pick; in fact, it’s normal. All the giants in science, mathematics, literature and philosophy borrowed from, and built on, the giants who went before them.

I’ve been reading about Plato in A.C. Grayling’s extensive chapter on him and his monumental status in Western philosophy (The History of Philosophy). According to Grayling, Plato was critical of his own ideas: his later writings challenged some of the tenets of his earlier writings. Plato is a seminal figure in Western culture; his famous Academy ran for almost 800 years, before the Christian Roman Emperor, Justinian, closed it down in 529 CE, because he considered it pagan. One must remember that it was in Roman-ruled Alexandria, in 415 CE, that Hypatia was killed by a Christian mob, which many believe foreshadowed the so-called ‘Dark Ages’.

Hypatia had good relations with the Roman Prefect of her time, and even corresponded with a bishop (Synesius of Cyrene), who clearly respected, even adored her, as her former student. I’ve read the transcripts of some of his letters, courtesy of Michael Deakin’s scholarly biography. Deakin was an Honorary Research Fellow at the School of Mathematical Sciences of Monash University (Melbourne, Australia). Hypatia also taught Neoplatonist philosophy, as well as the works of Euclid, a former Librarian of Alexandria. On the other hand, the bishop who is historically held responsible for her death (Cyril) was canonised. It’s generally believed that her death was a ‘surrogate’ attack on the Prefect.

Returning to my theme, the Academy of course changed and evolved under various leaders, which led to what’s called Neoplatonism. It’s worth noting that both Augustine and Aquinas were influenced by Neoplatonism, because Plato’s perfect world of ‘forms’ and his belief in an immaterial soul lend themselves to Christian concepts of Heaven and life after death.

But I would argue that the unique Western tradition that combines science, mathematics and epistemology into a unifying discipline called physics has its origins in Plato’s Academy. It was a prerequisite, specified by Plato, that those entering the Academy have a knowledge of mathematics. The one remnant of Plato’s philosophy that stubbornly resists being relegated to history as an anachronism is mathematical Platonism, though it probably means something different from Plato’s original concept of ‘forms’.

In modern parlance, mathematical Platonism means that mathematics has an existence independent of the human mind, and even of the Universe. To quote Richard Feynman (who wasn’t a Platonist) from his book, The Character of Physical Law, in the chapter titled The Relation of Mathematics to Physics:

...what turns out to be true is that the more we investigate, the more laws we find, and the deeper we penetrate nature, the more this disease persists. Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics... Why? I have not the slightest idea. It is only my purpose to tell you about this fact.

The ‘disease’ he’s referring to and the ‘fact’ he can’t explain are best expressed in his own words:

The strange thing about physics is that for the fundamental laws we still need mathematics.

To put this into context, he argues that when you describe a physical phenomenon mathematically, like the collision between billiard balls, the fundaments are not numbers or formulae but the actual billiard balls themselves (my mundane example, not his). But when it comes to the fundaments of fundamental laws, like the wave function in Schrödinger’s equation (again, my example), the fundaments remain mathematical and are not physical objects per se.
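
For reference, the time-dependent Schrödinger equation, where the ‘fundament’, the wave function \Psi, is a mathematical object with no direct physical counterpart:

    i\hbar\,\frac{\partial}{\partial t}\Psi(x,t) = \hat{H}\,\Psi(x,t)

Here \hat{H} is the Hamiltonian (energy) operator, and only |\Psi|^2, interpreted as a probability density, connects the formalism to anything observable.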

In his conclusion, towards the end of a lengthy chapter, he says:

Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.

I’m not aware of any physicist who would disagree with that last statement, but there is strong disagreement over whether mathematical language is simply the only language able to describe nature, or whether it’s somehow intrinsic to nature. Mathematical Platonism is unequivocally the latter position.

Grayling’s account of Plato says almost nothing about the mathematical and science aspect of his legacy. On the other hand, he contends that Plato formulated and attempted to address three pertinent questions:

What is the right kind of life, and the best kind of society? What is knowledge and how do we get it? What is the fundamental nature of reality?

In the next paragraph he puts these questions into perspective for Western culture.

Almost the whole of philosophy consists in approaches to the related set of questions addressed by Plato.

Grayling argues that the questions need to be addressed in reverse order. To some extent, I’ve already addressed the last two: knowledge of the natural world has become increasingly dependent on a knowledge of mathematics. Grayling doesn’t mention that Plato based his Academy on Pythagoras’s quadrivium (arithmetic, geometry, astronomy and music), after deliberately seeking out Pythagoras’s best student, Archytas of Tarentum. Pythagoras is remembered for contending that ‘all is number’, though his ideas were more religiously motivated than scientific.

But the first question is the one that was taken up by subsequent philosophers, including his most famous student, Aristotle, who arguably had a greater and longer-lasting influence on Western thought than his teacher. Aristotle, however, is a whole other chapter in Grayling’s book, as you’d expect, so I’ll stick to Plato.

Plato argued for an ‘aristocracy’: government run by a ‘philosopher-king’, but based on merit rather than hereditary rule. In fact, if one goes into the details, he effectively argued for leadership selected on a eugenic basis, where prospective leaders were chosen in early childhood and educated to rule.

Plato was famously critical of democracy (in his time), because it was instrumental in the execution of his friend and mentor, Socrates. Plato predicted that democracy leads to either anarchy or rule of the wealthy over the poor. In the case of anarchy, a strongman logically takes over and you get ‘tyranny’, which (according to Plato) is the worst form of government. The former (anarchy) is what we’ve recently witnessed in the so-called ‘Arab spring’ uprisings.

The latter (rule by the wealthy) is arguably what has occurred in America, where lobbying by corporate interests increasingly shapes policy. This is happening in other ‘democracies’, including Australia. To give an example, our so-called ‘water policy’ is driven by prioritising the sale of ‘water rights’ to overseas investors over ecological and community needs, despite Australia being the driest continent in the world (after Antarctica). Keeping people employed is the mantra of all parties. In other words, as long as the populace is gainfully employed, earning money and servicing the economy, policy deliberations don’t need to take them into account.

As Clive James once pointed out, democracy is the exception, not the norm. Democracies in the modern world have evolved from a feudalistic model, predominant in Europe up to the industrial revolution, when social engineering ideologies like fascism and communism took over from monarchism. It arguably took 2 world wars before we gave up traditional colonial exploitation, and now we have exploitation of a different kind, which is run by corporations rather than nations. 

I acknowledge that democracy is the best model for government that we have, but those of us lucky enough to live in one tend to take it for granted. In Athens, in the original democracy of Plato’s time, which was only open to males and excluded slaves, there was a broad separation between the aristocracy and the people who provided all the goods and services, including the army. One can see parallels in today’s world, where the aristocracy has been replaced by corporate leaders, and the interdependence and political friction between these broad categories remain. In the Athenian Assembly (according to the historian Philip Matyszak), if you weren’t an expert in the field you pontificated on, like shipbuilding (his example), you were generally given short shrift.

I sometimes think this is the missing link in today’s governance, which has been further eroded by social media. There are experts in today’s world on topics like climate change, species extinction and water conservation (to provide a parochial example), but they are often ignored, sidelined or censored. As recently as a couple of decades ago, scientists at CSIRO (Australia’s internationally renowned scientific research organisation) were barred from talking about climate change, because they were bound by their conditions of employment not to comment publicly on political issues. And climate change was deemed a political issue, not a scientific one, by the then Cabinet, who were predominantly climate change deniers (including the incumbent PM).

In contrast, the recent bush fire crisis and the current COVID-19 crisis have seen government bodies, at both the Federal and State level, defer to expertise in their relevant fields. To return to my opening paragraph, I think we can cherry-pick some of Plato’s ideas in the context of a modern democracy. I would like to see governments focus more on expertise and long-term planning beyond a government’s term in office. We can’t have ‘philosopher kings’, but we do have ‘elite’ research institutions that can work with private industries in creating more eco-friendly policies that aren’t necessarily governed by the sole criterion of increasing GDP in the short term. I would like to see more bipartisanship rather than a reflex opposition to every idea that is proposed, irrespective of its merits.

Wednesday 4 March 2020

Freeman Dyson: 15 December 1923 – 28 February 2020

I only learned of Dyson's passing yesterday, quite by accident. I didn't hear about it through any news service.

In this video, Dyson describes the moment on a Greyhound bus in 1948 when he was struck by lightning (to use a suitably vivid metaphor), an insight that eventually gave rise to a Nobel prize in physics for Feynman, Schwinger and Tomonaga, but not for himself.

It was the unification of quantum mechanics (QM) with Einstein’s special theory of relativity. Unification with the general theory of relativity (GR) still eludes us, and Dyson heretically argued (in another video) that it may never happen. Dyson’s other significant contribution to physics was to prove (along with Andrew Lenard, in 1967) how the Pauli Exclusion Principle stops you from sinking into everything you touch.
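
A rough statement of the Dyson–Lenard result (my paraphrase, not Dyson’s wording): for N charged particles whose negatively charged species are fermions, the ground-state energy E is bounded below by

    E \geq -C\,N

for some constant C. Energy scaling only linearly with particle number is what keeps bulk matter stable; without the Exclusion Principle, the energy would grow faster than linearly with N, and matter would collapse in on itself.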

I learned only a year or so ago that Dyson believed that QM is distinct from classical physics, contrary to accepted wisdom – a viewpoint I’ve long held myself. What’s more, Dyson argued that QM can only describe the future, while classical physics describes the past – another view I thought I held alone. In his own words:

What really happens is that the quantum-mechanical description of an event ceases to be meaningful as the observer changes the point of reference from before the event to after it. We do not need a human observer to make quantum mechanics work. All we need is a point of reference, to separate past from future, to separate what has happened from what may happen, to separate facts from probabilities.

Addendum: I came across this excellent obituary in the New York Times.