Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Thursday 24 December 2020

Does imagination separate us from AI?

I think this is a very good question, but it depends on how one defines ‘imagination’. I remember having a conversation (via email) with Peter Watson, who wrote an excellent book, A Terrible Beauty (about the minds and ideas of the 20th Century), which covered the arts and sciences with equal erudition, and very little of the politics and conflicts that we tend to associate with that century. In reference to the topic, he argued that imagination was a word past its use-by date, just like introspection and any other term that referred to an inner world. Effectively, he argued that because our inner world is completely dependent on our outer world, it’s misleading to use terms that suggest otherwise.

It’s an interesting perspective, not without merit, when you consider that we all speak and think in a language that is totally dependent on an external environment from our earliest years. 

 

But memory for us is not at all like memory in a computer, which provides a literal record of whatever it stores, including images, words and sounds. On the contrary, our memories of events are ‘reconstructions’, which tend to become less reliable over time. Curiously, the imagination apparently uses the same part of the brain as memory. I’m talking about semantic memory, not muscle memory, which is completely different, physiologically. So the imagination, from the brain’s perspective, is like a memory of the future. In other words, it’s a projection into the future of something we might desire or fear or just expect to happen. I believe that many animals have this same facility, which they demonstrate when they hunt or, alternatively, evade being hunted.

 

Raymond Tallis, who has a background in neuroscience and writes books as well as a regular column in Philosophy Now, had this to say, when talking about free will:

 

Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.

 

I find a correspondence here with Richard Feynman’s ‘sum over histories’ interpretation of quantum mechanics (QM). There are, in fact, an infinite number of possible paths in the future, but only one is ‘actualised’ in the past.

 

But the key here is imagination. It is because we can imagine a future that we attempt to bring it about - that's free will. And what we imagine is affected by our past, our emotions and our intellectual considerations, but that doesn't make it predetermined.

 

Now, recent advances in AI would appear to do something similar, in the form of making predictions based on recordings of past events. So what’s the difference? Well, if we’re playing a game of chess, there might not be a lot of difference, and AI has reached the stage where it can play even better than humans. There are even computer programs available now that try to predict what I’m going to write next, based on what I’ve already written. How do you know this hasn’t been written by a machine?
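
As an aside, the simplest form of such a predictor is just counting: a so-called Markov chain that suggests whichever word most often followed the current one in its training text. Here is a minimal sketch in Python (my own illustration, not how any particular product works):

```python
from collections import Counter, defaultdict

# Train a bigram (first-order Markov) model: for each word in the
# training text, count which words immediately follow it.
corpus = ("the cat sat on the mat and the cat slept on the mat "
          "while the cat sat on the floor").split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Suggest the continuation seen most often after 'word' in training."""
    options = following.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict("the"))   # -> 'cat' (follows 'the' three times above)
print(predict("sat"))   # -> 'on'
```

Modern systems use neural networks rather than raw counts, but the principle is the same: prediction from recorded past usage, with no grasp of meaning.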

 

Computers use data – lots of it – and use it mindlessly, which means the computer really doesn’t know what it means in the same way we do. A computer can win a game of chess, but it requires a human watching the game to appreciate what it actually did. In the same way that a computer can distinguish one colour from another, including different shades of a single colour, but without ever ‘seeing’ a colour the way we do.

 

So, when we ‘imagine’, we fabricate a mindscape that affects us emotionally. The most obvious examples are in art, including music and stories. We now have computers also creating works of art, including music and stories. But here’s the thing: the computer cannot respond to these works of art the way we do.

 

Imagination is one of the fundamental attributes that makes us human. An AI can and will (in the future) generate scenarios and select the one that produces the best outcome, given specific criteria. But, even in these situations, it is a tool that a human will use to analyse enormous amounts of data that would be beyond our capabilities. But I wouldn’t call it imagination, any more than I would say an AI could see colour.


Saturday 5 December 2020

Some (personal) Notes on Writing

This post is more personal, so don’t necessarily do what I’ve done. I struggled to find my way as a writer, and this might help to explain why. Someone recently asked me how to become a writer, and I said, ‘It helps if you start early.’ I started pre-high school, about age 8-9. I can remember writing my own Tarzan scripts and drawing my own superheroes.

 

Composition, as it was called then, was one of my favourite activities. At age 12 (first year high school), when asked to write about what we wanted to do as adults, I wrote that I wanted to write fiction. I used to draw a lot as a kid, as well. But, as I progressed through high school, I stopped drawing altogether and my writing deteriorated to the point that, by the time I left school, I couldn’t write an essay to save my life; I had constant writer’s block.

 

I was in my 30s before I started writing again and, when I started, I knew it was awful, so I didn’t show it to anyone. A couple of screenwriting courses (in my late 30s) were the best thing I ever did. With screenwriting, the character is all in what they say and what they do, not in what they look like. However, in my fiction, I describe mannerisms and body language as part of a character’s demeanour, in conjunction with their dialogue. Also, screenwriting taught me to be lean and economical – you don’t write anything that can’t be seen or heard on the screen. The main difference in writing prose is that you do all your writing from inside a character’s head; in effect, you turn the reader into an actor, subconsciously. Also, you write in real time, so it unfolds like a movie in the reader’s imagination.

 

I break rules, but only because the rules didn’t work for me, and I learned that the hard way. So I don’t recommend that you do what I do, because, from what I’ve heard and read, most writers don’t. I don’t write every day and I don’t do multiple drafts. It took me a long time to accept this, but it was only after I became happy and confident with what I produced. In fact, I can go weeks, even months, without writing anything at all and then pick it up from where I left off.

 

I don’t do rewrites because I learned the hard way that, for me, they are a waste of time. I do revisions instead – you can edit something forever without changing the story or its characters in any substantial way. I correct for inconsistencies and possible plot holes, but if you’re going to do a rewrite, you might as well write something completely different – that’s how I feel about it.

 

I recently saw a YouTube discussion in which a writer talked about his method. He said he did a lot of drafts, and there are a lot of highly successful writers who do (I’m not highly successful, yet I don’t think that’s the reason why). However, he said that if you pick something up you wrote some time ago, you can usually tell if it’s any good or not. Well, my writing passes that test for me.

 

I’m happiest when my characters surprise me, and, if they don’t, I know I’m wasting my time. I treat it like it’s their story, not mine; that’s the best advice I can give.

 

How to keep the reader engaged? I once wrote in another post that creating narrative tension is an essential writing skill, and there are a number of ways to do this. Even a slow-moving story can keep a reader engaged, if every scene moves the story forward. I found that keeping scenes short, like in a movie, and using logical sequencing so that one scene sets up the next, keeps readers turning the page. Narrative tension can be subliminally created by revealing information to the reader that the characters don’t know themselves; it’s a subtle form of suspense. Also, narrative tension is often manifest in the relationships between characters. I’ve always liked moral dilemmas, both in what I read (or watch) and what I write.

 

Finally, when I start off a new work, it will often take me into territory I didn’t anticipate; I mean psychological territory, as opposed to contextual territory or physical territory. 

 

A story has all these strands, and when you start out, you don’t necessarily know how they are going to come together – in fact, it’s probably better if you don’t. That way, when they do, it’s very satisfying and there is a sense that the story already existed before you wrote it. It’s like you’re the first to read it, not create it, which I think is a requisite perception.


Monday 30 November 2020

Social norms determine morality

The latest issue of Philosophy Now (Issue 140, Oct/Nov 2020) has Hegel as its theme. I confess that the only thing I really knew about Hegel was his ‘dialectic’ and that he influenced Marx, though, from memory, Marx claimed to have turned Hegel’s dialectic ‘on its head’. Hegel’s dialectic has relevance to politics and history because, basically, he claimed that if someone proposes a ‘thesis’, someone else will propose its ‘antithesis’, and we end up with a ‘synthesis’ of the two. Some people claim that this is how history has progressed, but I’m not so sure.

 

However, I do agree that if someone promotes an ideology or a social agenda, you will invariably get opposition to it, and the stronger the promotion, the stronger the opposition. We see this in politics a lot, but a good example is religion. Militant atheism only tends to occur in societies where you have militant fundamentalist religion, which is usually Christian, but could be Muslim. In societies where no one really cares about religion, no one cares too much about atheism either. 

 

I know this because I live in a culture where no one cares, and I’ve visited one where people do, which is America (at the dawn of the 21st Century). Mind you, I grew up in 1950s Australia, when there was a division between Catholic and Protestant that even affected the small rural town where I lived and was educated. That division pretty much evaporated in the 1960s, with a zeitgeist that swept the Western world. It was largely driven by post-war liberalism, the introduction of the contraceptive pill and a cultural phenomenon called rock and roll. But what I remember of growing up in the ’60s, leaving school, going to university and entering the workforce in a major city, was that we males grew our hair long and everyone, including women, questioned everything. We were rebellious: there were marches against our involvement in the Vietnam war, and an Australian academic by the name of Germaine Greer published a book called The Female Eunuch.

 

And all of this is relevant to the theme of my thesis, which is that morality is really about social norms, which is why morality evolves, and whether it evolves for the better or worse is dependent on a lot of factors, not least political forces and individuals’ perceptions of their own worth and sense of security within a social context.

 

But getting back to Hegel, many saw the rebellious attitude of the 1960s as a backlash against conservative forces, especially religiously based ones, that had arisen in the 1950s. And this, in turn, was a reaction to the forces of fascism that had ignited the most widespread and devastating conflict in the whole of human history. There was almost no one who had not been affected by it in Europe or Asia or North America. Even in far-flung Australia and New Zealand, it seemed that every family had a member, or knew someone, who had been directly involved in that war. My family was no exception, as I’ve written about elsewhere.

 

In the same issue of Philosophy Now, there is an article by Terrence Thomson (a PhD candidate at the Centre for Research in Modern European Philosophy in Kingston University, London) titled, Kant, Conflict & Universal History. I’ve written about Kant elsewhere, but not in this context. He’s more famously known for his epistemology, discussed in some detail in his Critique of Pure Reason (1781), which was the subject of my essay. But, according to Thomson, 3 years later (1784), he published an article in a ‘prominent intellectual newspaper’, titled Idea for a Universal History from a Cosmopolitan Perspective. Without going into too much detail, Kant coined a term, ‘unsociable sociability’, which he contended is ‘a feature of human social interaction’, and which he defined as the human “tendency to enter into society, a tendency that continually threatens to break up this society”. Quoting Thomson (interpreting Kant): ‘...it is a natural human inclination to connect with people and to be part of a larger whole; yet it is also part of our natural inclination to destroy these social bonds through isolationism and divisiveness.’ One has to look no further than the just-held, US presidential election and its immediate aftermath to see this in action. But one could also see this as an example of Hegel’s ‘dialectic’ in action.

 

As I explained in my introduction, Hegel argued that a ‘synthesis’ arises out of an opposition between a ‘thesis’ and its ‘antithesis’, but then the synthesis becomes a new ‘thesis’, which creates a new ‘antithesis’, and so the dialectic never stops. This could also be seen as similar to, if not the same as, the dynamic that Thomson attributes to Kant: the human inclination to ‘belong’ followed by an opposing inclination to ‘break those social bonds’.

 

I take a much simpler view, which is that humans are inherently tribal. And tribalism is a double-edged sword. It creates the division that Kant alludes to in his ‘unsociable sociability’, and it creates the antithesis of Hegel’s dialectic. We see this in the division created by religion throughout history, and not just the petty example I witnessed in my childhood. And now, in the current age, we have a new tribalism in the form of political parties, exemplified by the deep divisions in the US, even before Donald Trump exploited them to the full in his recently eclipsed, 4 year term.

 

And this brings me back, by a very convoluted route, to the subject of this essay: morality and social norms. Trump did his best to change social norms, and, by so doing, change his society’s moral landscape, whether intentionally or not. He made it socially acceptable to be disrespectful to ‘others’, which included women (‘grab them by the pussy’), immigrants, Muslims, the former President, anyone in the Democratic party, anyone in the GOP who didn’t support him, and his own Intelligence community and Defence personnel. He also made white supremacy and fringe conspiracy theorist groups feel legitimised. But, most significantly, beyond everything else, he propagated a social norm whereby you could dismiss any report by any authority whatsoever that didn’t fit in with your worldview – you could simply create your own ‘facts’.

 

In most societies, especially Western democratic societies, we expect social norms to evolve that make people feel more included and that constructively build collaborative relationships, because we know from experience that that is how we get things done. As a retired naval admiral and self-ascribed conservative pointed out (in a TV interview prior to the election), Trump did the exact opposite, both at home and abroad: he broke off relations and fomented division wherever and whenever he could. I’ve argued in previous posts that leaders bring out the best in the people they lead, which is how they are ultimately judged; contrarily, Trump brought out the worst in people.

 

In one of my better posts, I discussed at length how the case of a young woman, raped and fatally tortured on a bus in India, exposed the generational divide in social norms in that country at that time, and how that divide directly affected one’s perception of the morality of that specific incident.

 

From a Western perspective, especially given the recent ‘me-too’ movement, this is perverse. However, in the late 60s and early 70s, when I was entering adulthood, there was a double standard when it came to sexual behaviour. It was okay for men to have sex with as many partners as they could find, but it was not alright for women to indulge in the same activity. This led to men behaving in a more predatory way, and it was considered normal for women to be ‘seduced’, even if it was against their better intentions. The double standard of the day really didn’t encourage much alternative. The introduction of the contraceptive pill, I believe, was the game-changer, because, theoretically, women could now have the same sexual freedom as men without the constant fear of becoming pregnant, which was still a stigma at that time. Now, some of my generation may have a different rear-vision view of this, but I give it as an example of changing social norms occurring concordantly with changing moral perceptions.

 

I write science fiction, as a hobby or pastime, rather than professionally. But what attracted me to sci-fi as a genre was not so much the futuristic technologies one could conjure up, but the what-if societies that might exist on worlds isolated by astronomical distances. In a recent work, I explored a society that included clones (genetically engineered humans, so not copies per se). In this society, female clones are exploited because they have no family. Instead, they have guardianships that can be sold on to someone else, and this becomes a social norm that’s tacitly accepted. Logically, this leads to sexual exploitation. I admit to being influenced by Blade Runner 2049, though I go in a completely different direction, and my story is more of a psychological thriller than an action thriller. There is sexual exploitation on both sides: I have a man in authority having a sexual relationship with a character by blackmail; and I have a woman sexually exploiting a character so she can manipulate him into committing a crime. Neither of these scenarios was part of my original plot; they evolved in the way that stories do, and became core elements. In fact, it could be argued that the woman is even more evil than the man.

 

Both characters come undone in the end but, more important to me, the characters had to be realistic and not paper cut-outs. I asked someone who’d read it what they thought of the man in power, and they said, ‘Oh, people like him exist, even now.’


Wednesday 18 November 2020

Did mathematics create the universe?

The short answer is no; there is no ‘fire in the equations’. One needs to be careful not to conflate epistemology with ontology. Let’s look at the wave function (ψ), which is a fundamental entity in quantum mechanics (QM). It’s a mathematical formula that gives the probabilities of where a particle will be found when it is actually ‘observed’. However, there is also some debate about whether the wave function itself exists in reality.
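
To make that concrete (this is the standard Born rule, added here for illustration, not anything specific to the original question): the wave function itself is never directly measured; what it yields is a probability density.

```latex
% Born rule: the wave function yields probabilities, never itself
P(x)\,dx = |\psi(x)|^2\,dx
```

Here ψ(x) is a complex-valued amplitude and P(x) is the probability of finding the particle near position x. Only |ψ|² is empirically accessible, which is precisely why the ontological status of ψ itself can be debated at all.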

 

Mathematics, from a human perspective, is a set of symbols that can be arranged in formulae that can describe and predict physical phenomena. The symbols are human-made, but the relationships that are entailed in the formulae are not. In other words, mathematical relationships appear to have a life of their own, independent of human minds.

 

So there is a relationship between mathematics, the physical world and the human mind (probably best explored, if not explained, by Roger Penrose’s 3 worlds philosophy). The relationship between the human mind and the physical world is epistemological - epitomised by the discipline called physics. And mathematics is the medium we use in pursuing that epistemology.

 

Eugene Wigner famously wrote an essay called The unreasonable effectiveness of mathematics in the natural sciences, and it still causes debate half a century after it was written. Wigner refers to the 2 miracles inherent in the Universe’s capacity to be self-comprehending: 

 

It is difficult to avoid the impression that a miracle confronts us here… or the two miracles of the existence of laws of nature and of the human mind’s capacity to divine them.

 

Or to quote Einstein: The most incomprehensible thing about the Universe is that it’s comprehensible.

 

The point is that Wigner’s ‘miracles’ or Einstein’s ‘incomprehensible thing’ are completely dependent on mathematics. But Wigner, in particular, brings together epistemology and ontology under one rubric. Ontology is ‘the nature of being’ (dictionary definition). At its deepest level, the ‘nature of being’ appears to be mathematical.

 

None other than Richard Feynman weighed into the discussion in his book, The Character of Physical Law, specifically in a chapter titled The Relation of Mathematics to Physics, where he expounds:

 

...what turns out to be true is that the more we investigate, the more laws we find, and the deeper we penetrate nature, the more this disease persists. Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics... Why? I have not the slightest idea. It is only my purpose to tell you about this fact.

 

The ‘disease’ he’s referring to and the ‘fact’ he can’t explain are best expressed in his own words:

 

The strange thing about physics is that for the fundamental laws we still need mathematics.

 

In conclusion, he says the following:

 

Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.

 

Many scientists and philosophers argue that we create mathematical models that give very reliable and accurate descriptions of reality. All these ‘models’ have epistemological limits, which means we use different mathematics for different scenarios. Nevertheless, there are natural constants and mathematical ‘laws’ that are requisite for complex life to exist. Terry Bollinger (in a Quora post) explained the significance of Planck’s constant in determining the size and stability of atoms, from which everything we can see and touch is made, including ourselves. The fine structure constant is another fundamental dimensionless number that determines the ‘nature of being’ upon which the reality we all know depends.
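
To give the flavour of that point (these are standard textbook formulas, not quotes from Bollinger’s post): Planck’s constant sets the size of atoms via the Bohr radius, and the fine structure constant is a dimensionless number built from the same ingredients.

```latex
% Bohr radius: Planck's constant (via hbar) sets the scale of atoms
a_0 = \frac{4\pi\varepsilon_0\hbar^2}{m_e e^2} \approx 5.29\times10^{-11}\,\mathrm{m}

% Fine structure constant: dimensionless, so independent of any choice of units
\alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c} \approx \frac{1}{137.036}
```

Because α is dimensionless, no choice of units can hide it: a universe with a different value would be a measurably different universe, atoms and chemistry included.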

 

So mathematics didn’t create the Universe, but, at a fundamental level, it determines the Universe we inhabit.



Footnotes: 


1) This was in answer to a question posted on Quora. I did receive an 'upvote' from Masroor Bukhari, who is a former Research Fellow and PhD in Particle Physics at Houston University.


2) Will Singourd, who asked the question, wrote the following:


Thank you for that outstanding answer. This is the most thorough & best answer I've seen on Quora. I've printed it out for reference.

I appreciate all the thought you put in it, plus your elucidating writing skills.

 

Tuesday 3 November 2020

An unprecedented US presidential election, in more ways than one

It’s the eve of the US presidential election, which both sides are arguing will determine the country’s (and by extension, the world’s) trajectory for the foreseeable future. More than that, both sides are contending that if they fail, it will be dire for the entire nation. Basically, they’re arguing that the very soul of the nation is dependent on the outcome. So I’m writing this before I know the result.


In some respects, what’s happening in the US mirrors what’s happening in many Western nations; only, in the US, it’s more extreme. This is a case where emotion overrules rationality, and some would say it’s a litmus test for rationality versus irrationality, with which I would concur.


If one looks at just one aspect of this race – one which, in fact, should determine the outcome because, like the presidential election itself, it’s unprecedented in recent history (literally the past 100 years) – I’m talking about the coronavirus or COVID-19. In its third wave, the US broke the daily record for new cases just recently (for the entire world, I believe). My point is that America’s COVID-19 record highlights the irrational side of American politics – in fact, it’s a direct consequence of said irrationality.


I’ve made the point before, because I’ve witnessed it so often, that in an us-them situation or ingroup-outgroup (to use psychology-speak), highly intelligent people often become irrational, and partisan politics is the perfect crucible for ingroup-outgroup mentality.


Anthony Fauci, the Director of the National Institute of Allergy and Infectious Diseases, in a recent interview compared what’s happened in America with the response of Melbourne, Australia (which is where I happen to live) to its second wave. To quote The Guardian:


America’s top infectious diseases expert, Dr Anthony Fauci, has praised Melbourne’s response to the coronavirus, saying he “wished” the US could adopt the same mentality.


The major difference is that the pandemic was not politicised like it was in the US, or at least, not nearly to the same degree. There have been some people on the fringe who protested against the lockdown but they gained little sympathy from the mainstream media, the general public or politicians (on either side). In Australia, medical expertise and medical advice was generally accepted with little dissent.


From my external viewpoint, based on what I’ve seen and read, Donald Trump’s ‘base’ includes fringe groups like QAnon, white supremacists and conspiracy theorists of many stripes, but especially conspiracies concerning the ‘deep state’, many of which Trump initiated himself during his incumbency.


I’m one of those who believes that the US was divided before Trump took office, which means the divisions started and were exacerbated during Obama’s terms, especially his second term, when a divided Senate effectively stonewalled any of his proposals. Trump is a symptom of the US’s division, not its cause. But Trump has exploited that division better than anyone before him and continues to do so. Whoever wins this election, the division will remain, and healing America will be a formidable and potentially impossible task for the next incumbent.




Postscript, 8 Nov 2020: The election result is now known, or at least has been given by reliable media outlets in the US, although Trump has declared he will challenge the results in some of the so-called battleground states in the courts. It’s part of Trump’s modus operandi, transferred from the corporate world, that anything and everything can be overcome if you have enough lawyers on your side. It should be pointed out that the result is not officially given until the ‘electoral college’ meets on December 14.


Apparently, there was the highest voter turnout since 1900, and that means for both sides of politics. It indicates how deeply and passionately divided the US is. I would just like to make a point that no one else (to my knowledge) has made. In the week of the election, the daily record for new cases of COVID-19 was broken twice. That so many Americans voted for Trump, in the light of his gross mismanagement of the pandemic, indicates the enormous proportion of the population who don’t take the coronavirus seriously, especially when one looks at the response in other countries.


Trump’s former political strategist and advisor, Steve Bannon, in a YouTube video, made the extraordinary rhetorical demand that the heads of Dr Anthony Fauci and FBI director Christopher Wray be put on pikes outside the White House. Not surprisingly, YouTube took the video down. Only in a democracy like America could someone make such an incendiary comment without being put in jail. But it highlights the perverse logic of Trump supporters: they hold the only credible scientist in the Administration responsible for the carnage caused by the pandemic. As I said, the election was, at least partly, a litmus test for rationality versus irrationality.


Tuesday 27 October 2020

My interpretation of QM, so not orthodox

This is another answer I wrote on Quora. I’ve forgotten the question, but the answer is self-explanatory. It doesn’t cover anything new (from me) but it’s more succinct than other posts I’ve written.


I’m not a physicist, but I’m well read in this area and quantum mechanics (QM) has a particular fascination for me.


Someone did a survey at a conference and, from memory, the most popular interpretation was still Bohr’s so-called Copenhagen interpretation, which many now call ‘the shut up and calculate school’. I think most physicists no longer believe that consciousness is required to ‘observe’ the outcome of a quantum experiment (like the famous double slit experiment).


Schrodinger’s famous cat thought experiment was intended to demonstrate how absurd that is. In his book, What is Life?, Schrodinger asks rhetorically where the quantum effect becomes ‘real’. Does it occur in the optic nerve going to the brain? Or does it occur before then, or when the person has their ‘Aha’ moment? Most people would now say it happens at the apparatus level, when the isotope decays, even before it affects the cat.


One of the most popular interpretations seems to be the many worlds interpretation (Philip Ball calls it the MWI hypothesis). In this scenario, the universe splits into 2 (or more) so that all possibilities occur in some universe, but you only experience one of them.


There are other interpretations, like David Bohm’s pilot wave and the ‘transaction’ interpretation, which incorporates the time-symmetrical nature of the wave function. But, for the sake of brevity, I’ll discuss Roger Penrose’s, Paul Davies’ and Freeman Dyson’s.


Roger Penrose describes QM in 3 phases: U, R and C (always designated in bold). U is the evolution of the wave function (in Schrodinger’s equation), R is the observation or ‘decoherence’ when the wave function ‘collapses’ (or simply disappears) and C is the classical physics phase. Penrose thinks gravity plays a role in decoherence but I won’t discuss that here. 


Paul Davies argues for John Wheeler’s famous “…participatory universe” in which observers—minds, if you like—are inextricably tied to the concretization of the physical universe emerging from quantum fuzziness over cosmological durations.


This comes from Wheeler’s famous thought experiment that light from a distant quasar could be ‘lensed’ by an intervening massive object, like a galaxy, but we don’t know what path the light takes until it’s observed. This is an extension of his ‘delayed choice’ thought experiment relating to the double slit experiment (later confirmed in a laboratory setting).


Davies discusses this very cogently in an on-line paper and references another paper by Freeman Dyson, where he says, “Dyson concludes that a quantum description cannot be applied to past events.”


Personally, I agree with Dyson that QM describes the future and classical physics describes the past. In other words, I argue that the wave function is in the future, which is why it is never observed. This is consistent with Penrose’s 3 phases, which logically occur in a temporal sequence.


If one takes this approach to Wheeler’s photon from his quasar, it exists in the future of whatever it interacts with, including an observer’s instrument. Let’s assume, hypothetically, that the instrument is the observer’s eye. Because the wave function is time symmetrical, the ‘delayed choice’ is really a backwards-in-time pathway to the photon’s source, so the observer sees it instantaneously in the past. In effect, this is the so-called transactional interpretation.


Richard Feynman’s path integral method of QED takes the sum of every path possible (most of which cancel out) to give a probability of where a particle (including a photon) will be observed. If all these paths exist in the future, that’s not a problem; only one of them will exist in the past, observed in retrospect. This is the opposite of the MW interpretation which claims all paths exist simultaneously.
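
A toy illustration of that cancellation (my sketch of the idea, not Feynman’s actual QED calculation): sum the phase factor e^(iS/ħ) over randomly wiggled paths between two fixed endpoints. Paths close to the classical straight line add coherently, while wilder paths acquire rapidly varying phases and wash each other out.

```python
import numpy as np

rng = np.random.default_rng(0)
hbar, m, T, steps, n_paths = 1.0, 1.0, 1.0, 20, 20_000
dt = T / steps
t = np.linspace(0.0, T, steps + 1)
classical = t.copy()                    # straight-line path from x=0 to x=1

def action(x):
    """Free-particle action: integral of (m/2) v^2 dt along the path."""
    v = np.diff(x) / dt
    return np.sum(0.5 * m * v**2 * dt)

for sigma in (0.05, 0.5):               # gentle vs wild deviations
    amp = 0j
    for _ in range(n_paths):
        x = classical.copy()
        x[1:-1] += sigma * rng.standard_normal(steps - 1)  # endpoints fixed
        amp += np.exp(1j * action(x) / hbar)
    print(f"sigma={sigma}: |average phase factor| = {abs(amp) / n_paths:.3f}")

# Gentle deviations stay near the classical action and add coherently
# (result near 1); wild ones have rapidly varying phases and largely
# cancel (result near 0) - the 'most of which cancel out' above.
```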


Freeman Dyson comes to the following conclusion: 


“We do not need a human observer to make quantum mechanics work. All we need is a point of reference, to separate past from future, to separate what has happened from what may happen, to separate facts from probabilities.”


The curious thing about that statement is that the ‘point of reference’ is consciousness, because (as Schrodinger pointed out in What is Life?) consciousness is the only thing we know that exists in the continuous present.


This doesn’t make the observer the cause, because the cause is still at the photon’s source. It’s just that consciousness happens to be present in the ‘now’ between the QM future and the classical physics past that Dyson references.


Here is the link to both Davies’ and Dyson’s discussions.


Monday 5 October 2020

Does infinity and the unknowable go hand in glove?

A recurring theme on my blog has been the limits of what we can know. So Marcus du Sautoy’s book, What We Cannot Know, fits the bill. I acquired it after I saw him give a talk at the Royal Institution on the subject, promoting the book, which is entertaining and enlightening in and of itself. I’ve previously read his The Music of the Primes and Finding Moonshine, both of which are very erudite and stimulating. He’s made a few TV programmes as well.


Previously, I’ve written blog posts based on books by Bryan Magee (Ultimate Questions) and Noson S. Yanofsky (The Outer Limits of Reason; What Science, Mathematics, and Logic CANNOT Tell Us). Yanofsky is a professor of computer science, while Magee was a professor of philosophy (later a broadcaster and Member of the British Parliament). I have to admit that Yanofsky’s book appealed to me more, because it’s more science based. Magee’s book was very erudite and provocative; my one criticism being that he seemed almost dismissive of the role that mathematics plays in the limits of what we can know. He specifically states that “...rationality requires us to renounce the pursuit of proof in favour of the pursuit of progress.” (My emphasis). However, pursuit of proof is exactly what mathematicians do, and, what’s more, they do it consistently and successfully, even though there is a famous proof that says there are limits to what we can prove (Godel’s Incompleteness Theorem).


Marcus du Sautoy is a mathematician, and a very good communicator as well, as can be evidenced on some of his YouTube videos, including some with Numberphile. But his book is not limited to mathematics. In fact, he discusses pretty much all the fields of our knowledge which appear to incorporate limits, which he metaphorically calls ‘Edges’. These include chaos theory, quantum mechanics, consciousness, the Universe and, of course, mathematics itself. One is tempted to compare his book with Yanofsky’s, as they are both very erudite and educational, whilst taking different approaches. But I won’t, except to say they are both worth reading.


One aspect of du Sautoy’s book, which is unusual, yet instructive, is that he consulted other experts in their respective fields, including John Polkinghorne, John Barrow, Christof Koch and Robert May. May, in particular, did pioneering work in chaos theory on animal populations in the 1970s. An ex-pat Australian, he’s now a member of the House of Lords, which is where du Sautoy had lunch with him. All these interlocutors were very stimulating and worthy additional contributors to their respective topics.


Very early on (p.10, in fact) du Sautoy mentions a famous misprediction by the French philosopher, Auguste Comte, in 1835, about the stars: “We shall never be able to study, by any method, their chemical composition or their mineralogical structure.” Yet, less than a century later, it was being done by spectroscopy as a virtually standard practice, which in turn led to the knowledge that the Universe was expanding consistently in all directions. Throughout the book, du Sautoy reminds us of Comte’s prediction, when it appears that there are some things we will never know. He also quotes Donald Rumsfeld on the very next page:


There are known knowns; these are things that we know that we know. We also know there are known unknowns, that is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don’t know we don’t know.


At the time, people tended to treat Rumsfeld’s statement as a bit of a joke and a piece of political legerdemain, given its context: weapons of mass destruction. However, in the field of science, it’s perfectly correct: there are hierarchies of knowledge, and when one looks back, historically, there have always been unknown unknowns, and, therefore, it’s a safe bet they will exist in the future as well. In other words, our future discoveries are dependent on secrets the Universe has yet to reveal to us mere mortals.


Towards the end of his book, du Sautoy gets more philosophical, which is not surprising, and he makes a point that I’ve not seen or heard before. He argues that some things about the Universe, like time, and the possibility of a multiverse, might remain unknown without physically getting outside the Universe, which is impossible. This, of course, raises the issue of God. Augustine, among others, has argued that God exists outside the Universe, and therefore, outside time. Paul Davies made the same point in his book, The Mind of God, with specific reference to Augustine.

Du Sautoy, who is a self-declared atheist, contends that God represents what we cannot know, which is consistent with the idea that some things we cannot know can only be known from outside the Universe. But du Sautoy makes the point that there is something that exists outside the Universe that we know, and that is mathematics. He, therefore, makes the tongue-in-cheek suggestion that maybe we can replace God with mathematics. Curiously, John Barrow made the same mischievous suggestion in one of his books – probably Pi in the Sky. According to du Sautoy, Barrow is a Christian, which surprised me as much as it did du Sautoy, given that you would never know it from his writings. While on the subject of God, John Polkinghorne is a well-known theologian as well as a physicist. Again, according to du Sautoy, Polkinghorne contends that God could intervene in the Universe via chaos theory. I once made the same point, although I also said I didn’t believe in an interventionist God, as that leads to people claiming they know God’s will, and that leads to all sorts of acts done in God’s name, and we all know how that usually ends. The problem with believing in an interventionist God is that it axiomatically leads to people believing they can influence said God.

Getting back to the subject at hand, du Sautoy says:

If there was no universe, no matter, no space, nothing. I think there would still be mathematics. Mathematics does not require the physical world to exist.

Following on from du Sautoy’s book, I started re-reading Eli Maor’s book, e: the story of a number, which incidentally covers the history of calculus going back to the ancient Greeks and Archimedes, in particular. The Greeks had a problem in that they couldn’t acknowledge infinity – it was taboo. Maor believes that Archimedes must have known the concept of infinity because he appreciated how an iterative process could converge to a value, but he wasn’t allowed to say so. Even in the modern day, there are mathematicians who wish to be rid of the concept of infinity, yet it’s intrinsic to mathematics everywhere you look.

This is relevant because the very nature of infinity tells us that there will always be truths beyond our ken. You can use a Turing machine (a computer) to check the zeros of Riemann’s zeta function one by one and, if the hypothesis is true, it will never stop, because it will never find a counterexample. Now, du Sautoy makes an interesting observation about this (which he expounds upon in this video, if you want it firsthand): that it’s possible that Riemann’s hypothesis is unknowable. In fact, there’s a small collection of conjectures associated with prime numbers that fall into this category (the Goldbach conjecture and the twin-prime conjecture being another 2). But here’s the thing: if one can prove that the Riemann hypothesis is unknowable, then it must be true. This is because, if it were untrue, there would have to be at least one zero that didn’t fit the hypothesis, which would make it ‘knowable’.
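
A minimal sketch of that endless checking in Python, using the mpmath library (my illustration, not du Sautoy’s; note that mpmath’s zetazero locates zeros along the critical line, so this shows the enumeration rather than a genuine counterexample hunt):

```python
from mpmath import mp, zetazero

mp.dps = 30  # working precision (decimal places)

# Enumerate the non-trivial zeros of the Riemann zeta function one by
# one. The hypothesis says every zero has real part exactly 1/2, so a
# machine verifying this forever never halts if the hypothesis is true:
# it simply never finds a counterexample.
for n in range(1, 6):
    rho = zetazero(n)                    # nth zero, e.g. 0.5 + 14.1347...i
    print(f"zero {n}: {rho}")
    assert abs(rho.real - 0.5) < 1e-25   # holds by construction here
```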

The unknowable possibility is a direct consequence of Godel’s Incompleteness Theorem. To quote du Sautoy:

Godel proved mathematically that within any axiomatic system framework for number theory that was free of contradictions there were true statements about numbers that could not be proved within that framework – a mathematical proof that mathematics has its limitations. (My emphasis).

I highlighted that passage because I left it out when proposing a definition to someone on Quora, and as a consequence, my interlocutor tried to argue that my definition was incorrect. Basically, I was saying that within any axiomatic system of mathematics there are ‘truths’ that can’t be proven. That’s Godel’s famous theorem in essence and in practice. However, one can find proofs, in principle, by using new axioms outside that particular system. And we see this in practice. The axiom that geometry can be non-Euclidean created new proofs, and the introduction of √-1 created new mathematics, called complex algebra, that gave solutions to previously unsolvable problems.

Towards the end of his book, du Sautoy references a little-known point made by the renowned logician Alonzo Church, called the ‘paradox of unknowability’, which proves that unless you know it all, there will always be truths that are by their very nature unknowable.
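
The argument is nowadays usually called the Church–Fitch knowability paradox. A compressed sketch (my rendering, not du Sautoy’s wording), where Kp means ‘p is known by someone at some time’ and ◇ means ‘possibly’:

```latex
% 1. Knowability principle (assume every truth is knowable):
p \rightarrow \Diamond Kp
% 2. Suppose some truth q is unknown:
q \wedge \neg Kq
% 3. Apply (1) to the truth in (2):
\Diamond K(q \wedge \neg Kq)
% 4. But knowledge distributes over conjunction and is factive, so:
K(q \wedge \neg Kq) \rightarrow Kq \wedge \neg Kq \quad \text{(a contradiction)}
% 5. Hence:
\neg \Diamond K(q \wedge \neg Kq)
```

So if even one truth happens to be unknown, the compound fact that ‘q is true and unknown’ is not merely unknown but unknowable: unless you know everything, unknowable truths exist.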

In effect, Church has extended Godel’s theorem to the physical world. Du Sautoy gives the example of all the dice that are lost in his house. There is either an even number of them or an odd number. One of these is true, but it is unknowable unless he can find them all. A more universal example is whether the Universe is infinite or finite. One of these is true but it’s currently unknowable and may be for all time. Du Sautoy makes the point that if we learn it’s finite then it becomes knowable, but if it’s infinite it may remain forever unknowable. This is similar to the Riemann hypothesis being knowable or unknowable. If it’s false then the Turing machine stops, which makes it finite, but, if it’s true, it is both infinite and unknowable, based on that thought experiment. It was only at this point in my essay that I came up with its title. I’ve expressed it as a question, but it’s really a conclusion.

If we go back to Archimedes and his struggle with the infinite, we can see that probably for most of humankind’s history, the infinite was considered outside the mortal realm. In other words, it was the realm of God. In fact, du Sautoy quotes Descartes: God is the only thing I positively conceive as infinite.

I’ve long contended that mathematics is the only ‘realm’ (for want of a better word) where infinity is completely at home. In Maor’s book, at one point, he discusses the difference between applied mathematics and pure mathematics, and it occurred to me that this distinction could explain the perennial argument about whether mathematics is invented or discovered. But the plethora of infinities, which is also intrinsic to unknowable ‘truths’, as outlined above, implies that there will always be mathematical ‘things’ waiting to be discovered. What’s more, the ‘marriage’ between theoretical physics and pure mathematics has never been more productive.



Addendum 1: After writing this, I re-watched an interview with Norman Wildberger on the subject of infinity and Real numbers. Wildberger is an Australian mathematician with ‘unorthodox’ views on the foundations of mathematics, as he explains in the video.

Wildberger is not a crank: he’s an academic mathematician who has unusual philosophical ideas about mathematics. He makes the valid point that computers can only work with finite numbers (meaning numbers with a finite decimal expansion), and that is the criterion he uses to determine whether something mathematical is ‘real’. He says he doesn’t believe in Real numbers, as they are defined, because they can’t be computed to their full (infinite) precision.

In effect, he argues they have no place in the physical world, but I disagree. In chaos theory, the reason chaotic phenomena are unpredictable is that you would have to calculate the initial conditions to an infinite number of decimal places, which is impossible. This is both mathematical and physical evidence that some things are ‘unknowable’.
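
A concrete illustration (the standard logistic map; my example, not Wildberger’s): two starting values agreeing to 12 decimal places produce completely unrelated trajectories within a few dozen iterations.

```python
# Logistic map at r=4, a standard toy model of chaos. Two initial
# conditions differing only in the 12th decimal place diverge
# completely within a few dozen iterations.
r = 4.0
x, y = 0.3, 0.3 + 1e-12

for i in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if i % 10 == 0:
        print(f"step {i}: x={x:.6f}  y={y:.6f}  |diff|={abs(x - y):.1e}")

# The gap grows roughly exponentially, so each extra step of reliable
# prediction demands more decimal places in the initial condition -
# perfect prediction would need infinitely many.
```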


Addendum 2: Sabine Hossenfelder argues that infinity is only 'real' in the mathematical world. She contends that in physics, it's not 'real', because it's not 'measurable'. She gives a good exposition in this YouTube video.


Saturday 12 September 2020

Dame Diana Rigg (20 Jul 1938 – 10 Sep 2020)

It’s very rare for me to publish 2 posts in 2 days, and possibly unprecedented to publish 3 in less than a week. However, I couldn’t let this pass, for a number of reasons. Arguably, Dame Diana Rigg had little to do with philosophy but quite a lot to do with culture and, of course, storytelling, which is a topic close to my heart.


In one of the many tributes that came out, there is an embedded video (c/- BBC Archives, 1997), where she talks about acting in a way that most of us don’t perceive it. She says, in effect, that an audience comes to a theatre (or a cinema) because they want to ‘believe’, and an actor has to give them (or honour) that ‘belief’. (I use the word, honour, she didn’t.)


This is not dissimilar to the ‘suspension of disbelief’ that writers attempt to draw from their readers. I’ve watched quite a few of Diana Rigg’s interviews, given over the decades, and I’m always struck by her obvious intelligence, not to mention her wit and goodwill.

 

I confess to being somewhat smitten by her character, Emma Peel, as a teenager. It was from watching her that I learned one falls for the character and not the actor playing her. Seeing her in another role, I was at first surprised, then logically reconciled, that she could readily play someone else less appealing.

 

Emma Peel was a role before its time in which the female could have the same hero status as her male partner. She explained, in one of the interviews I saw, that the role had originally been written for a man and they didn’t have time to rewrite it. So it occurred by accident. Originally, it was Honor Blackman, as Cathy Gale (who also passed away this year). But it was Diana Rigg as Emma Peel who seemed to be the perfect foil for Steed (Patrick Macnee). No one else filled those shoes with quite the same charm.

 

It was a quirky show, as only the British seem to be able to pull off: Steed in his vintage Bentley and Mrs Peel in her Lotus Elan, which I desired almost as much as her character.

 

The show time-travelled without a tardis, combining elements of fantasy and sci-fi that influenced my own writing. I suspect there is a bit of Emma Peel in Elvene, though I’ve never really analysed it.




Friday 11 September 2020

Does history progress? If so, to what?

This is another Question of the Month from Philosophy Now. The last two I submitted weren’t published, but I really don’t mind as the answers they did publish were generally better than mine. Normally, with a question like this, you know what you want to say before you start. In other words, you know what your conclusion is. But, in this case, I had no idea.

 

At first, I wasn’t going to answer, because I thought the question was a bit obtuse. However, I couldn’t help myself. I started by analysing the question and then just followed the logic.


 

 

I found a dissonance to this question, because ‘history’, by definition, is about the past and ‘progress’ implies projection into the future. In fact, a dictionary definition of history tells us it’s “the study of past events, particularly in human affairs”. And a dictionary definition of progress is “forward or onward movement to a destination”. If one puts the two together, there is an implication that history has a ‘destination’, which is also implicit in the question.

 

I’ve never studied history per se, but if one studies the evolution of ideas in any field, be it science, philosophy, arts, literature or music, one can’t fail to confront the history of human ideas, in all their scope and diversity, and all the richness that has arisen out of that, imbued in culture as well as the material and social consequences of civilisations.

 

There are two questions, one dependent on the other, so we need to address the first one first. If one uses metrics like health, wealth, living conditions and peace, then there appears to be progress over the long term. But if one looks closer, this progress is uneven, even unequal, and one wonders if the future will be even more unequal than the present, as technologies become more available and affordable to some societies than to others.

 

Progress implies change, and the 20th Century saw more change than the entire previous history of humankind. I expect the 21st Century will see more change still, which, like the 20th Century, will be largely unpredictable. This leads to the second question, which I’ll rephrase to make it more germane to my discussion: what is the ‘destination’ and do we have control over it?

 

Humans, both as individuals and collectives, like to believe that they control their destiny. I would argue that, collectively, we are currently at a crossroads, which is evidenced by the political polarisation we see everywhere in the Western world.

 

But this crossroads has social and material consequences for the future. It’s epitomised by the debate over climate change, which is a litmus test for whether we control our destiny or not. It not only requires political will, but the consensus of a global community, and not just the scientific community. If we do nothing, it will paradoxically have a bigger impact than taking action. But there is hope: the emerging generation appears more predisposed to act than the current one.


Monday 7 September 2020

Secrets to good writing

I wrote this because it came up on Quora as a question: What makes good writing?

I should say up front that there are a lot of much better writers than me, most of whom write for television in various countries – Europe, the UK, America, Australia and New Zealand are the ones I’m most familiar with.

 

I should also point out that you can be ‘good’ at something without being ‘known’, so to speak. Not all ‘good’ cricketers play for Australia and not all ‘good’ footballers play in the national league. I have a friend who has won awards in theatre, yet she’s never made any money out of it; it’s strictly amateur theatre. She was even invited (as part of a group) to partake in a ‘theatre festival’ in Monaco a couple of years ago. Luckily, the group qualified for a government grant so they could participate.

 

Within this context, I call myself a good writer, based partly on feedback and partly on comparing myself to other writers I’ve read. I’ve written about this before, but I’ll keep it simple; almost dot points.

 

Firstly, good writing always tells the story from some character’s point of view (POV) and it doesn’t have to be the same character throughout the story. In fact, you can change POV even within the same scene or within dialogue, but it’s less confusing if you stay in one.

 

You take the reader inside a character’s mind, so they subconsciously become an actor. It’s why the reader is constantly putting themselves in the character’s situation and reacting accordingly.

 

Which brings me to the second point about identifying good writing. It can make the reader cry or laugh or feel angry or scared – in fact, feel any human emotion.

 

Thirdly, good writing makes the reader want to keep returning to the story. There are 2 ways you can do this. The most obvious and easiest way is to create suspense – put someone in jeopardy – which is why crime fiction is so popular.

 

The second way is to make the reader invest in the character(s)’ destiny. They like the characters so much that they keep returning to their journey. This is harder to do, but ultimately more satisfying. Sometimes, you can incorporate both into the same story.


A story should flow, and there is one way that virtually guarantees this. When I attended a screenwriting course (some decades ago), I was told that a scene should either provide information about the story or information about a character or move the story forward. In practice, I found that if I did the last one, the other 2 took care of themselves.


Another ‘trick’ from screenwriting is to write in ‘real time’ with minimal description, which effectively allows the story to unfold like a movie inside the reader’s head.

 

A story is like a journey, and a journey needs a map. A map is a sequence of plot points that are filled in with scenes that become the story.


None of the above are contentious, but my next point is. I contend that good writing is transparent or invisible. By this I mean that readers, by and large, don’t notice good writing, they only notice bad writing. If you watch a movie, the writing is completely invisible. No one consciously comments on good screenwriting; they always comment on the good acting or the good filmmaking, neither of which would exist without a good script.

 

How is this analogous to prose writing? The story takes place in the reader’s imagination, not on the page. Therefore, the writing should be easy-to-read and it should flow, following a subliminal rhythm; and most importantly, the reader should never be thrown out of the story. Writing that says, ‘look at me, see how clever I am’, is the antithesis of this. I concede, not everyone agrees.

 

I’ve said before that if we didn’t dream, stories wouldn’t work. Dream language is the language of stories, and they can both affect us the same way. I remember when I was a kid, movies could affect me just as dramatically as dreams. When reading a story, we inhabit its world in our imagination, conjuring up imagery without conscious effort.

 

 

Example:

 

The world got closer until it eventually took up almost all their vision. Their craft seemed to level out as if it was skimming the surface, but at an ultra-high altitude. As they got lower the dark overhead was replaced by a cobalt-blue and then they passed through clouds and they could see they were travelling across an ocean with waves tipped by froth, and then eventually they approached a shoreline and they seemed to slow down as a long beach stretched like a ribbon from horizon to horizon. Beyond the beach there were hills and mountains, which they accelerated over until they came to flat grassy plains, and in the distance they saw some dots on the ground, which became a village of people and horses and huts that poked into the air like upside down cones.


Wednesday 26 August 2020

Did the Universe see us coming?

 I recently read The Grand Design by Stephen Hawking (2010), co-authored by Leonard Mlodinow, who gets ‘second billing’ (with much smaller font) on the cover, so one is unsure what his contribution was. Having said that, other titles listed by Mlodinow (Euclid’s Window and Feynman’s Rainbow) make me want to search him out. But the prose style does appear to be quintessential Hawking, with liberal lashings of one-liners that we’ve come to know him for. Also, I think one can confidently assume that everything in the book has Hawking’s imprimatur.

 

I found this book so thought-provoking that, on finishing it, I went back to the beginning, so I could re-read his earlier chapters in the context of his later ones. On the very first page he says, rather provocatively, ‘philosophy is dead’. He then spends the rest of the book giving his account of ‘life, the universe and everything’ (which, in one of his early quips, ‘is not 42’). He ends the first chapter (introduction, really) with 3 questions:

 

1) Why is there something rather than nothing?

2) Why do we exist?

3) Why this particular set of laws and not some other?

It’s hard to get more philosophical than this.

 

I haven’t read everything he’s written, but I’m familiar with his ideas and achievements, as well as some of his philosophy and personal prejudices. ‘Prejudice’ is a word that is usually used pejoratively, but I use it in the same sense I use it on myself, regarding my ‘pet’ theories or beliefs. For example, one of my prejudices (contrary to accepted philosophical wisdom) is that AI will not achieve consciousness.

 

Nevertheless, Hawking expresses some ideas that I would not have expected of him. His chapter titled What is Reality? is where he first challenges the accepted wisdom of the general populace. He argues, rather convincingly, that there are only ‘models of reality’, including the ones we all create inside our heads. He doesn’t say there is no objective reality, but he does say that, if we have 2 or more ‘models of reality’ that agree with the evidence, then one cannot say that one is ‘more true’ than another.

 

For example, he says, ‘although it is not uncommon for people to say that Copernicus proved Ptolemy wrong, that is not true’. He elaborates: ‘one can use either picture as a model of the universe, for our observations of the heavens can be explained by assuming either the earth or the sun is at rest’.

 

However, as I’ve pointed out in other posts, either the Sun goes around the Earth or the Earth goes around the Sun. It has to be one or the other, so one of those models is wrong.

 

He argues that we only ‘believe’ there is an ‘objective reality’ because it’s the easiest model to live with. For example, we don’t know whether or not an object disappears when we go into another room; nevertheless, he cites Hume, ‘who wrote that although we have no rational grounds for believing in an objective reality, we also have no choice but to act as if it’s true’.

 

I’ve written about this before. It’s a well-known conundrum (in philosophy) that you don’t know if you’re a ‘brain-in-a-vat’. But I don’t know of a single philosopher who thinks that they are. The proof lies in dreams. We all have dreams that we can’t distinguish from reality until we wake up. Hawking also referenced dreams as an example of a ‘reality’ that doesn’t exist objectively. Dreams are completely solipsistic, to the extent that all our senses will play along, including taste.

 

Considering Hawking’s confessed aversion to philosophy, this is all very Kantian. We can never know the thing-in-itself. Kant even argued that time and space are a priori constructs of the mind. And if we return to the ‘model of reality’ that exists in your mind: if it didn’t accurately reflect the objective reality outside your mind, the consequences would be fatal. To me, this is evidence that there is an objective reality independent of one’s mind - it can kill you. However, if you die in a dream, you just wake up.

 

Of course, this all leads to subatomic physics, where the only models of reality are mathematical. But even in this realm, we rely on predictions made by these models to determine whether they reflect an objective reality that we can’t see. To return to Kant, the thing-in-itself is dependent on the scale at which we ‘observe’ it. So, at the subatomic scale, our observations may be tracks of particles captured in images, not what we see with the naked eye. The same can be said at the cosmic scale, where observations depend on instruments that may not even be stationed on Earth.

 

To get a different perspective, I recently read an article on ‘reality’ written by Roger Penrose (New Scientist, 16 May 2020) which was updated from one he wrote in 2006. Penrose has no problem with an ‘objective independent reality’, and he goes to some lengths (with examples) to show the extraordinary agreement between our mathematical models and physical reality. 

 

Our mathematical models of physical reality are far from complete, but they provide us with schemes that model reality with great precision – a precision enormously exceeding that of any description free of mathematics.

 

(It should be pointed out that Penrose and Hawking jointly won the 1988 Wolf Prize in physics for their work in cosmology.)

 

But Penrose gets to the nub of the issue when he says, ‘...the “reality” that quantum theory seems to be telling us to believe in is so far removed from what we are used to that many quantum theorists would tell us to abandon the very notion of reality’. But then he says in the spirit of an internal dialogue, ‘Where does quantum non-reality leave off and the physical reality that we actually experience begin to take over? Present day quantum theory has no satisfactory answer to this question’. (I try to answer this below.)

 

Hawking spends an entire chapter on this subject, called Alternative Histories. For me, this was the most revealing chapter in his book. He discusses at length Richard Feynman’s ‘sum over histories’ methodology, called QED or quantum electrodynamics. I say methodology instead of theory, because it’s a mathematical method that has proved extraordinarily accurate, in concordance with Penrose’s claim above. Feynman compared it to measuring the distance between New York and Los Angeles to within the width of a human hair.
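To give a feel for what that comparison implies, here is a rough check (the figures are my own assumptions, not Feynman’s: a hair width of about 100 μm and a New York to Los Angeles distance of about 4,000 km):

\[
\frac{10^{-4}\,\text{m}}{4 \times 10^{6}\,\text{m}} \approx 2.5 \times 10^{-11}
\]

That is, agreement between theory and measurement to a few parts in a hundred billion, which is the order of precision QED achieves for quantities like the magnetic moment of the electron.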

 

Basically, as Hawking expounds, in Feynman’s theory, a quantum particle can take every path imaginable (in the famous double-slit experiment, say) and then he adds them all together, but because they’re waves, most of them cancel each other out. This leads to the principle of superposition, where a particle can be in 2 places or 2 states at once. However, as soon as it’s ‘observed’ or ‘measured’, it becomes one particle in one state. In fact, according to standard quantum theory, it’s possible for a single photon to be split into 2 paths and be ‘observed’ to interfere with itself, as described in this video. (I’ve edited this after Wes Hansen from Quora challenged it.) I’ve added a couple of Wes’s comments in an addendum below. Personally, I believe ‘superposition’ is part of the QM description of the future, as alluded to by Freeman Dyson (see below). So I don’t think superposition really occurs.
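A minimal mathematical sketch of what ‘adding up the paths’ means (my notation, not Hawking’s): each path x(t) from a to b contributes a complex amplitude whose phase is the classical action S, and the total amplitude is the sum

\[
A(a \to b) = \sum_{\text{paths } x(t)} e^{\,iS[x(t)]/\hbar}
\]

With just the 2 paths of the double-slit experiment, the detection probability is

\[
P = |A_1 + A_2|^2 = |A_1|^2 + |A_2|^2 + 2\,\text{Re}(A_1^* A_2)
\]

and the cross term is the interference pattern. Paths whose actions differ wildly have phases that cancel, which is why, at everyday scales, only the classical path effectively survives.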

 

Hawking contends that the ‘alternative histories’ inherent in Feynman’s mathematical method affect not only the future but also the past. What he is implying is that when an observation is made, it determines the past as well as the future. He talks about a ‘top-down’ history in lieu of a ‘bottom-up’ history, which is the traditional way of looking at things. In other words, cosmological history is one of many ‘alternative histories’ (his terminology) that evolve from QM.

 

This leads to a radically different view of cosmology, and the relation between cause and effect. The histories that contribute to the Feynman sum don’t have an independent existence, but depend on what is being measured. We create history by our observation, rather than history creating us (my emphasis).

 

As it happens, John Wheeler made exactly the same contention, and proposed that it could be observed on a cosmic scale when light from a distant quasar is ‘gravitationally lensed’ by an intervening galaxy or black hole (refer to Davies’ paper, linked below). Hawking makes specific reference to Wheeler’s conjecture at the end of his chapter. It should be pointed out that Wheeler was a mentor to Feynman, and Feynman even referenced Wheeler’s influence in his Nobel Prize acceptance speech.

 

A contemporary champion of Wheeler’s ideas is Paul Davies, and he even dedicates his book, The Goldilocks Enigma, to Wheeler.

 

Davies wrote a paper, available on-line, where he describes Wheeler’s idea as the “…participatory universe, in which observers—minds, if you like—are inextricably tied to the concretization of the physical universe emerging from quantum fuzziness over cosmological durations”.

 

In the same paper, Davies references and attaches an essay by Freeman Dyson, where he says, “Dyson concludes that a quantum description cannot be applied to past events.”

 

And this leads me back to Penrose’s question: how do we get the ‘reality’ we are familiar with from the mathematically modelled quantum world that strains our credulity? If Dyson is correct, and the past can only be described by classical physics, then QM only describes the future. So how does one reconcile this with Hawking’s alternative histories?

 

I’ve argued elsewhere that the one path, out of the infinitely many paths of Feynman’s theory, is only revealed when an ‘observation’ is made, which is consistent with Hawking’s point, quoted above. But it’s worth quoting Dyson as well, because Dyson argues that the observer is not the trigger.

 

... the “role of the observer” in quantum mechanics is solely to make the distinction between past and future...

 

What really happens is that the quantum-mechanical description of an event ceases to be meaningful as the observer changes the point of reference from before the event to after it. We do not need a human observer to make quantum mechanics work. All we need is a point of reference, to separate past from future, to separate what has happened from what may happen, to separate facts from probabilities.

 

But, as I’ve pointed out in other posts, consciousness exists in a constant present. The time for ‘us’ is always ‘now’, so the ‘point of reference’ that is key to Dyson’s argument correlates with the ‘now’ of a conscious observer.

 

We know that ‘decoherence’ is not necessarily dependent on an observer, but dependent on the wave function interacting with ‘classical physics’ objects, like a laboratory apparatus or any ‘macro’ object. Dyson’s distinction between past and future makes sense in this context. Having said that, the interaction could still determine the ‘history’ of a quantum event (like a photon), even if it had traversed the entire Universe, as the cosmic background radiation has.

 

In Hawking’s subsequent chapters, including one titled, Choosing Our Universe, he invokes the anthropic principle. In fact, there are 2 anthropic principles called the ‘weak’ and the ‘strong’. As Hawking points out, the weak anthropic principle is trivial, because, as I’ve pointed out, it’s a tautology: Only universes that produce observers can be observed.

 

On the other hand, the strong anthropic principle (which Hawking invokes) effectively says, Only universes that produce observers can ‘exist’. One can see that this is consistent with Davies’ ‘participatory universe’.

 

Hawking doesn’t say anything about a ‘participatory universe’, but goes into some detail about the fine-tuning of our universe for life, in particular the ‘miracle’ of how carbon can exist (predicted by Fred Hoyle). There are many such ‘flukes’ in our universe, including the cosmological constant, which Hawking also discusses at some length.

 

Hawking also explains how an entire universe could come into being out of ‘nothing’, because the ‘negative’ gravitational energy cancels all the ‘positive’ matter and radiation energy that we observe (I assume this also includes dark energy and dark matter). Dark energy is really the cosmological constant. Its effect increases with the age of the Universe because, as the Universe expands, gravitational attraction over cosmological distances decreases while ‘dark energy’ (which repels) doesn’t. Dark matter explains the stable rotation of galaxies, without which they’d fly apart.
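The standard textbook way to see why dark energy eventually dominates (this is my gloss, not Hawking’s) is to compare how the two energy densities scale with the expansion. If a is the scale factor of the Universe, matter dilutes with volume while the cosmological constant doesn’t:

\[
\rho_{\text{matter}} \propto a^{-3}, \qquad \rho_{\Lambda} = \text{constant}
\]

So the ratio \(\rho_{\Lambda}/\rho_{\text{matter}} \propto a^{3}\) grows without limit as the Universe expands, and the repulsive term must win in the long run.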

 

Hawking also describes the Hartle-Hawking model of cosmology (without mentioning James Hartle) whereby he argues that in a QM only universe (at its birth), time was actually a 4th spatial dimension. He calls this the ‘no-boundary’ universe, because, as John Barrow once quipped, ‘Once upon a time, there was no time’. I admit that this ‘model’ appeals to me, because in quantum cosmology, time disappears mathematically.
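As I understand it, the mathematical move behind treating time as a fourth spatial dimension is the so-called Wick rotation (standard notation, not taken from the book): substituting imaginary time, τ = it, flips the sign of the time term in the spacetime interval, so that time becomes formally indistinguishable from a spatial direction:

\[
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2 \;\;\xrightarrow{\;t \,=\, -i\tau\;}\;\; ds^2 = c^2\,d\tau^2 + dx^2 + dy^2 + dz^2
\]

With no time-like direction left, there is no boundary ‘at the beginning’, which is where the ‘no-boundary’ label comes from.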

 

Hawking’s philosophical view is the orthodox one that, if there is a multiverse, then the anthropic principle (weak or strong) ensures that there must be a universe where we can exist. I think there are very good arguments for the multiverse (the cosmological variety, not the QM ‘many worlds’ variety), but I have a prejudice against an infinity of them, because then there would be an infinity of me.

 

Hawking is a well known atheist, so, not surprisingly, he provides good arguments against the God hypothesis. There could be a demiurge, but if there is, there is no reason to believe it coincides with any of the Gods of mythology. Every God I know of has cultural ties and that includes the Abrahamic God.

 

For someone who claims that ‘philosophy is dead’, Hawking’s book is surprisingly philosophical and thought-provoking, as all good philosophy should be. In his conclusions, he argues strongly for ‘M theory’, believing it will provide the ‘theory of everything’ that physicists strive for. M theory, as Hawking acknowledges, requires ‘supersymmetry’, and from what I know and have read, there is little or no evidence of it thus far. But I agree with Socrates that every mystery resolved only uncovers more mysteries, which history, thus far, has confirmed over and over.

 

My views have evolved and, along with the ‘strong anthropic principle’, I’m becoming increasingly attracted to Wheeler’s ‘participatory universe’, because the more of its secrets we learn, the more it appears as if ‘the Universe saw us coming’, to paraphrase Freeman Dyson.



Addendum (23Apr2021): Wes Hansen, whom I met on Quora, and who has strong views on this topic, told me outright that he's not a fan of Hawking or Feynman. Not surprisingly, he challenged some of my views and I'm not in a position to say if he's right or wrong. Here are some of his comments:


You know, I would add, the problem with the whole “we create history by observation” thing is, it takes a whole lot of history for light to travel to us from distant galaxies, so it leads to a logical fallacy. Consider:

Suppose we create the past with our observations, then prior to observation the galaxies in the Hubble Deep Fields did not exist. Then where does the light come from? You see, we are actually seeing those galaxies as they existed long ago, some over 10 billion years ago.

We have never observed a single photon interfering with itself, quite the opposite actually: Ian Miller's answer to Can a particle really be in several places at the same time in the subatomic world, or is this just modern mysticism?. This is precisely why I cannot tolerate Hawking or Feynman, it’s absolute nonsense!

Regarding his last comment, I think Ian Miller has a point. I don’t always agree with Miller, but he has more knowledge on this topic than I do. I argue that the superposition, which we infer from the interference pattern, is in the future. The idea of a single photon taking 2 paths and interfering with itself is deduced solely from the interference pattern (see the linked video in the main text). My view is that superposition doesn’t really happen - it’s part of the QM description of the future. I admit that I effectively contradicted myself, and I’ve made an edit to the original post to correct that.