Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Tuesday 8 June 2021

What’s the most fundamental value?

This is a Question of the Month in Philosophy Now (Issue 143, April/May 2021). I wrote it very quickly, almost on impulse, in less than half an hour, though I then spent a lot of time polishing it.


The word ‘fundamental’ is key here: it implies the cornerstone or foundation upon which all other values are built. Carlo Rovelli, who is better known as a physicist than a philosopher, said in an online video that “we are not entities, we are relations”. And I believe this aphorism goes to the heart of what it means to be human. From our earliest cognitive moments to the very end of our days, the quality of our lives is largely dependent on our relationships with others. And, in that context, I would contend that the most important and fundamental value is trust. Without trust, honesty does not have a foothold, and arguably honesty is the glue in any relationship, be it familial, contractual or even between governments and the general public.

 

Psychologists will tell you that fear and trust cannot co-exist. If someone, whether a child or a spouse, is caught in a relationship governed by fear, yet is completely dependent, the inevitable consequence is an inability to find intimacy outside that relationship, because trust will be corroded if not destroyed.

 

Societies can’t function without trust: traffic would be chaos; projects wouldn’t be executed collaboratively. We all undertake financial transactions every day, and there is a strong element of trust involved in all of them that most of us take for granted. Cynics will argue that trust allows others to take advantage of you, which means trust only works if it is reciprocated. If enough people took advantage of those who trust, trust would evaporate and everyone would dissemble and obfuscate. Relationships would be restricted to one’s closest family, and wider interactions would be fraught with hidden agendas, even paranoia. And this is exactly what happens when governments mandate their citizenry to ‘out’ people who don’t toe the party line.

 

Everything that we value in our relationships and friendships, be it love, integrity, honesty, loyalty or respect, is forfeit without trust. As Carlo Rovelli intimated in his aphoristic declaration, it is through relationships that we are defined by others and that we define ourselves. It is through these relationships that we find love, happiness, security and a sense of belonging. We ultimately judge our lives by the relationships we form over time, both in our professional lives and our social lives. Without trust, they simply don’t exist, except as fakes.


                                                --------------------------



I wrote on this topic once before, in 2008. I deliberately avoided reading that post while I wrote this one and, to be honest, I’m glad I did, as this is a much better post. However, this one is a response to a specific question with a limit of 400 words. Choosing the answer was the easy part – it took seconds – arguing the case was more organic. I’ll add an addendum if it’s published.


Interestingly, 'trust' crops up in my fiction more than once. In the last story I wrote, it took centre stage.


Saturday 6 March 2021

The closest I’ve ever seen to someone explaining my philosophy

 I came across this 8min video of Paul Davies from 5 years ago, where I was surprised to find that he and I had very similar ideas regarding the ‘Purpose’ of the Universe. In more recent videos, he has lighter hair and has lost his moustache, which was a characteristic of his for as long as I’ve followed him.

 

Now, one might think that I shouldn’t be surprised, as I’ve been heavily influenced by Davies over many years (decades even) and read many of his books. But I thought he was a Deist, and maybe he was, because less than halfway through he admits he had recently changed his views.

 

But what makes me consider that this video probably comes closest to expressing my own philosophy is when he says that meaning or purpose has evolved and that it’s directly related to the fact that the Universe created the means to understand itself. Both these points I’ve been making for years. In his own words, “We unravel the plot”.


Or to quote John Wheeler (whom Davies admired): “The universe gives birth to consciousness, and consciousness gives meaning to the universe.”





P.S. This is also worth watching: his philosophy of mathematics, with which I concur. His metaphor of a ‘warehouse’ is unusual, yet very descriptive and germane in my view.


Thursday 24 December 2020

Does imagination separate us from AI?

 I think this is a very good question, but it depends on how one defines ‘imagination’. I remember having a conversation (via email) with Peter Watson, who wrote an excellent book, A Terrible Beauty (about the minds and ideas of the 20th Century) which covered the arts and sciences with equal erudition, and very little of the politics and conflicts that we tend to associate with that century. In reference to the topic, he argued that imagination was a word past its use-by date, just like introspection and any other term that referred to an inner world. Effectively, he argued that because our inner world is completely dependent on our outer world, it’s misleading to use terms that suggest otherwise.

It’s an interesting perspective, not without merit, when you consider that we all speak and think in a language that is totally dependent on an external environment from our earliest years. 

 

But memory for us is not at all like memory in a computer, which provides a literal record of whatever it stores, including images, words and sounds. On the contrary, our memories of events are ‘reconstructions’, which tend to become less reliable over time. Curiously, the imagination apparently uses the same part of the brain as memory. I’m talking about semantic memory, not muscle memory, which is completely different physiologically. So the imagination, from the brain’s perspective, is like a memory of the future. In other words, it’s a projection into the future of something we might desire or fear or just expect to happen. I believe that many animals have this same facility, which they demonstrate when they hunt or, alternatively, evade being hunted.

 

Raymond Tallis, who has a background in neuroscience and writes books as well as a regular column in Philosophy Now, had this to say, when talking about free will:

 

Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.

 

I find a correspondence here with Richard Feynman’s ‘sum over histories’ interpretation of quantum mechanics (QM). There are, in fact, an infinite number of possible paths in the future, but only one is ‘actualised’ in the past.

 

But the key here is imagination. It is because we can imagine a future that we attempt to bring it about - that's free will. And what we imagine is affected by our past, our emotions and our intellectual considerations, but that doesn't make it predetermined.

 

Now, recent advances in AI would appear to do something similar in the form of making predictions based on recordings of past events. So what’s the difference? Well, if we’re playing a game of chess, there might not be a lot of difference, and AI has reached the stage where it can do it even better than humans. There are even computer programmes available now that try to predict what I’m going to write next, based on what I’ve already written. How do you know this hasn’t been written by a machine?

 

Computers use data – lots of it – and use it mindlessly, which means the computer doesn’t really know what the data means in the way we do. A computer can win a game of chess, but it requires a human watching the game to appreciate what it actually did. In the same way, a computer can distinguish one colour from another, including different shades of a single colour, without ever ‘seeing’ a colour the way we do.

 

So, when we ‘imagine’, we fabricate a mindscape that affects us emotionally. The most obvious examples are in art, including music and stories. We now have computers also creating works of art, including music and stories. But here’s the thing: the computer cannot respond to these works of art the way we do.

 

Imagination is one of the fundamental attributes that makes us human. An AI can and will (in the future) generate scenarios and select the one that produces the best outcome, given specific criteria. But, even in these situations, it is a tool that a human will use to analyse enormous amounts of data that would be beyond our capabilities. But I wouldn’t call it imagination any more than I would say an AI could see colour.


Friday 11 September 2020

Does history progress? If so, to what?

This is another Question of the Month from Philosophy Now. The last two I submitted weren’t published, but I really don’t mind as the answers they did publish were generally better than mine. Normally, with a question like this, you know what you want to say before you start. In other words, you know what your conclusion is. But, in this case, I had no idea.

 

At first, I wasn’t going to answer, because I thought the question was a bit obtuse. However, I couldn’t help myself. I started by analysing the question and then just followed the logic.


 

 

I found a dissonance in this question, because ‘history’, by definition, is about the past and ‘progress’ implies projection into the future. In fact, a dictionary definition of history tells us it’s “the study of past events, particularly in human affairs”. And a dictionary definition of progress is “forward or onward movement to a destination”. If one puts the two together, there is an implication that history has a ‘destination’, which is also implicit in the question.

 

I’ve never studied history per se, but if one studies the evolution of ideas in any field, be it science, philosophy, arts, literature or music, one can’t fail to confront the history of human ideas, in all their scope and diversity, and all the richness that has arisen out of that, imbued in culture as well as the material and social consequences of civilisations.

 

There are two questions, one dependent on the other, so we need to address the first one first. If one uses metrics like health, wealth, living conditions, peace, then there appears to be progress over the long term. But if one looks closer, this progress is uneven, even unequal, and one wonders if the future will be even more unequal than the present, as technologies become more available and affordable to some societies than others.

 

Progress implies change, and the 20th Century saw more change than in the entire previous history of humankind. I expect the 21st Century will see more change still, which, like the 20th Century, will be largely unpredictable. This leads to the second question, which I’ll rephrase to make it more germane to my discussion: what is the ‘destination’ and do we have control over it?

 

Humans, both as individuals and collectives, like to believe that they control their destiny. I would argue that, collectively, we are currently at a crossroads, which is evidenced by the political polarisation we see everywhere in the Western world.

 

But this crossroads has social and material consequences for the future. It’s epitomised by the debate over climate change, which is a litmus test for whether we control our destiny or not. It not only requires political will, but the consensus of a global community, and not just the scientific community. If we do nothing, it will paradoxically have a bigger impact than taking action. But there is hope: the emerging generation appears more predisposed to act than the current one.


Wednesday 26 August 2020

Did the Universe see us coming?

 I recently read The Grand Design by Stephen Hawking (2010), co-authored by Leonard Mlodinow, who gets ‘second billing’ (with much smaller font) on the cover, so one is unsure what his contribution was. Having said that, other titles listed by Mlodinow (Euclid’s Window and Feynman’s Rainbow) make me want to search him out. But the prose style does appear to be quintessential Hawking, with liberal lashings of one-liners that we’ve come to know him for. Also, I think one can confidently assume that everything in the book has Hawking’s imprimatur.

 

I found this book so thought-provoking that, on finishing it, I went back to the beginning, so I could re-read his earlier chapters in the context of his later ones. On the very first page he says, rather provocatively, ‘philosophy is dead’. He then spends the rest of the book giving his account of ‘life, the universe and everything’ (which, in one of his early quips, ‘is not 42’). He ends the first chapter (introduction, really) with 3 questions:

 

1) Why is there something rather than nothing?

2) Why do we exist?

3) Why this particular set of laws and not some other?

It’s hard to get more philosophical than this.

 

I haven’t read everything he’s written, but I’m familiar with his ideas and achievements, as well as some of his philosophy and personal prejudices. ‘Prejudice’ is a word that is usually used pejoratively, but I use it in the same sense I use it on myself, regarding my ‘pet’ theories or beliefs. For example, one of my prejudices (contrary to accepted philosophical wisdom) is that AI will not achieve consciousness.

 

Nevertheless, Hawking expresses some ideas that I would not have expected of him. His chapter titled, What is Reality? is where he first challenges the accepted wisdom of the general populace. He argues, rather convincingly, that there are only ‘models of reality’, including the ones we all create inside our heads. He doesn’t say there is no objective reality, but he says that, if we have 2 or more ‘models of reality’ that agree with the evidence, then one cannot say that one is ‘more true’ than another.

 

For example, he says, ‘although it is not uncommon for people to say that Copernicus proved Ptolemy wrong, that is not true’. He elaborates: ‘one can use either picture as a model of the universe, for our observations of the heavens can be explained by assuming either the earth or the sun is at rest’.

 

However, as I’ve pointed out in other posts, either the Sun goes around the Earth or the Earth goes around the Sun. It has to be one or the other, so one of those models is wrong.

 

He argues that we only ‘believe’ there is an ‘objective reality’ because it’s the easiest model to live with. For example, we don’t know whether an object disappears or not when we go into another room; nevertheless, he cites Hume, ‘who wrote that although we have no rational grounds for believing in an objective reality, we also have no choice but to act as if it’s true’.

 

I’ve written about this before. It’s a well known conundrum (in philosophy) that you don’t know if you’re a ‘brain-in-a-vat’. But I don’t know of a single philosopher who thinks that they are. The proof is in dreams. We all have dreams that we can’t distinguish from reality until we wake up. Hawking also referenced dreams as an example of a ‘reality’ that doesn’t exist objectively. So dreams are completely solipsistic to the extent that all our senses will play along, including taste.

 

Considering Hawking’s confessed aversion to philosophy, this is all very Kantian. We can never know the thing-in-itself. Kant even argued that time and space are a priori constructs of the mind. And if we return to the ‘model of reality’ that exists in your mind: if it didn’t accurately reflect the external objective reality outside your mind, the consequences would be fatal. To me, this is evidence that there is an objective reality independent of one’s mind - it can kill you. However, if you die in a dream, you just wake up.

 

Of course, this all leads to subatomic physics, where the only models of reality are mathematical. But even in this realm, we rely on predictions made by these models to determine if they reflect an objective reality that we can’t see. To return to Kant, the thing-in-itself is dependent on the scale at which we ‘observe’ it. So, at the subatomic scale, our observations may be tracks of particles captured in images, not what we see with the naked eye. The same can be said on the cosmic scale; observations dependent on instruments that may not even be stationed on Earth.

 

To get a different perspective, I recently read an article on ‘reality’ written by Roger Penrose (New Scientist, 16 May 2020) which was updated from one he wrote in 2006. Penrose has no problem with an ‘objective independent reality’, and he goes to some lengths (with examples) to show the extraordinary agreement between our mathematical models and physical reality. 

 

Our mathematical models of physical reality are far from complete, but they provide us with schemes that model reality with great precision – a precision enormously exceeding that of any description free of mathematics.

 

(It should be pointed out that Penrose and Hawking won a joint prize in physics for their work in cosmology.)

 

But Penrose gets to the nub of the issue when he says, ‘...the “reality” that quantum theory seems to be telling us to believe in is so far removed from what we are used to that many quantum theorists would tell us to abandon the very notion of reality’. But then he says in the spirit of an internal dialogue, ‘Where does quantum non-reality leave off and the physical reality that we actually experience begin to take over? Present day quantum theory has no satisfactory answer to this question’. (I try to answer this below.)

 

Hawking spends an entire chapter on this subject, called Alternative Histories. For me, this was the most revealing chapter in his book. He discusses at length Richard Feynman’s ‘sum over histories’ methodology, called QED or quantum electrodynamics. I say methodology instead of theory, because it’s a mathematical method that has proved extraordinarily accurate, in concordance with Penrose’s claim above. Feynman compared it (from memory) to measuring the distance between New York and Seattle to within the width of a human hair.

 

Basically, as Hawking expounds, in Feynman’s theory a quantum particle can take every path imaginable (in the famous double-slit experiment, say) and then he adds them all together, but because they’re waves, most of them cancel each other out. This leads to the principle of superposition, where a particle can be in 2 places or 2 states at once. However, as soon as it’s ‘observed’ or ‘measured’ it becomes one particle in one state. In fact, according to standard quantum theory, it’s possible for a single photon to be split into 2 paths and be ‘observed’ to interfere with itself, as described in this video. (I've edited this after Wes Hansen from Quora challenged it). I've added a couple of Wes's comments in an addendum below. Personally, I believe 'superposition' is part of the QM description of the future, as alluded to by Freeman Dyson (see below). So I don't think superposition really occurs.
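To give a feel for how "most of them cancel each other out" works arithmetically, here is a minimal sketch of my own (a toy two-path model, not Hawking's or Feynman's actual formalism): each path to a point on the screen contributes a complex amplitude whose phase depends on the path length, and the detection probability is the squared magnitude of their sum.

import numpy as np

# Toy two-path 'sum over histories' for a double-slit-like setup.
# Each path contributes a complex amplitude exp(i*k*r); the relative
# probability of detection at a screen position is the squared magnitude
# of the summed amplitudes, so out-of-phase paths cancel.

wavelength = 1.0                      # arbitrary units
k = 2 * np.pi / wavelength
slit_separation = 5.0
screen_distance = 100.0

def intensity(y):
    """Relative detection probability at screen position y."""
    r1 = np.sqrt(screen_distance**2 + (y - slit_separation / 2)**2)
    r2 = np.sqrt(screen_distance**2 + (y + slit_separation / 2)**2)
    amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)   # sum over the two paths
    return abs(amplitude)**2

for y in [0.0, 5.0, 10.0, 15.0, 20.0]:
    print(f"y = {y:5.1f}   relative intensity = {intensity(y):.3f}")

Where the two path lengths differ by half a wavelength the contributions cancel; where they differ by a whole number of wavelengths they reinforce, which is the interference pattern.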

 

Hawking contends that the ‘alternative histories’ inherent in Feynman’s mathematical method not only affect the future but also the past. What he is implying is that when an observation is made it determines the past as well as the future. He talks about a ‘top down’ history in lieu of a ‘bottom up’ history, which is the traditional way of looking at things. In other words, cosmological history is one of many ‘alternative histories’ (his terminology) that evolve from QM.

 

This leads to a radically different view of cosmology, and the relation between cause and effect. The histories that contribute to the Feynman sum don’t have an independent existence, but depend on what is being measured. We create history by our observation, rather than history creating us (my emphasis).

 

As it happens, John Wheeler made the exact same contention, and proposed that it could happen on a cosmic scale when we observed light from a distant quasar being ‘gravitationally lensed’ by an intervening galaxy or black hole (refer to Davies’ paper, linked below). Hawking makes specific reference to Wheeler’s conjecture at the end of his chapter. It should be pointed out that Wheeler was a mentor to Feynman, and Feynman even referenced Wheeler’s influence in his Nobel Prize acceptance speech.

 

A contemporary champion of Wheeler’s ideas is Paul Davies, and he even dedicates his book, The Goldilocks Enigma, to Wheeler.

 

Davies wrote a paper which is available on-line, where he describes Wheeler’s idea as the “…participatory universe” in which observers—minds, if you like—are inextricably tied to the concretization of the physical universe emerging from quantum fuzziness over cosmological durations.

 

In the same paper, Davies references and attaches an essay by Freeman Dyson, where he says, “Dyson concludes that a quantum description cannot be applied to past events.”

 

And this leads me back to Penrose’s question: how do we get the ‘reality’ we are familiar with from the mathematically modelled quantum world that strains our credulity? If Dyson is correct, and the past can only be described by classical physics, then QM only describes the future. So how does one reconcile this with Hawking’s alternative histories?

 

I’ve argued elsewhere that the path from the infinitely many paths of Feynman’s theory is only revealed when an ‘observation’ is made, which is consistent with Hawking’s point, quoted above. But it’s worth quoting Dyson, as well, because Dyson argues that the observer is not the trigger.

 

... the “role of the observer” in quantum mechanics is solely to make the distinction between past and future...

 

What really happens is that the quantum-mechanical description of an event ceases to be meaningful as the observer changes the point of reference from before the event to after it. We do not need a human observer to make quantum mechanics work. All we need is a point of reference, to separate past from future, to separate what has happened from what may happen, to separate facts from probabilities.

 

But, as I’ve pointed out in other posts, consciousness exists in a constant present. The time for ‘us’ is always ‘now’, so the ‘point of reference’, that is key to Dyson’s argument, correlates with the ‘now’ of a conscious observer.

 

We know that ‘decoherence’ is not necessarily dependent on an observer, but dependent on the wave function interacting with ‘classical physics’ objects, like a laboratory apparatus or any ‘macro’ object. Dyson’s distinction between past and future makes sense in this context. Having said that, the interaction could still determine the ‘history’ of the quantum event (like a photon), even if it traversed the entire Universe, as in the cosmic background radiation (for example).

 

In Hawking’s subsequent chapters, including one titled, Choosing Our Universe, he invokes the anthropic principle. In fact, there are 2 anthropic principles called the ‘weak’ and the ‘strong’. As Hawking points out, the weak anthropic principle is trivial, because, as I’ve pointed out, it’s a tautology: Only universes that produce observers can be observed.

 

On the other hand, the strong anthropic principle (which Hawking invokes) effectively says, Only universes that produce observers can ‘exist’. One can see that this is consistent with Davies’ ‘participatory universe’.

 

Hawking doesn’t say anything about a ‘participatory universe’, but goes into some detail about the fine-tuning of our universe for life, in particular the ‘miracle’ of how carbon can exist (predicted by Fred Hoyle). There are many such ‘flukes’ in our universe, including the cosmological constant, which Hawking also discusses at some length.

 

Hawking also explains how an entire universe could come into being out of ‘nothing’ because the ‘negative’ gravitational energy cancels all the ‘positive’ matter and radiation energy that we observe (I assume this also includes dark energy and dark matter). Dark energy is really the cosmological constant. Its effect increases with the age of the Universe, because, as the Universe expands, gravitational attraction over cosmological distances decreases while ‘dark energy’ (which repulses) doesn’t. Dark matter explains the stable rotation of galaxies, without which, they’d fly apart.
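To illustrate that last point numerically, here is my own back-of-the-envelope sketch (not from Hawking's book; the density figures are rough present-day values): matter density dilutes with the cube of the scale factor as space expands, while the cosmological constant's energy density stays fixed, so its relative influence grows with the age of the Universe.

# Back-of-the-envelope sketch: matter density dilutes as the cube of the
# scale factor 'a', while dark energy (a cosmological constant) stays put,
# so dark energy increasingly dominates as the Universe expands.
# Rough present-day values in units of the critical density (assumed figures).

matter_density_now = 0.3
dark_energy_density = 0.7

for a in [0.5, 1.0, 2.0, 4.0]:                   # scale factor relative to today
    matter = matter_density_now / a**3            # dilutes with volume
    ratio = dark_energy_density / matter
    print(f"a = {a:3.1f}   matter = {matter:6.3f}   dark energy / matter = {ratio:7.2f}")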

 

Hawking also describes the Hartle-Hawking model of cosmology (without mentioning James Hartle) whereby he argues that in a QM only universe (at its birth), time was actually a 4th spatial dimension. He calls this the ‘no-boundary’ universe, because, as John Barrow once quipped, ‘Once upon a time, there was no time’. I admit that this ‘model’ appeals to me, because in quantum cosmology, time disappears mathematically.

 

Hawking’s philosophical view is the orthodox one that, if there is a multiverse, then the anthropic principle (weak or strong) ensures that there must be a universe where we can exist. I think there are very good arguments for the multiverse (the cosmological variety, not the QM multiple worlds variety) but I have a prejudice against an infinity of them because then there would be an infinity of me.

 

Hawking is a well known atheist, so, not surprisingly, he provides good arguments against the God hypothesis. There could be a demiurge, but if there is, there is no reason to believe it coincides with any of the Gods of mythology. Every God I know of has cultural ties and that includes the Abrahamic God.

 

For someone who claims that ‘philosophy is dead’, Hawking’s book is surprisingly philosophical and thought-provoking, as all good philosophy should be. In his conclusions, he argues strongly for ‘M theory’, believing it will provide the theory (or theories) of everything that physicists strive for. M theory, as Hawking acknowledges, requires ‘supersymmetry’, and from what I know and read, there is little or no evidence of it thus far. But I agree with Socrates that every mystery resolved only uncovers more mysteries, which history, thus far, has confirmed over and over.

 

My views have evolved and, along with the ‘strong anthropic principle’, I’m becoming increasingly attracted to Wheeler’s ‘participatory universe’, because the more of its secrets we learn, the more it appears as if ‘the Universe saw us coming’, to paraphrase Freeman Dyson.



Addendum (23Apr2021): Wes Hansen, whom I met on Quora, and who has strong views on this topic, told me outright that he's not a fan of Hawking or Feynman. Not surprisingly, he challenged some of my views and I'm not in a position to say if he's right or wrong. Here are some of his comments:


You know, I would add, the problem with the whole “we create history by observation” thing is, it takes a whole lot of history for light to travel to us from distant galaxies, so it leads to a logical fallacy. Consider:

Suppose we create the past with our observations, then prior to observation the galaxies in the Hubble Deep Fields did not exist. Then where does the light come from? You see, we are actually seeing those galaxies as they existed long ago, some over 10 billion years ago.

We have never observed a single photon interfering with itself, quite the opposite actually: Ian Miller's answer to Can a particle really be in several places at the same time in the subatomic world, or is this just modern mysticism?. This is precisely why I cannot tolerate Hawking or Feynman, it’s absolute nonsense!

Regarding his last point, I think Ian Miller has a point. I don't always agree with Miller, but he has more knowledge on this topic than me. I argue that the superposition, which we infer from the interference pattern, is in the future. The idea of a single photon taking 2 paths and interfering with itself is deduced solely from the interference pattern (see linked video in main text). My view is that superposition doesn't really happen - it's part of the QM description of the future. I admit that I effectively contradicted myself, and I've made an edit to the original post to correct that.


 

Friday 3 July 2020

Road safety starts with the driver, not the vehicle

There was recently (pre-COVID-19) a road-safety ad shown in some cinemas in Australia (and possibly on TV) aimed at motorcyclists. It shows a motorcyclist on a winding road, which I guess is on the other side of Healesville, with a voiceover of his thoughts. He sees a branch on the road to avoid, he sees a curve coming up, he consciously thinks through changing gears, including clutch manipulation, and he sees a van ahead, which he overtakes. The point is that there is a continuous internal dialogue based on what he observes while he’s riding.

What I find intriguing is that this ad is obviously targeted at motorcyclists, yet I fail to see why it doesn’t equally apply to car drivers. I learned to drive (decades ago) from riding motorcycles, not only on winding roads but in city and suburban traffic. I used to do a daily commute along one of the busiest arterial roads from East Sydney to Western Sydney and back, which I’d still claim to be the most dangerous stretch of driving I ever did in my life. 

I had at least one close call and one accident when a panel van turned left into a side road from the middle lane while I was in the left lane (vehicles travel on the left side, a la Britain, in Australia). I not only went over the top of my bike but the van started to drag the bike over me while I was trapped in the gutter, and then he stopped. I was very young and unhurt and he was older and managed to convince me that it was my fault. My biggest concern was not whether I had sustained injuries (I hadn’t) but that the bike was unrideable.

Watching the ad on the screen, which is clearly aimed at a younger version of myself, I thought that’s how I drive all the time, and I learned that from riding bikes, even though I haven’t ridden a bike in more than 3 decades. It occurred to me that most people probably don’t – they put their cars on cruise-control, now ‘adaptive’, and think about something else entirely, possibly having a conversation with someone who is not even in the vehicle.

In Australia, speed limits get lower and lower every year, so that drivers don’t have to think about what they’re doing. The biggest cause of accidents now, I understand, is distraction of the driver. We are transitioning (for want of a better word) to fully autonomous vehicles. In the interim, it seems that since we don’t have automaton cars, we need automaton drivers. Humans actually don’t make good robots. The road-safety ad aimed at motorcyclists is the exact opposite of this thinking.

I’m anomalous in that I still drive a manual and actually enjoy it. I’ve found others of my generation, including women, who feel that driving a manual forces them to think about what they’re doing in a way that an auto doesn’t. In a manual, you are constantly anticipating what gear you need, whether it be for traffic or for a corner, to slow down or to speed up (just like the rider in the ad). It becomes an integral part of driving. I have a 6-speed, which is the same as I had on my first 2 motorbikes, and I use the gears in exactly the same way. We are taught to get into top gear as quickly as possible and stay there. But, riding a bike, you soon learn that this is nonsense. In my car, you ideally need to be doing 100 km/h (60 mph) to change into top gear.

We have cars that do their best to take the driving out of driving, and I’m not convinced that makes us safer, though most people seem to think it does.


Addendum: I acknowledge I’m a fossil like the car I drive. I do drive autos, and it doesn’t change the way I drive, but I don’t think I’ve ever enjoyed the experience. I accept that, in the future, cars probably won’t be enjoyable to drive at all, because they will have no 'feeling'. The Tesla represents the future of motoring, whether autonomous or not.

Monday 18 May 2020

An android of the seminal android storyteller

I just read a very interesting true story about an android built in the early 2000s, modelled on the renowned sci-fi author Philip K Dick in both personality and physical appearance. It was displayed in public at a few prominent events where it interacted with the public in 2005, then was lost on a flight between Dallas and Las Vegas in 2006, and has never been seen since. The book is called Lost In Transit; The Strange Story of the Philip K Dick Android by David F Duffy.

You have to read the back cover to know it’s non-fiction published by Melbourne University Press in 2011, so surprisingly a local publication. I bought it from my local bookstore at a 30% discount price as they were closing down for good. They were planning to close by Good Friday but the COVID-19 pandemic forced them to close a good 2 weeks earlier and I acquired it at the 11th hour, looking for anything I might find interesting.

To quote the back cover:

David F Duffy was a postdoctoral fellow at the University of Memphis at the time the android was being developed... David completed a psychology degree with honours at the University of Newcastle [Australia] and a PhD in psychology at Macquarie University, before his fellowship at the University of Memphis, Tennessee. He returned to Australia in 2007 and lives in Canberra with his wife and son.

The book is written chronologically and is based on extensive interviews with the team of scientists involved, as well as Duffy’s own personal interaction with the android. He had an insider’s perspective as a cognitive psychologist who had access to members of the team while the project was active. Like everyone else involved, he is a bit of a sci-fi nerd with a particular affinity and knowledge of the works of Philip K Dick.

My specific interest is in the technical development of the android and how its creators attempted to simulate human intelligence. As a cognitive psychologist, with professionally respected access to the team, Duffy is well placed to provide some esoteric knowledge to an interested bystander like myself.

There were effectively 2 people responsible (or 2 team leaders), David Hanson and Andrew Olney, who were brought together by Professor Art Graesser, head of the Institute of Intelligent Systems, a research lab in the psychology building at the University of Memphis (hence the connection with the author).

Hanson is actually an artist, and his specialty was building ‘heads’ with humanlike features and humanlike abilities to express facial emotions. His heads included mini-motors that pulled on a ‘skin’, which could mimic a range of facial movements, including talking.

Olney developed the ‘brains’ of the android, which actually resided on a laptop connected by wires going into the back of the android’s head. Hanson’s objective was to make an android head that was so humanlike that people would interact with it on an emotional and intellectual level. For him, the goal was to achieve ‘empathy’. He had made at least 2 heads before the Philip K Dick project.

Even though the project got the ‘blessing’ of Dick’s daughters, Laura and Isa, and access to an inordinate amount of material, including transcripts of extensive interviews, they had mixed feelings about the end result, and, tellingly, they were ‘relieved’ when the head disappeared. It suggests that it’s not the way they wanted him to be remembered.

In a chapter called Life Inside a Laptop, Duffy gives a potted history of AI, specifically in relation to the Turing test, which challenges someone to distinguish an AI from a human. He also explains the 3 levels of processing that were used to create the android’s ‘brain’. The first level was what Olney called ‘canned’ answers, which were pre-recorded answers to obvious questions and interactions, like ‘Hi’, ‘What’s your name?’, ‘What are you?’ and so on. Another level was ‘Latent Semantic Analysis’ (LSA), which was originally developed in a lab in Colorado, with close ties to Graesser’s lab in Memphis, and was the basis of Graesser’s pet project, ‘AutoTutor’, with Olney as its ‘chief programmer’. AutoTutor was an AI designed to answer technical questions as a ‘tutor’ for students in subjects like physics.

To create the Philip K Dick database, Olney downloaded all of Dick’s opus, plus a vast collection of transcribed interviews from later in his life. The author conjectures that ‘There is probably more dialogue in print of interviews with Philip K Dick than any other person, alive or dead.’

The third layer ‘broke the input (the interlocutor’s side of the dialogue) into sections and looked for fragments in the dialogue database that seemed relevant’ (to paraphrase Duffy). Duffy gives a cursory explanation of how LSA works – a mathematical matrix using vector algebra – that’s probably a little too esoteric for the content of this post.
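Since Duffy only describes LSA in outline, the following is my own toy reconstruction of the idea, not Olney's actual code (the fragments and the query are invented): stored dialogue fragments become columns of a term-document count matrix, a truncated singular value decomposition projects them into a small 'latent semantic' space, and an incoming question is matched to its nearest fragment by cosine similarity.

import numpy as np

# Toy reconstruction of LSA-style retrieval (not the android's actual implementation).
# Dialogue fragments become columns of a term-document count matrix; a truncated
# SVD gives a low-dimensional 'latent semantic' space, and an incoming question
# is matched to the closest stored fragment by cosine similarity.

fragments = [
    "reality is that which when you stop believing in it does not go away",
    "the android draws its answers from novels and transcribed interviews",
    "memory and identity are recurring themes in the novels",
]
query = "what is reality"

vocab = sorted({w for f in fragments for w in f.split()})
index = {w: i for i, w in enumerate(vocab)}

def vectorise(text):
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            v[index[w]] += 1
    return v

A = np.array([vectorise(f) for f in fragments]).T          # terms x documents
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                                       # keep 2 latent dimensions
doc_vectors = Vt[:k].T                                      # each row: a fragment in latent space
query_vector = np.diag(1 / s[:k]) @ U[:, :k].T @ vectorise(query)   # fold the query in

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

best = max(range(len(fragments)), key=lambda i: cosine(doc_vectors[i], query_vector))
print("closest fragment:", fragments[best])

The point of the reduced space is that fragments can be matched on overall meaning rather than exact word overlap, which is roughly the 'search and synthesise' behaviour described above.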

In practice, this search and synthesise approach could create a self-referencing loop, where the android would endlessly riff on a subject, going off on tangents, that sounded cogent but never stopped. To overcome this, Olney developed a ‘kill switch’ that removed the ‘buffer’ he could see building up on his laptop. At one display at ComicCon (July 2005) as part of the promotion for A Scanner Darkly (a rotoscope movie by Richard Linklater, starring Keanu Reeves), Hanson had to present the android without Olney, and he couldn’t get the kill switch to work, so Hanson stopped the audio with the mouth still working and asked for the next question. The android simply continued with its monolithic monologue which had no relevance to any question at all. I think it was its last public appearance before it was lost. Dick’s daughters, Laura and Isa, were in the audience and they were not impressed.

It’s a very informative and insightful book, presented like a documentary without video, capturing a very quirky, unique and intellectually curious project. There is a lot of discussion about whether we can produce an AI that can truly mimic human intelligence. For me, the pertinent word in that phrase is ‘mimic’, because I believe that’s the best we can do, as opposed to having an AI that actually ‘thinks’ like a human. 

In many parts of the book, Duffy compares what Graesser’s team is trying to do with LSA with how we learn language as children, where we create a memory store of words, phrases and stock responses, based on our interaction with others and the world at large. It’s a personal prejudice of mine, but I think that words and phrases have a ‘meaning’ to us that an AI can never capture.

I’ve contended before that language for humans is like ‘software’ in that it is ‘downloaded’ from generation to generation. I believe that this is unique to the human species and it goes further than communication, which is its obvious genesis. It’s what we literally think in. The human brain can connect and manipulate concepts in all sorts of contexts that go far beyond the simple need to tell someone what we want them to do in a given situation, or to ask what they did with their time the day before or last year or whenever. We can relate concepts that have a spiritual connection or are mathematical or are stories. In other words, we can converse on topics that relate not just to physical objects, but are products of pure imagination.

Any android follows a set of algorithms that are designed to respond to human generated dialogue, but, despite appearances, the android has no idea what it’s talking about. Some of the sample dialogue that Duffy presented in his book drifted into gibberish as far as I could tell, and that didn’t surprise me.

I’ve explored the idea of a very advanced AI in my own fiction, where ‘he’ became a prominent character in the narrative. But he (yes, I gave him a gender) was often restrained by rules. He can converse on virtually any topic because he has a Google-like database and he makes logical sense of someone’s vocalisations. If they are not logical, he’s quick to point it out. I play cognitive games with him and his main interlocutor because they have a symbiotic relationship. They spend so much time together that they develop a psychological interdependence that’s central to the narrative. It’s fiction, but even in my fiction I see a subtle difference: he thinks and talks so well, he almost passes for human, but he is a piece of software that can make logical deductions based on inputs and past experiences. Of course, we do that as well, and we do it so well it separates us from other species. But we also have empathy, not only with other humans, but other species. Even in my fiction, the AI doesn’t display empathy, though he’s been programmed to be ‘loyal’.

Duffy also talks about the ‘uncanny valley’, which I’ve discussed before. Apparently, Hanson believed it was a ‘myth’ and that there was no scientific data to support it. Duffy appears to agree. But according to a New Scientist article I read in Jan 2013 (by Joe Kloc, a New York correspondent), MRI studies tell another story. Neuroscientists believe the phenomenon is real and is caused by a cognitive dissonance between 3 types of empathy: cognitive, motor and emotional. Apparently, it’s emotional empathy that breaks the spell of suspended disbelief.

Hanson claims that he never saw evidence of the ‘uncanny valley’ with any of his androids. On YouTube you can watch a celebrity android called Sophie, and I didn’t see any evidence of the phenomenon with her either. But I think the reason is that none of these androids appear human enough to evoke the response. The uncanny valley is a sense of unease and dissociation we would feel because it’s unnatural; similar to seeing a ghost - a human in all respects except actually being flesh and blood.

I expect that, as androids like the Philip K Dick simulation and Sophie become more commonplace, the sense of ‘unnaturalness’ will dissipate - a natural consequence of habituation. Androids in movies don’t have this effect, but then a story is a medium of suspended disbelief already.

Sunday 10 May 2020

Logic, analysis and creativity

I’ve talked before about the apparent divide between arts and humanities, and science and technology. Someone once called me a polymath, but I don’t think I’m expert enough in any field to qualify. However, I will admit that, for most of my life, I’ve had a foot in both camps, to use a well-worn metaphor. At the risk of being self-indulgent, I’m going to discuss this dichotomy in reference to my own experiences.

I’ve worked in the engineering/construction industry most of my adult life, yet I have no technical expertise there either. Mostly, I worked as a planning and cost control engineer, which is a niche activity that I found I was good at. It also meant I got to work with accountants and lawyers as well as engineers of all disciplines, along with architects. 

The reason I bring this up is because planning is all about logic – in fact, that’s really all it is. At its most basic, it’s a series of steps, some of which are sequential and some in parallel. I started doing this before computers did a lot of the work for you. But even with computers, you have to provide the logic; so if you can’t do that, you can’t do professional planning. I make that distinction because it was literally my profession.
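To make "a series of steps, some of which are sequential and some in parallel" concrete, here is a bare-bones sketch of the sort of logic involved (invented tasks and durations, not any actual project or planning tool I used): each task's earliest start is simply the latest finish of the tasks it depends on.

# Bare-bones schedule logic: a task can start only when all of its
# predecessors have finished; tasks with no shared dependencies run in
# parallel. (Invented tasks and durations, purely for illustration.)

tasks = {
    # name:          (duration in days, predecessors)
    "design":        (10, []),
    "order steel":   (5,  ["design"]),
    "foundations":   (15, ["design"]),
    "frame":         (20, ["order steel", "foundations"]),
    "fit-out":       (25, ["frame"]),
}

finish = {}

def finish_day(name):
    """Earliest finish = latest finish among predecessors + own duration."""
    if name not in finish:
        duration, preds = tasks[name]
        start = max((finish_day(p) for p in preds), default=0)
        finish[name] = start + duration
    return finish[name]

for name in tasks:
    duration, _ = tasks[name]
    print(f"{name:12s} start: day {finish_day(name) - duration:3d}   finish: day {finish_day(name):3d}")

Here 'order steel' and 'foundations' overlap because neither depends on the other, and the overall duration is set by the longest chain of dependent tasks, which is the critical path.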

In my leisure time, I write stories and that also requires a certain amount of planning, and I’ve found there are similarities, especially when you have multiple plot lines that interweave throughout the story. For me, plotting is the hardest part of storytelling; it’s a sequential process of solving puzzles. And science is also about solving puzzles, all of which are beyond my abilities, yet I love to try and understand them, especially the ones that defy our intuitive sense of logic. But science is on a different level to both my professional activities and my storytelling. I dabble at the fringes, taking ideas from people much cleverer than me and creating a philosophical pastiche.

Someone on Quora (a student) commented once that studying physics exercised his analytical skills, which he then adapted to other areas of his life. It occurred to me that I have an analytical mind and that is why I took an interest in physics rather than the other way round. Certainly, my work required an analytical approach and I believe I also take an analytical approach to philosophy. In fact, I’ve argued previously that analysis is what separates philosophy from dogma. Anyway, I don’t think it’s unusual for us, as individuals, to take a skill set from one activity and apply it to another apparently unrelated one.

I wrote a post once about the 3 essential writing skills, being character development, evoking emotion and creating narrative tension. The key to all of these is character and, if one was to distil out the most essential skill of all, it would be to write believable dialogue, as if it was spontaneous, meaning unpremeditated, yet not boring or irrelevant to the story. I’m not at all sure it can be taught. Someone (Don Burrows) once said that jazz can’t be taught, because it’s improvisation by its very nature, and I’d argue the same applies to writing dialogue. I’ve always felt that writing fiction has more in common with musical composition than writing non-fiction. In both cases they can come unbidden into one’s mind, sometimes when one is asleep, and they’re both essentially emotive mediums.

But science too has its moments of creativity, indeed sheer genius; being a combination of sometimes arduous analysis and inspired intuition.

Sunday 9 February 2020

The confessions of a self-styled traveller in the world of ideas

Every now and then, on very rare occasions, you have a memory or a feeling that was so long ago that it feels almost foreign, like it was experienced by someone else. And, possibly it was, as I’m no longer the same person, either physically or in personality.

This particular memory was from when I was a teenager and I was aflame with idealism. It came to me, just today, while I was walking alongside a creek bed, so I’m not sure I can get it back now. It was when I believed I could pursue a career in science, and, in particular, physics. It was completely at odds with every other aspect of my life. At that time, I had very poor social skills and zero self-esteem. Looking back, it seems arrogant, but when you’re young you’re entitled to dream beyond your horizons, otherwise you don’t try.

This blog effectively demonstrates both the extent of my knowledge and the limits of my knowledge, in the half century since. I’ve been most fortunate to work with some very clever people. In fact, I’ve spent my whole working life with people cleverer than me, so I have no delusions.

I consider myself lucky to have lived a mediocre life. What do I mean by mediocre? Well, I’ve never been homeless, and I’ve never gone hungry and I’ve never been unable to pay my bills. I’m not one to take all that for granted; I think there is a good deal of luck involved in avoiding all of those pitfalls. Likewise, I believe I’m lucky not to be famous; I wouldn’t want my life under a microscope, whereby the smallest infraction of society’s rules could have me blamed and shamed on the world stage.

I’ve said previously that the people we admire most are those who seem to be able to live without a facade. I’m not one of those. My facade is that I’m clever: ever since my early childhood, I liked to spruik my knowledge in an effort to impress people, especially adults, and largely succeeded. I haven’t stopped, and this blog is arguably an extension of that impetus. But I will admit to a curiosity which was manifest from a very young age (pre high school), and that’s what keeps me engaged in the world of ideas. The internet has been most efficacious in this endeavour, though I’m also an avid reader of books and magazines, in the sciences, in particular.

But I also have a secret life in the world of fiction. And fiction is the best place to have a secret life. ELVENE is no secret, but it was written almost 2 decades ago. It was unusual in that it was ‘popular’. By popular, I don’t mean it was read by a multitude (it unequivocally wasn’t), but it was universally liked, like a ‘popular’ song. It had a dichotomous world: indigenous and futuristic. This was years before James Cameron’s Avatar, and it had a completely different storyline. I received accolades like, ‘I enjoyed every page’ and ‘I didn’t want it to end’ and ‘it practically played out like a movie in my head’.

ELVENE was an aberration – a one-off – but I don’t mind, seriously. My fiction has become increasingly dystopian. The advantage of sci-fi (I call mine, science-fantasy) is that you can create what-if worlds. In fact, an Australian literary scholar, Peter Nicholls, created The Encyclopedia of Science Fiction, and a TV doco was made of him called The What If Man.

Anyway, you can imagine isolated worlds, which evolve their own culture and government, not unlike what our world was like before sea and air travel compressed it. So one can imagine something akin to frontier territories where democracy is replaced by autocracy, which can be benevolent or oppressive or something in between. So I have an autocracy where the dictator limits travel both on and off his world, where clones are exploited as sex workers, and where the people who live there become accustomed to this culture. In other words, it’s not that different to cultures in our past (and, some might say, present). The dictator is less Adolf Hitler and more Donald Trump, though that wasn’t deliberate. Like all my characters, he takes on a life of his own and evolves in ways I don’t always anticipate. He’s not evil per se, but he knows how to manipulate people and he demands absolute loyalty, which is yet to be tested.

The thing is that you go where the story and the characters take you, and sometimes they take you into dark territory. But in the dark you look for light. “There’s a crack in everything; that’s how the light gets in” (Leonard Cohen). I confess I like moral dilemmas, and I feel I’ve created a cognitive dissonance not only for one of my characters but, possibly, for myself as a writer. (Graham Greene was the master of the moral dilemma, but he’s in another class.)

Last year I saw a play put on by my good friend, Elizabeth Bradley, The Woman in the Window, for Canberra REP. It includes a dystopian future that features sex workers as an integral part of the society. It was a surprise to see someone else addressing a similar scenario. The writer was a Kiwi, Alma De Groen, and she juxtaposed history (the dissident poet, Anna Akhmatova, in Stalin’s Russia) with a dystopian future Australia.

I take a risk by having female protagonists prominent in all my fiction. It’s a risk because there is a lot of controversy about so-called ‘cultural appropriation’. I increase that risk by portraying relationships from my female protagonists’ perspectives. However, there is always a sense that they all exist independently of me, which you can only appreciate if you willingly enter a secret world of fiction.

Wednesday 5 February 2020

Australia’s bush fires; 2019-2020

The one word that was used over and over again to describe this ongoing event over a period of 4-5 months was ‘unprecedented’. Australia is a continent unique in the world, not just because of its fauna and flora, but also because of its landscape and its weather. 

We are the second driest continent in the world (after Antarctica) and our river systems are unique. In the northern hemisphere, ‘flow ratios’ (maximum to average flows) for rivers and natural waterways are in the order of 10 to 1, but in Australia they are in the order of 100 to 1. We have the largest overflows on our dams compared to other countries. We are a country of droughts and floods, and bush fires have been a part of the environment for as long as I can remember in my half century (and more) of living here.

Having said all that, in the 200 plus years since ‘White European settlement’, no one has witnessed anything of this magnitude and ferocity in Australia, over this period of time and over such a large area of the country. ‘Unprecedented’ is absolutely the right word to describe this event.

Personally, I know of no one who was directly impacted by the fires. Correction: I know of one person who sustained property damage and whose business was affected, but who experienced no serious loss. I spent the Christmas/New Year period in an area directly affected, the Southern Highlands of NSW (it gets a special mention in the embedded video), and I saw firsthand the aftermath of a very small part of this whole catastrophe. Also, I have a niece who works full time in the RFS (Rural Fire Service) in NSW. She works in logistics, and I didn’t see her this Christmas.

One has to make special mention of the people we call the ‘fireys’, many of whom are unpaid volunteers, who risk their lives to save people and their property. I can’t watch this video without ‘tearing up’ in places. Once you start watching, you’ll find it very compelling viewing, and you’ll find it hard, if not impossible, to stop watching for its 48 min duration.

Four Corners is a renowned investigative programme in Australia that has won numerous awards for excellence in TV journalism. The ABC (Australian Broadcasting Corporation) has taken the unusual step of posting this episode on YouTube the day after it went to air (3 Feb 2020). Normally, you can’t view this outside Australia, but this is far too important for the world not to see.

I hope this is a turning point in the world’s consciousness on the subject of climate change. It’s a contentious subject, even in Australia, even after this event, but I’ve expressed my views on it, on this blog, as early as a decade ago.

This post is directly relevant to my previous post, if you haven’t read it.




Thursday 2 January 2020

Our heritage; our responsibility

I was going to post this on FaceBook, as it's especially relevant to current events happening right across Australia: an unprecedented bush fire season, like hell on Earth in some places. FB is not really a forum for philosophical discourse, but I might yet post it.


There is an overriding sensibility (not just in the West either) that Man has a special place in the scheme of things. Now, I’m going to be an existential heretic and assume that we do. We are unique in that we can intellectually grasp the very scale of the Universe and even speculate about its origins to the extent that we have a very good estimate of its age. To quote no one less than Einstein: “The most incomprehensible thing about the Universe is that it’s comprehensible.” And the point is that it’s comprehensible because of ‘Us’.

As Jeremy Lent points out in his book, The Patterning Instinct; A Cultural History of Humanity’s Search for Meaning, the belief that we are made in God’s image has created a misguided notion that the Universe (and Earth, in particular) was made especially for us.

As I said in my introduction, I’m willing to go along with this, because, if we take it seriously, it has even more serious ramifications. Assuming that there is a creator God, who made ‘Man’ in ‘His’ image, then ‘He’ has bequeathed us a very special responsibility: we are the Earth’s caretakers. And, quite frankly, we’re doing a terrible job.

The irony of this situation is that it would appear that atheists take this responsibility more seriously than theists, though I’m happy to be proven wrong.

The answer to this is also in my introduction, because we have the intellectual ability to not only read the past, but predict the future. It’s our special cognitive skills in ‘comprehensibility’ that give us the ‘edge’. In other words, it is science that provides us with the means to protect our heritage. We are currently doing the exact opposite.

Unlike a lot of people, I don't claim that atheism is superior to theism or vice versa. This is just an argument to demonstrate that either position can lead to the same conclusion.


Friday 27 September 2019

Is the Universe conscious?

This is another question on Quora, and whilst it may seem trivial, even silly, I give it a serious answer.

Because it’s something we take for granted, literally every day of our lives, I find that many discussions on consciousness tend to gloss over its preternatural, epiphenomenal qualities (for want of a better description) and are often seemingly dismissive of its very existence. So let me be blunt: without consciousness, there is no reality. For you. At all.

My views are not orthodox, even heretical, but they are consistent with what I know and with the rest of my philosophy. The question has religious overtones, but I avoid all theological references.

This is the original question:

Is the universe all knowing/conscious?

And this is my answer:

I doubt it very much. If you read books about cosmology (The Book of Universes by John D Barrow, for example) you’ll appreciate how late consciousness arrived in the Universe. According to current estimates, it’s the last 520 million years of 13.8 billion, which is less than 4% of its age.

And as Barrow explains, the Universe needs to be of the mind-boggling scale we observe to allow enough time for complex life (like us) to evolve.

Consciousness is still a mystery, despite advances made in neuroscience. In the latest issue of New Scientist (21 Sep 2019) it’s the cover story: The True Nature of Consciousness; with the attached promise: We’re Finally Cracking the Greatest Mystery of You. But when you read the article the author (neuroscientist, Michael Graziano) seems to put faith in advances in AI achieving consciousness. It’s not the first time I’ve come across this optimism, yet I think it’s misguided. I don’t believe AI will ever become conscious, because it’s not supported by the evidence.

All the examples of consciousness that we know about are dependent on life. In other words, life evolved before consciousness did. With AI, people seem to think that the reverse will happen: a machine intelligence will become conscious and therefore it will be alive. It contradicts everything we have observed to date.

It’s based on the assumption that when a machine achieves a certain level of intelligence, it will automatically become conscious. Yet many animals of so-called lower intelligence (compared to humans) have consciousness and they don’t become more conscious if they become more intelligent. Computers can already beat humans at complex games and they improve all the time, but not one of them exhibits consciousness.

This is slightly off-topic but relevant, because it demonstrates that consciousness is not dependent on just acquiring more machine intelligence.

I contend that consciousness is different from every other phenomenon we know about, because it has a unique relationship with time. Erwin Schrödinger, in his book What is Life?, made the observation that consciousness exists in a constant present. In other words, for a conscious observer, time is always ‘now’.

What’s more, I argue that it’s the only phenomenon that does – everything else we observe becomes the past as soon as it happens; just take a photo to demonstrate.

This means that, without memory, you wouldn’t know you were conscious at all and there are situations where this has happened. People have been rendered unconscious, yet continue to behave as if they’re conscious, but later have no memory of it. I believe this is because their brain effectively stopped ‘recording’.

Consciousness occupies no space, even though it appears to be the consequence of material activity – specifically, the neurons in our brains. Because it appears to have a unique relationship with time and it can’t be directly measured, I’m not averse to the idea that it exists in another dimension. In mathematics, higher dimensions are not as aberrant as we perceive them, and I’ve read somewhere that neuron activity can be ‘modelled’ in a higher mathematical dimension. This idea is very speculative and, I concede, too close to fringe-thinking for most people.

As far as the Universe goes, I like to point out that reality (for us) requires both a physical world and consciousness - without consciousness there might as well be nothing. The Universe requires consciousness to be self-realised. This is a variant on the strong anthropic principle, originally expressed by Brandon Carter.

The weak anthropic principle says that only universes containing observers can be observed, which is a tautology. The strong anthropic principle effectively says that only universes that allow conscious observers to emerge can exist, which is my point about the Universe requiring consciousness to be self-realised. The Universe is not teleological (if you were to rerun the Universe, you’d get a different result), but the Universe has the necessary mathematical parameters to allow sentient life to emerge, which makes it quasi-teleological.

In answer to your question, I don’t think the Universe is conscious from its inception, but it has built into its long evolutionary development the inherent capacity to produce, not only conscious observers, but observers who can grasp the means to comprehend its workings and its origins, through mathematics and science.