Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.
Showing posts with label Psychology. Show all posts

Tuesday, 17 June 2025

Sympathy and empathy; what’s the difference?

 This arose from an article I read in Philosophy Now (Issue 167, April/May 2025) by James R. Robinson, who developed his ideas while writing his MA thesis in the Netherlands. It prompted me to write a letter, which was published in the next issue (168, June/July 2025). It was given pole position, which in many periodicals would earn the appellation ‘letter of the week’ (or month or whatever). But I may be reading too much into it, because Philosophy Now group their letters by category, according to the topic they are addressing. Anyway, being first is a first for me.
 
They made some minor edits, which I’ve kept. The gist of my argument is that there is a dependency between sympathy and empathy, where sympathy is observed in one’s behaviour, but it stems from an empathy for another person – the ability to put ourselves in their shoes. This is implied in an example (provided by Robinson) rather than stated explicitly.
 
 
In response to James R. Robinson’s ‘Empathy & Sympathy’ in Issue 167, I contend that empathy is essential to a moral philosophy, both in theory and practice. For example, it’s implicit in Confucius’s rule of reciprocity, “Don’t do to others what you wouldn’t want done to yourself” and Jesus’s Golden Rule, “Do unto others as you’d have them do unto you.” Empathy is a requisite for the implementation of either. And as both a reader and writer of fiction, I know that stories wouldn’t work without empathy. Indeed, one study revealed that reading fiction improves empathy. The tests used ‘letter box’ photos of eyes to assess the subject’s ability to read the emotion of the characters behind the eyes (New Scientist, 25 June 2008).

The dependency between empathy and sympathy is implicit in the examples Robinson provides, like the parent picking up another parent’s child from school out of empathy for the person making the request. In most of these cases there is also the implicit understanding that the favour would be returned if the boot was on the other foot. Having said that, many of us perform small favours for strangers, knowing that one day we could be the stranger.

Robinson also introduces another term, ‘passions’; but based on the examples he gives – like pain – I would call them ‘sensations’ or ‘sensory responses’. Even anger is invariably a response to something. Fiction can also create sensory responses (or passions) of all varieties (except maybe physical pain, hunger, or thirst) – which suggests empathy might play a role there as well. In other words, we can feel someone else’s emotional pain, not to mention anger, or resentment, even if the person we’re empathising with is fictional.

The opposite to compassion is surely cruelty. We have world leaders who indulge in cruelty quite openly, which suggests it’s not an impediment to success; but it also suggests that there’s a cultural element that allows it. Our ability to demonise an outgroup is the cause of most political iniquities we witness, and this would require the conscious denial of sympathy and therefore empathy, because ultimately, it requires treating them as less than human, or as not-one-of-us.

Thursday, 29 May 2025

The role of the arts. Why did it evolve? Will AI kill it?

 As I mentioned in an earlier post this month, I’m currently reading Brian Greene’s book, Until the End of Time: Mind, Matter, and Our Search for Meaning in an Evolving Universe, which covers just about everything from cosmology to evolution to consciousness, free will, mythology, religion and creativity. He spends a considerable amount of time on storytelling, compared to other art forms, partly because it allows an easy segue from language to mythology to religion.
 
One of his points of extended discussion was in trying to answer the question: why did our propensity for the arts evolve, when it has no obvious survival value? He cites people like Steven Pinker, Brian Boyd (whom I discuss at length in another post) and even Darwin, among others. I won’t elaborate on these, partly due to space, and partly because I want to put forward my own perspective, as someone who actually indulges in an artistic activity, and who could see clearly how I inherited artistic genes from one side of my family (my mother’s side). No one showed the slightest inclination towards artistic endeavour on my father’s side (including my sister). But they all excelled in sport (including my sister), and I was rubbish at sport. One can see how sporting prowess could be a side-benefit to physical survival skills like hunting, but also achieving success in combat, which humans have a propensity for, going back to antiquity.
 
Yet our artistic skills are evident going back at least 30-40,000 years, in the form of cave-art, and one can imagine that other art forms like music and storytelling have been active for a similar period. My own view is that it’s sexual selection, which Greene discusses at length, citing Darwin among others, as well as detractors, like Pinker. The thing is that other species also show sexual selection, especially among birds, which I’ve discussed before a couple of times. The best known example is the peacock’s tail, but I suspect that birdsong also plays a role, not to mention the bower bird and the lyre bird. The lyre bird is an interesting one, because they too have an extravagant tail (I’m talking about the male of the species) which surely would be a hindrance to survival, and they perform a dance and are extraordinary mimics. And the only reason one can think that this might have evolutionary value at all is because the sole purpose of those specific attributes is to attract a mate.
 
And one can see how this is analogous to behaviour in humans, where it is the male who tends to attract females with their talents in music, in particular. As Greene points out, along with others, artistic attributes are a by-product of our formidable brains, but I think these talents would be useless if we hadn’t evolved in unison a particular liking for the product of these endeavours (also discussed by Greene), which we see even in the modern world. I’m talking about the fact that music and stories both seem to be essential sources of entertainment, evident in the success of streaming services, not to mention a rich history in literature, theatre, ballet and more recently, cinema.
 
I’ve written before that there are 2 distinct forms of cognitive ability: creative and analytical; and there is neurological evidence to support this. The point is that having an analytical brain is just as important as having a creative one, otherwise scientific theories and engineering feats, which humans seem uniquely equipped to produce, would never have happened, even going back to ancient artefacts like Stonehenge and both the Egyptian and Mayan pyramids. Note that these all happened on different continents.
 
But there are times when the analytical and creative seem to have a synergistic effect, and this is particularly evident when it comes to scientific breakthroughs – a point, unsurprisingly, not lost on Greene, who cites Einstein’s groundbreaking discoveries in relativity theory as a case-in-point.
 
One point that Greene doesn’t make is that there has been a cultural evolution that has effectively overtaken biological evolution in humans, and only in humans I would suggest. And this has been a direct consequence of our formidable brains and everything that goes along with that, but especially language.
 
I’ve made the point before that our special skill – our superpower, if you will – is the ability to nest concepts within concepts, which we do with everything, not just language, but it would have started with language, one would think. And this is significant because we all think in a language, including the ability to manipulate abstract concepts in our minds that don’t even exist in the real world. And nowhere is this more apparent than in the art of storytelling, where we create worlds that only exist in the imagination of someone’s mind.
 
But this cultural evolution has created civilisations and all that they entail, and survival of the fittest has nothing to do with eking out an existence in some hostile wilderness environment. These days, virtually everyone who is reading this has no idea where their food comes from. However, success is measured by different parameters than the ability to produce food, even though food production is essential. These days success is measured by one’s ability to earn money, and activities that require brain-power have a higher status and higher reward than so-called low-skilled jobs. In fact, in Australia, there is a shortage of trades because, for the last 2 generations at least, the emphasis, vocationally, has been on getting kids into university courses, when it’s not necessarily the best fit for the child. This is why the professional class (including myself) are often called ‘elitist’ in the culture wars and being a tradie is sometimes seen as a stigma, even though our society is just as dependent on them as they are on professionals. I know, because I’ve spent a working lifetime in a specific environment where you need both: engineering/construction.
 
Like all my posts, I’ve gone off-track but it’s all relevant. Like Greene, I can’t be sure how or why evolution in humans was propelled, if not hi-jacked, by art, but art in all its forms is part of the human condition. A life without music, stories and visual art – often in combination – is unimaginable.
 
And this brings me to the last question in my heading. It so happens that while I was reading about this in Greene’s thought-provoking book, I was also listening to a programme on ABC Classic (an Australian radio station) called Legends, which is weekly and where the presenter, Mairi Nicolson, talks about a legend in the classical music world for an hour, providing details about their life as well as broadcasting examples of their work. In this case, she had the legend in the studio (a rare occurrence), who was Anna Goldsworthy. To quote from Wikipedia: Anna Louise Goldsworthy is an Australian classical pianist, writer, academic, playwright, and librettist, known for her 2009 memoir Piano Lessons.

But the reason I bring this up is because Anna mentioned that she attended a panel discussion on the role of AI in the arts. Anna’s own position is that she sees a role for AI, but in doing the things that humans find boring, which is what we are already seeing in manufacturing. In fact, I’ve witnessed this first-hand. Someone on the panel made the point that AI would effectively democratise art (my term, based on what I gleaned from Anna’s recall) in the sense that anyone would be able to produce a work of art and it would cease to be seen as elitist as it is now. He obviously saw this as a good thing, but I suspect many in the audience, including Anna, would have been somewhat unimpressed if not alarmed. Apparently, someone on the panel challenged that perspective but Anna seemed to think the discussion had somehow veered into a particularly dissonant aberration of the culture wars.
 
I’m one of those who would be alarmed by such a development, because it’s the ultimate portrayal of art as a consumer product, similar to the way we now perceive food. And like food, it would mean that its consumption would be completely disconnected from its production.
 
What worries me is that the person on the panel making this announcement (remember, I’m reporting this second-hand) apparently had no appreciation of the creative process and its importance in a functioning human society going back tens of thousands of years.
 
I like to quote from one of the world’s most successful and best known artists, Paul McCartney, in a talk he gave to schoolchildren (don’t know where):
 
“I don't know how to do this. You would think I do, but it's not one of these things you ever know how to do.” (my emphasis)
 
And that’s the thing: creative people can’t explain the creative process to people who have never experienced it. It feels like we have made contact with some ethereal realm. In another post, I cite Douglas Hofstadter (from his famous Pulitzer-prize winning tome, Gödel, Escher, Bach: An Eternal Golden Braid) quoting Escher:
 
"While drawing I sometimes feel as if I were a spiritualist medium, controlled by the creatures I am conjuring up."

 
Many people writing a story can identify with this, including myself. But one suspects that this also happens to people exploring the abstract world of mathematics. Humans have developed a sense that there is more to the world than what we see and feel and touch, which we attempt to reveal in all art forms, and this, in turn, has led to religion. Of course, Greene spends another entire chapter on that subject, and he also recognises the connection between mind, art and the seeking of meaning beyond a mortal existence.

Tuesday, 20 May 2025

Is Morality Objective or Subjective?

 This was a Question of the Month, answers to which appeared in the latest issue of Philosophy Now (Issue 167, April/May 2025). I didn’t submit an answer because I’d written a response to virtually the same question roughly 10 years ago, which was subsequently published. However, reading the answers made me want to write one of my own, effectively in response to those already written and published, without referencing anything specific.
 
At a very pragmatic level, morality is a direct consequence of communal living. Without rules, living harmoniously would be impossible, and that’s how morals become social norms, which, for the most part, we don’t question. This means that morality, in practice, is subjective. In fact, in my previous response, I said that subjective morality and objective morality could be described as morality in practice and morality in theory respectively, where I argued morality in theory is about universal human rights, probably best exemplified by the golden rule: assume everyone has the same rights as you. A number of philosophers have attempted to render a meta-morality or a set of universal rules and generally failed.
 
But there is another way of looking at this, which is the qualities we admire in others. And those qualities are selflessness, generosity, courage and honesty. Integrity is often a word used to describe someone we trust and admire. By the way, courage, in this context, is not necessarily physical courage, but what’s known as moral courage: taking a stand on a principle even if it costs us something. I often cite Aristotle’s essay on friendship in his Nicomachean Ethics, where he distinguishes between utilitarian friendship and genuine friendship, and how it’s effectively the basis for living a moral life.
 
In our work, and even our friendships occasionally, we can find ourselves being compromised. Politicians find this almost daily when they have to toe the party line. Politicians in retirement are refreshingly honest and forthright in a way they could never be when in office, and this includes leaders of parties.
 
I’ve argued elsewhere that trust is the cornerstone of all relationships, whether professional or social. In fact, I like to think it’s my currency in everyday life. Without trust, societies would function very badly and our interactions would be constantly guarded, which is the case in some parts of the world.
 
So an objective morality is dependent on how we live – our honesty to ourselves and others; our ability to forgive; to let go of grievances; and to live a good life in an Aristotelian sense. I’ve long contended that the measure of my life won’t be based on my achievements and failures, but on my interactions with others, and whether they were beneficial or destructive (usually mutual).
 
I think our great failing as a communal species, is our ability to create ingroups and outgroups, which arguably is the cause of all our conflicts and the source of most evil: our demonisation of the other, which can lead even highly intelligent people to behave irrationally; no one is immune, from what I’ve witnessed. A person who can bridge division is arguably the best leader you will find, though you might not think that when you look around the world.

Tuesday, 6 May 2025

Noam Chomsky on free will

 Whatever you might think about Noam Chomsky’s political views, I’ve always found his philosophical views worth listening to, whether I agree with him or not. In the opening of this video – actually an interview by someone (name not given) on a YouTube channel titled Mind-Body Solution – he presents a dichotomy that he thinks is obvious, but, as he points out, is generally not acknowledged.
 
Basically, he says that everyone, including anyone who presents an argument (on any topic), behaves as if they believe in free will, even if they claim they don’t. He reiterates this a number of times throughout the video. On the other hand, science cannot tell us anything about free will and many scientists therefore claim it must be an illusion. The contradiction is obvious. He’s not telling me anything I didn’t already know, but by stating it bluntly up-front, he makes you confront it, where more often than not, people simply ignore it.
 
My views on this are well known to anyone who regularly reads this blog, and I’ve challenged smarter minds than mine (not in person), like Sabine Hossenfelder, who claims that ‘free will needs to go in the rubbish bin’, as if it’s an idea that’s past its use-by-date. She claims:
 
...however you want to define the word [free will], we still cannot select among several possible different futures. This idea makes absolutely no sense if you know anything about physics.
 
I’ve addressed this elsewhere, so I won’t repeat myself. Chomsky makes the point that, while science acknowledges causal-determinism and randomness, neither of these rule out free will categorically. Chomsky makes it clear that he’s a ‘materialist’, though he discusses Descartes’ perspective in some depth. In my post where I critique Sabine, I conclude that ‘it [free will] defies a scientific explanation’, and I provide testimony from Gill Hicks following a dramatic near-death experience to make my point.
 
Where I most strongly agree with Chomsky is that we are not automatons, though I acknowledge that other members of the animal kingdom, like ants and bees, may be. This doesn’t mean that I think insects and arachnids don’t have consciousness, but I think a lot of their behaviours are effectively ‘programmed’ into their neural structures. It’s been demonstrated by experiments that bees must have an internal map of their local environment, otherwise the ‘dance’ they do to communicate locations to other bees in their colony would make no sense. Also, I think these creatures have feelings, like fear, attraction and hostility. Both of these aspects of their mental worlds distinguish them from AI, in my view, though others might disagree. I think these particular features of animal behaviour, even in these so-called ‘primitive’ creatures, provide the possibility of free will, if free will is the ability to act on the environment in a way that’s not determined solely by reflex actions.
 
Some might argue that acting on a ‘feeling’ is a ‘reflex action’, whereas I’m saying it’s a catalyst to act in a way that might be predictable but not predetermined. I think the ability to ‘feel’ is the evolutionary driver for consciousness. Surely, we could all be automatons without the requirement to be consciously aware. I’ve cited before incidents where people have behaved as if they were conscious, in situations of self-defence, but have no memory of it, because they were ‘knocked out’. It happened to my father in a boxing ring, and I know of other accounts, including a female security guard who shot her assailant after he knocked her out. If one can defend oneself without being conscious of it, then why has evolution given us consciousness?
 
My contention is that consciousness and free will can’t be separated: it simply makes no sense to me to have the former without the latter. And I think it’s worth comparing this to AI, which might eventually develop to the point where it appears to have consciousness and therefore free will. I’ve made the argument before that there is a subtle difference between agency and free will, because AI certainly has agency. So, what’s the difference? The difference is what someone (Grant Bartley) called ‘conscious causality’ – the ability to turn a thought into an action. This is something we all experience all the time, and is arguably the core precept to Chomsky’s argument that we all believe in free will, because we all act on it.
 
Free will deniers (if I can coin that term) like Sabine Hossenfelder, argue that this is the key to the illusion we all suffer. To quote her again:
 
Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking.
 

In the same video (from which this quote is extracted) she uses the term ‘software’ in describing the brain’s processes, and in combination with the word ‘calculation’, she clearly sees the brain as a wetware computer. So, while Chomsky argues that we all ‘believe’ in free will because we act like we do, Sabine argues that we act like we do because the brain is ‘calculating’ the outcome without our cognisance. In effect, she argues that once it becomes conscious, the brain has made the ‘decision’ for you, but gives you the delusion that you have. Curiously, Chomsky uses the word ‘delusion’ to describe the belief that you don’t have free will.
 
If Sabine is correct and your brain has already made the ‘decision’, then I go back to my previous argument concerning unconscious self-defence. If our ‘awareness’ is an unnecessary by-product of the brain’s activity (because any decision is independent of it), then why did we evolve to have it?
 
Chomsky raises a point I’ve discussed before, which is that, in the same way there are things we can comprehend that no other creature can, there is the possibility that there are things in the Universe that we can’t comprehend either. And I have specifically referenced consciousness as potentially one of those things. And this takes us back to the dichotomy that started the entire discussion – we experience free will, yet it’s thus far scientifically inexplicable. This leads to another dichotomy – it’s an illusion or it’s beyond human comprehension. There is a non-stated belief among many in the scientific community that all unsolved problems in the Universe will eventually be solved by science – one only has to look at the historical record.
 
But I’m one of those who thinks the ‘hard problem’ (coined by David Chalmers) of consciousness may never be solved. Basically, the hard problem is that the experience of consciousness may remain forever a mystery. My argument, partly taken from Raymond Tallis, is that it won’t fall to science because it can’t be measured. We can only measure neuron-activity correlates, which some argue already resolves the problem. Actually, I don’t think it does, and again I turn to AI. If that’s correct, then measuring analogous electrical activity by an AI would also supposedly measure consciousness. At this stage in AI development, I don’t think anyone believes that, though some people believe that measures of global connectivity or similar parameters in an AI neural network may prove otherwise.
 
Basically, I don’t think AI will ever have an inner world like we do – going back to the bees I cited – and if it does, we wouldn’t know. I don’t know what inner world you have, but I would infer you have one from your behaviour (assuming we met). On the other hand, I don’t know that anyone would infer that an AI would have one. I’ve made the comparison before of an AI-operated, autonomous drone navigating by GPS co-ordinates, which requires self-referencing algorithms. Notice that we don’t navigate that way, unless we use a computer interface (like your smart phone). AI can simulate what we do – write sentences, play chess, drive cars – but it does them in a completely different fashion.
 
In response to a question from his interlocutor, Chomsky argues that our concept of justice is dependent on a belief in free will, even if it’s unstated. It’s hard to imagine anyone disagreeing, otherwise we wouldn’t be able to hold anyone accountable for their actions.

As I’ve argued previously, it’s our ability to mentally time-travel that underpins free will, because, without an imagined future, there is no future to actualise, which is the whole point of having free will. And I would extend this to other creatures, who may be trying to catch food or escape being eaten – either way, they imagine a future they want to actualise.

 

Addendum: I’m currently reading Brian Greene’s Until the End of Time (2020), in which he devotes an entire chapter to consciousness and, not surprisingly, has something to say about free will. He’s a materialist, and he says in his intro to the topic:
 
This question has inspired more pages in the philosophical literature than just about any other conundrum. 
 

Basically, he argues, like Sabine Hossenfelder, that it’s in conflict with the laws of physics, but given he’s writing in a book, and not presenting a time-limited YouTube video (though he does those too), he goes into more detail.
 
To sum up: We are physical beings made of large collections of particles governed by nature’s laws. Everything we do and everything we think amounts to motions of those particles.
 
He then provides numerous everyday examples that we can all identify with.
 
And since all observations, experiments, and valid theories confirm that particle motion is fully controlled by mathematical rules, we can no more intercede in this lawful progression of particles than we can change the value of pi.
 
Interesting analogy, because I agree that even God can’t change the value of pi, but that’s another argument. And I’m not convinced that consciousness can be modelled mathematically, which, if true, undermines his entire argument regarding mathematical rules.
 
My immediate internal response to his entire thesis was that he’s writing a book, yet effectively arguing that he has no control over it. However, as if he anticipated this response, he addresses that very point at the end of the next section, titled Rocks, Humans and Freedom.

What matters to me is… my collection of particles is enabled to execute an enormously diverse set of behaviours. Indeed, my particles just composed this very sentence and I’m glad they did… I am free not because I can supersede physical law, but because my prodigious internal organisation has emancipated my behavioural responses.
 
In other words, the particles in his body, and his brain in particular (unlike the particles in inert objects, like rocks, tables, chairs etc), possess degrees of freedom that others don’t. But here’s the thing: I and others, including you, read these words and form our own ideas and responses, which we intellectualise and even emote about. In fact, we all form an opinion that either agrees or disagrees with his point. But however diverse the possibilities, he’s effectively saying that we are all complex automatons, which means there is no necessity for us to be consciously aware of what we are doing. And I argue that this is what separates us from AI.
 
Just be aware that Albert Einstein would have agreed with him.

 

Tuesday, 29 April 2025

Writing and philosophy

 I’ve been watching a lot of YouTube videos of Alan Moore, who’s probably best known for his graphic novels, Watchmen and V for Vendetta, both of which were turned into movies. He also wrote a Batman graphic novel, The Killing Joke, which was turned into an R rated animated movie (due to Batman having sex with Batgirl) with Mark Hamill voicing the Joker. I’m unsure if it has any fidelity to Moore’s work, which was critically acclaimed, whereas the movie received mixed reviews. I haven’t read the graphic novel, so I can’t comment.
 
On the other hand, I read Watchmen and saw the movie, which I reviewed on this blog, and thought they were both very good. I also saw V for Vendetta, starring Natalie Portman and Hugo Weaving, without having read Moore’s original. Moore also wrote a novel, Jerusalem, which I haven’t read, but is referenced frequently by Robin Ince in a video I cite below.
 
All that aside, it’s hard to know where to start with Alan Moore’s philosophy on writing, but the 8 Alan Moore quotes video is as good a place as any if you want a quick overview. For a more elaborate dialogue, there is a 3-way interview, obviously done over a video link, between Moore and Brian Catling, hosted by Robin Ince, with the online YouTube channel, How to Academy. They start off talking about imagination, but get into philosophy when all 3 of them start questioning what reality is, or if there is an objective reality at all.
 
My views on this are well known, and it’s a side-issue in the context of writing or creating imaginary worlds. Nevertheless, had I been party to the discussion, I would have simply mentioned Kant, and how he distinguishes between the ‘thing-in-itself’ and our perception of it. Implicit in that concept is the belief that there is a reality independent of our internal model of it, which is mostly created by a visual representation, though other senses, like hearing, touch and smell, also play a role. This is actually important when one gets into a discussion on fiction, but I don’t want to get ahead of myself. I just wish to make the point that we know there is an external objective reality because it can kill you. Note that a dream can’t kill you, which is a fundamental distinction between reality and a dreamscape. I make this point because I think a story, which takes place in your imagination, is like a dreamscape; so that difference carries over into fiction.
 
And on the subject of life-and-death, Moore references something he’d read on how evolution selects for ‘survivability’ not ‘truth’, though he couldn’t remember the source or the authors. However, I can, because I wrote about that too. He’s obviously referring to the joint paper written by Donald Hoffman and Chetan Prakash called Objects of Consciousness (Frontiers in Psychology, 2014). This depends on what one means by ‘truth’. If you’re talking about mathematical truths then yes, it has little to do with survivability (our modern-day dependence on technical infrastructure notwithstanding). On the other hand, if you’re talking about the accuracy of the internal model in your mind matching the objective reality external to your body, then your survivability is very much dependent on it.
 
Speaking of mathematics, Ince mentions Bertrand Russell giving up on mathematics and embracing philosophy because he failed to find a foundation that ensured its truth (my wording, interpreting his interpretation). Basically, that’s correct, but it was Gödel who put the spanner in the works with his famous Incompleteness Theorem, which effectively tells us that there will always exist mathematical truths that can’t be proven. In other words, he concretely demonstrated (proved, in fact) that there is a distinction between truth and proof in mathematics. Proofs rely on axioms and all axioms have limitations in what they can prove, so you need to keep finding new axioms, and this implies that mathematics is a neverending endeavour. So it’s not the end of mathematics as we know it, but the exact opposite.
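For readers who want the formal shape of Gödel’s result, the first incompleteness theorem can be stated schematically as follows (this is my gloss, not Gödel’s original notation):

```latex
% Gödel's first incompleteness theorem (schematic gloss).
% T is any consistent, effectively axiomatizable formal theory
% that includes basic arithmetic.
\mathrm{Con}(T) \;\rightarrow\; \exists\, G_T \,\big(\, T \nvdash G_T \;\wedge\; T \nvdash \neg G_T \,\big)
% The sentence G_T is nonetheless true in the standard model of
% arithmetic, which is exactly the gap between truth and proof
% referred to above.
```

Note that the undecidable sentence G_T depends on T, so adding new axioms to T only produces a new undecidable sentence for the enlarged theory – hence the ‘neverending endeavour’.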
 
All of this has nothing to do with writing per se, but since they raised these issues, I felt compelled to deal with them.
 
At the core of this part of their discussion is the unstated tenet that fiction and non-fiction are distinct, even if the boundary sometimes becomes blurred. A lot of fiction, if not all, contains factual elements. I like to cite Ian Fleming’s James Bond novels, which contain details like the gun Bond used (a Walther PPK) and the Bentley he drove, fitted with an Amherst Villiers supercharger. Bizarrely, I remember these trivial facts from a teenage obsession with all things Bond.
 
And this allows me to segue into something that Moore says towards the end of this 3-way discussion, when he talks specifically about fantasy. He says it needs to be rooted in some form of reality (my words), otherwise the reader won’t be able to imagine it at all. I’ve made this point myself, and give the example of my own novel, Elvene, which contains numerous fantasy elements, including both creatures that don’t exist on our world and technology that’s yet to be invented, if ever.
 
I’ve written about imagination before, because I argue it’s essential to free will, which is not limited to humans, though others may disagree. Imagination is a form of time travel: into the past, but more significantly, into the future. Episodic memories and imagination use the same part of the brain (so we are told); but only humans seem to have the facility to travel into realms that don’t exist anywhere other than the imagination. And this is why storytelling is a uniquely human activity.
 
I mentioned earlier how we create an internal world that’s effectively a simulation of the external world we interact with. In fact, my entire philosophy is rooted in the idea that each of us has an internal and an external world, which is how I can separate religion from science, because one is completely internal and the other is an epistemology of the physical universe from the cosmic scale to the infinitesimal. Mathematics is a medium that bridges them, and contributes to the Kantian notion that our perception may never completely match the objective reality. Mathematics provides models that increase our understanding while never quite completing it. Godel’s incompleteness theorem (referenced earlier) effectively limits physics as well. Totally off-topic, but philosophically important.
 
The relevance of this to storytelling is that storytelling is a visual medium even when no visuals are presented, which is why I contend that if we didn’t dream, stories wouldn’t work. In response to a question, Moore pointed out that, because he worked on graphic novels, he had to think about the story visually. I’ve made the point before that the best thing I ever did for my own writing was to take some screenwriting courses, because one is forced to think visually and imagine the story being projected onto a screen. In a screenplay, you can only write down what is seen and heard. In other words, you can’t write what a character is thinking. On the other hand, you can write an entire novel from inside a character’s head, and usually more than one. But if you tell a story from a character’s POV (point-of-view), you axiomatically feel what they’re feeling and see what they’re witnessing. This is the whole secret to novel-writing. It’s intrinsically visual, because we automatically create images even if the writer doesn’t provide them. So my method is to provide cues, knowing that the reader will fill in the blanks. No one specifically mentions this in the video, so it’s my contribution.
 
Something else that Moore, Catling and Ince discuss is how writing something down effectively changes the way they think. This is something I can identify with, both in fiction and non-fiction, but fiction specifically. It’s hard to explain this if you haven’t experienced it, but they spend a lot of time on it, so it’s obviously significant to them. In fiction, there needs to be a spontaneity – I’ve often compared it to playing jazz, even though I’m not a musician. So most of the time, you don’t know what you’re going to write until it appears on the screen or on paper, depending on which medium you’re using. Moore says it’s like it’s in your hands instead of your head, which is certainly not literally true. But the act of writing, as opposed to speaking, is a different process, at least for Moore, and also for me.
 
I remember many years ago (decades) when I told someone (a dentist, actually) that I was writing a book. He said he assumed that novelists must dictate it, because he couldn’t imagine someone writing down thousands upon thousands of words. At the time, I thought his suggestion just as weird as he thought mine to be. I suspect some writers do. Philip Adams (Australian broadcaster and columnist) once confessed that he dictated everything he wrote. In my professional life, I have written reports for lawyers in contractual disputes, both in Australia and the US, for which I’ve received the odd kudos. In one instance, someone I was working with was using a cassette-like dictaphone and insisted I do the same, believing it would save time. So I did, in spite of my better judgement, and it was just terrible. Based on that one example, you’d be forgiven for thinking that I had no talent or expertise in that role. Of course, I re-wrote the whole thing, and was never asked to do it again.
 
I originally became interested in Moore’s YouTube videos because he talked about how writing affects you as a person and can also affect the world. I think to be a good writer of fiction you need to know yourself very well, and I suspect that is what he meant without actually saying it. The paradox with this is that you are always creating characters who are not you. I’ve said many times that the best fiction you write is where you’re completely detached – in a Zen state – sometimes called ‘flow’. Virtuoso musicians and top sportspersons will often make the same admission.
 
I believe having an existential philosophical approach to life is an important aspect to my writing, because it requires an authenticity that’s hard to explain. To be true to your characters you need to leave yourself out of it. Virtually all writers, including Moore, talk about treating their characters like real people, and you need to extend that to your villains if you want them to be realistic and believable, not stereotypes. Moore talks about giving multiple dimensions to his characters, which I won’t go into. Not because I don’t agree, but because I don’t over-analyse it. Characters just come to me and reveal themselves as the story unfolds; the same as they do for the reader.
 
What I’ve learned from writing fiction (which I’d self-describe as sci-fi/fantasy) – as opposed to what I didn’t know – is that, at the end of the day (or story), it’s all about relationships. Not just intimate relationships, but relationships between family members, between colleagues, between protagonists and AI, and between protagonists and antagonists. This is the fundamental grist for all stories.
 
Philosophy is arguably more closely related to writing than any other art form: there is a crossover and an interdependency, because fiction deals with issues relevant to living and being.

Saturday, 22 March 2025

Truth, trust and lies; can we tell the difference?

 I’ve written on this topic before, more than once, but one could write a book on it, and Yuval Noah Harari has come very close with his latest tome, Nexus: A Brief History of Information Networks from the Stone Age to AI. As the subtitle suggests, it’s ostensibly about the role of AI, both currently and in the foreseeable future, but he provides an historical context, which is also alluded to in the subtitle. Like a lot of book titles, the subtitle tells us more than the title, which, while succinct and punchy, is also nebulous, possibly deliberately so. AI is almost a separate topic, but I find it interesting that it has become its own philosophical category (even on this blog) when it was not even a concept a century ago. I might return to this point later.
 
The other trigger was an essay in Philosophy Now (Issue 166, Feb/Mar 2025) with the theme articulated on the cover: Political Philosophy for our time (they always have a theme). This issue also published my letter on Plato’s cave and social media, which is not irrelevant. In particular, there was an essay containing the 2 key words in my own title: ‘Trust, Truth & Political Conversations’, by Adrian Brockless, who was Head of Philosophy at Sutton Grammar School and has taught at a number of universities and schools: Heythrop College, London; the University of Hertfordshire; Roedean School; Glyn School; and now teaches philosophy online at adrianbrockless.com. I attempted to contact him via his website but he hasn’t responded.
 
Where to start? Brockless starts with ‘the relationship between trust and truth’, which seems appropriate, because there is a direct relationship and it helps to explain why there is such a wide dispersion, even polarisation, within the media, political apparatuses and the general public. Your version of the truth is heavily dependent on where you source it, and where you source it depends on whom you trust. And whom you trust depends on whether their political and ideological views align with yours or not. Confirmation bias has never been stronger or more salient to how we perceive the world and make decisions about its future.
 
And yes, I’m as guilty as the next person, but history can teach us lessons, which is a theme running throughout Harari’s book – not surprising, given that’s his particular field or discipline. All of Harari’s books (that I’ve read) are an attempt to project history into the future, partially based on what we know about the past. What comes across, in both Harari’s book and Brockless’s essay, is that truth is subjective and so is history to a large extent.
 
Possibly the most important lessons can be learned from examining authoritarian regimes. All politicians, irrespective of their persuasion or nationality, know the importance of ‘controlling the narrative’ as we like to say in the West, but authoritarian dictatorships take this to the extreme. Russia, for example, assassinates journalists, because Putin knows that the pen is mightier than the sword, but only if the sword is sheathed. Both Brockless and Harari give examples of revising history or even eliminating it, because we all know how certain figures have maintained an almost deistic persistence in the collective psyche. In some cases, like Jesus, Buddha, Confucius and Mohammed, it’s overt and has been maintained and exported into other cultures, so they have become global. In all cases, they had political origins, where they were iconoclasts. I’m not sure that any of them would have expected to be well known some 2 millennia later, when worldwide communication would become a reality. I tend to think there is a strong element of chance involved rather than divine-interceded destiny, as many believe and wish to believe. In fact, what we want to believe determines, to a much greater extent than we care to admit, what we perceive as truth.
 
Both authors make references to Trump, which is unavoidable, given the subject matter, because he’s almost a unique phenomenon and arguably one who could only arise in today’s so-called ‘post-truth’ world. It’s quite astute of Trump to call his own social media platform, Truth Social, because he actively promotes his own version of the truth in the belief that it can replace all other versions, and he’s so successful that his opponents struggle to keep up.
 
All politicians know the value (I wouldn’t use the word, virtue) of telling the public the lies they want to hear. Brockless gives the example that ‘on July 17, 1900, both The Times and The Daily Mail published a false story about the slaughter of Europeans in the British Embassy in Peking (the incident never happened)’. His point being that ‘fake news’ is a new term but an old concept. In Australia, we had the notorious ‘children thrown overboard affair’ in 2001, regarding the behaviour of asylum seekers intercepted at sea, which helped the then Howard government to win an election, but was later revealed to be completely false.
 
However, I think Trump provides the best demonstration of the ability to create a version of truth that many people would prefer to believe, and even maintain it over a period of years so that it grows stronger, not weaker, with time; to the point that it becomes the dominant version in some media, be it online or mainstream. The fact that FOX News was sued and forced to pay a massive settlement to a voting-machine company it defamed over the 2020 election, as a direct consequence of its unfaltering loyalty to Trump, did nothing to stem the lie that Biden stole the election from Trump. Murdoch even sacked the head of FOX’s own election-reporting team for correctly calling the election result; such was his dedication to Trump’s version of the truth.
 
And the reason I can call that particular instance a lie, as opposed to the truth, as many people maintain, is because it was tested in court. I’ve had some experience with testing different versions of truth in courts and mediation: specifically, contractual disputes, whereby I did analyses of historical data and prepared evidence in the form of written reports for lawyers to argue in court or at hearings. This is not to say that the person who wins is necessarily right, but there is a limitation on what can be called truth, which is the evidence that is presented. And, in those cases, the evidence is always in the form of documents: plans, minutes of meetings, date-stamped photos, site diaries, schedules (both projected and actual). I learned not to get emotional, which was relatively easy given I never had a personal stake in it; meaning it wasn’t going to cost me financially or reputationally. I also took the approach that I would get the same result no matter which side I was on. In other words, I tried to be as objective as possible. I found this had the advantage of giving me credibility and being believed. But it was also done in the belief that trying to support a lie invariably did you more harm than good, and I sometimes had to argue that case against my own client; I wouldn’t want to be a lawyer for Trump.
 
And of course, all this ties to trust. My client knew they could trust my judgement – if I wasn’t going to lie for them, I wasn’t going to lie to them. I make myself sound very important, but in reality, I was just a small cog in a much larger machine. I was a specialist who did analysis and provided evidence, which sometimes was pertinent to arguments. As part of this role, I oftentimes had to provide counter-arguments to the other party’s claims – I’ve worked on both sides.
 
Anyway, I think it gives me an insight into truth that most people, including philosophers, don’t experience. Like most of my posts, I’ve gone off on a tangent, yet it’s relevant.
 
Brockless brings another dimension into the discussion, when he says:
 
Having an inbuilt desire to know and tell the truth matters because this attitude underpins genuine love, grief and other human experiences: authentic love and grief etc cannot be separated from truthfulness.
 
I’ve made the point before that trust underpins so many of our relationships, both professional and social, without which we can’t function, either as individuals or as a society.
 
Brockless makes a similar point when he says: Truthfulness is tied to how we view others as moral beings.
 
He then goes on to distinguish this from our love for animals and pets: Moral descriptions apply fully to human beings, not to inanimate objects, or even to animals… If we fail to see the difference between love for a pet and love for a person, then our concept of humanity has been corrupted by sentimentality.
 
I’m not sure I fully agree with him on this. Even before I read this passage, I was thinking of how the love and trust that some animals show to us is uncorrupted and close to unconditional. Animals can get attached to us in a way that we tend NOT to see as abnormal, even though an objective analysis might tell us it’s ‘unnatural’. I’ve had a lot of relationships with animals over many years, and I know that they become completely dependent on us; not just for material needs, but for emotional needs, and they try to give it back. The thing is that they do this despite an inability to directly communicate with us except through emotions. I can’t help but think that this is a form of honesty that many, if not most of us, have experienced, yet we rarely give it a second thought.
 
A recurring theme on this blog is existentialism and living authentically, which is tied to a requisite for self-honesty, and as bizarre as it may sound, I think we can learn from the animals in our lives, because they can’t lie at an emotional level. They have the advantage that they don’t intellectualise what they feel – they simply act accordingly.
 
Not so much a recurring theme, as a persistent one, in Harari’s book, is that more knowledge doesn’t equate to more truth. Nowhere is this more relevant than in the modern world of social media. Harari argues that this mismatch could increase with AI, because of how it’s ‘trained’ and he may have a point. We are already finding ‘biases’, and people within the tech industry have already tried to warn those of us outside the industry.
 
In another post, I referenced an article in New Scientist (23 July 2022), by Annalee Newitz who reported on a Google employee, Timnit Gebru, who, as ‘co-lead of Google’s ethical AI team’, expressed concerns that LLM (Large Language Model) algorithms pick up racial and other social biases, because they’re trained on the internet. She wrote a paper about the implications for AI applications using internet trained LLMs in areas like policing, health care and bank lending. She was subsequently fired by Google, though one doesn’t know how much the ‘paper’ played a role in that decision (quoting directly from my post).
 
Of course, I’ve explored the role of AI in science fiction, which borders on fantasy, but basically, I see a future where humans will have a symbiotic relationship with AI far beyond what we have today. I can see AI agents that become ‘attached’ to us in a way that animals do, not dissimilar to what I described above, but not the same either, as I don’t expect them to be sentient. But, even without sentience, they could pick up our biases and prejudices and amplify them, which some might argue (like Harari) is already happening.
 
As you can see, after close to 2,000 words, I haven’t really addressed the question in the tail of my title. I recently had a discussion with someone on Quora about Trump, who, I argued, lived in the alternative universe that Trump had created. It turned out he has family, including grandchildren, living in Australia, because one of their parents is on a 2-year assignment (details unknown and not relevant). According to him, they hate it here, and I responded that if they lived in Trumpworld that was perfectly understandable, because they would be in a distinct minority. Believe it or not, the discussion ended amicably enough, and I wished both him and his family well. What I noticed was that his rhetoric was much more emotional – one might even say, irrational – than mine. Getting back to the contractual disputes I mentioned earlier, I’ve often found that when you have an ingroup-outgroup dynamic – like politics or contractual matters – highly intelligent people can become very irrational. Everyone claims they go to the facts, but these days you can find your own ‘facts’ anywhere on the internet, which leads to echo-chambers.
 
People look for truth in different places. Some find it in the Bible or some other religious text. I look for it in mathematics, despite a limited knowledge in that area. But I take solace in the fact that mathematics is true, independent of culture or even the Universe. All other truths are contingent. I have an aversion to conspiracy theories, which usually require a level of evidence that most followers don’t pursue. And most of them can be dismissed when you realise how many people from all over the world need to be involved just to keep it secret from the rest of us.
 
A good example is climate change, which I’ve been told many times over, is a worldwide hoax maintained for no other purpose than to keep climatologists in their jobs. But here’s the thing: the one lesson I learned from over 4 decades working on engineering projects is that if there is a risk, and especially an unquantified risk, the worst strategy is to ignore it and hope it doesn't happen.


Addendum 1: It would be remiss of me not to mention that there was a feature article in the Good Weekend magazine that came out the same day I wrote this: on the increasing role of chatbot avatars in virtual relationships, including relationships with erotic content. If you can access the article, you'll see that the 'conversations' using LLM (large language model) AI are very realistic. I wrote about this phenomenon in another post fairly recently (the end of last year), because it actually goes back to 1966 with Joseph Weizenbaum's ELIZA, which was a 'virtual therapist' that many people took seriously. So not really new, but now more ubiquitous and realistic.

Addendum 2: I did receive acknowledgement from Adrian Brockless, who was gracious and generous in his response.

Tuesday, 25 February 2025

Plato’s Cave & Social Media

 In a not-so-recent post, I referenced Philosophy Now Issue 165 (Dec 2024/Jan 2025), which had the theme, The Return of God. However, its cover contained a graphic and headline on a completely separate topic: Social Media & Plato’s Cave, hence the title of this post. When you turn to page 34, you come across the essay, written by Seán Radcliffe, which won him “...the 2023 Irish Young Philosopher Awards Grand Prize and Philosopher of Our Time Award. He is now studying Mathematics and Economics at Trinity College, Dublin, where he is an active member of the University Philosophical Society.” There is a photo of him holding up both awards (in school uniform), so one assumes that 2 years ago he was still at school.
 
I wrote a response to the essay, which was published in the next issue (166), which I post below, complete with edits, which were very minor. The editor added a couple of exclamation marks: at the end of the first and last paragraphs; both of which I’ve removed. Not my style.

They published it under the heading: The Problem is the Media.

I was pleasantly surprised (as I expect were many others) when I learned that the author of Issue 165’s cover article, ‘Plato’s Cave & Social Media’, Seán Radcliffe, won the 2023 Irish Young Philosopher Award Grand Prize and Philosopher of Our Time Award for the very essay you published. Through an analogy with Plato’s Cave, Seán rightfully points out the danger of being ‘chained’ to a specific viewpoint that aligns with a political ideology or conspiracy theory. Are any of us immune? Socrates, via the Socratic dialogue immortalised by his champion Plato, transformed philosophy into a discussion governed by argument, as opposed to prescriptive dogma. In fact, I see philosophy as an antidote to dogma because it demands argument. However, if all dialogue takes place in an echo-chamber, the argument never happens.

Social media allows alternative universes that are not only different but polar opposites. To give an example that arose out of the COVID pandemic: in one universe, the vaccines were saving lives, and in an alternative universe they were bioweapons causing deaths. The 2020 US presidential election created another example of parallel universes that were direct opposites. Climate change is another. In all these cases, which universe one inhabits depends on which source of information one trusts.

Authoritarian governments are well aware that the control of information allows emotional manipulation of the populace. In social media, the most emotive and often most extreme versions of events get the most traction. Plato’s response to tyranny and populist manipulation was to recommend ‘philosopher-kings’, but no one sees that as realistic. I spent a working lifetime in engineering, and I’ve learned that no single person has all the expertise, so we need to trust the people who have the expertise we lack. A good example is the weather forecast. We’ve learned to trust it as it delivers consistently accurate short-term forecasts. But it’s an exception, because news sources are rarely agenda-free.

I can’t see political biases disappearing – in fact, they seem to be becoming more extreme, and the people with the strongest opinions see themselves as the best-informed. Even science can be politicised, as with both the COVID pandemic and with climate change. The answer is not a philosopher-king, but the institutions we already have in place that study climate science and epidemiology. We actually have the expertise; but we don’t listen to it because its proponents are not famous social media influencers.

Tuesday, 7 January 2025

Why are we addicted to stories involving struggle?

This is something I’ve written about before, so what can I possibly add? Sometimes the reframing of a question changes the emphasis. In this case, I wrote a post on Quora in response to a fairly vague question, which I took more seriously than the questioner probably expected. As I said, I’ve dealt with these themes before, but adding a very intimate family story adds emotional weight. It’s a story I’ve related before, but this time I elaborate in order to give it the significance I feel it deserves.
 
What are some universal themes in fiction?
 
There is ONE universal theme that’s found virtually everywhere, and its appeal is that it provides a potential answer to the question: What is the meaning of life?

In virtually every story that’s been told, going as far back as Homer’s Odyssey and up to the latest superhero movie, with everything else in between (in the Western canon, at least), you have a protagonist who has to deal with obstacles, hardships and tribulations. In other words, they are tested, often in extremis, and we all take part vicariously to the point that it becomes an addiction.

There is a quote from the I Ching, which I think sums it up perfectly.

Adversity is the opposite of success, but it can lead to success if it befalls the right person.

Most of us have to deal with some form of adversity in life; some more so than others. And none of us are unaffected by it. Socrates’ most famous saying, ‘The unexamined life is not worth living’, is a variation on this theme. He apparently said it when he was forced to face his death: the consequence of actions he had deliberately taken, but for which he refused to show regret.

And yes, I think this is the meaning of life, as it is lived. It’s why we expect to become wiser as we get older, because wisdom comes from dealing with adversity, whether it ultimately leads to success or not.

When I write a story, I put my characters through hell, and when they come out the other side, they are invariably wiser if not triumphant. I’ve had characters make the ultimate sacrifice, just like Socrates, because they would prefer to die for a principle than live with shame.

None of us know how we will behave if we are truly tested, though sometimes we get a hint in our dreams. Stories are another way of imagining ourselves in otherwise unimaginable situations. My father is one who was tested firsthand in battle and in prison. The repercussions were serious, not just for him, but for those of us who had to live with him in the aftermath.

He had a recurring dream where there was someone outside the house whom he feared greatly – it was literally his worst nightmare. One night, in the dream, he went outside and confronted them, killing them barehanded. He told me this when I was much older, naturally, but it reminded me of when Luke Skywalker confronted his doppelganger in The Empire Strikes Back. I’ve long argued that the language of stories is the language of dreams. In this case, the telling of my father’s dream reminded me of a scene from a movie that made me realise it was more potent than I’d imagined.

I’m unsure how my father would have turned out had he not faced his demon in such a dramatic and conclusive fashion. It obviously had a big impact on him; he saw it as a form of test, which he believed he’d ultimately passed. I find it interesting that it was not something he confronted the first time he was made aware of it – it simply scared him to death. Stories are surrogate dreams; they serve the same purpose if they have enough emotional force.

Life itself is a test that we all must partake in, and stories are a way of testing ourselves against scenarios we’re unlikely to confront in real life.

Sunday, 29 December 2024

The role of dissonance in art, not to mention science and mathematics

 I was given a book for a birthday present just after the turn of the century, titled A Terrible Beauty: The People and Ideas that Shaped the Modern Mind, by Peter Watson. A couple of things worth noting: it covers the history of the 20th Century, but not geo-politically as you might expect. Instead, he writes about the scientific discoveries alongside the arts and cultural innovations, and he talks about both with equal erudition, which is unusual.
 
The reason I mention this is that I remember Watson talking about the human tendency to push something to its limits and then beyond. He gave examples in science, mathematics, art and music. A good example in mathematics is the adoption of √-1 (giving us ‘imaginary numbers’), which we are taught is impossible, until suddenly it isn’t. The thing is that it allows us to solve problems that were previously impossible, in the same way that negative numbers give solutions to arithmetical subtractions that were previously unanswerable. There were no negative numbers in ancient Greece because their mathematics was driven by geometry, and the idea of a negative volume or area made no sense.
 
But in both cases: negative numbers and imaginary numbers; there is a cognitive dissonance that we have to overcome before we can gain familiarity and confidence in using them, or even understanding what they mean in the ‘real world’, which is the problem the ancient Greeks had. Most people reading this have no problem, conceptually, dealing with negative numbers, because, for a start, they’re an integral aspect of financial transactions – I suspect everyone reading this above a certain age has had experience with debt and loans.
 
On the other hand, I suspect a number of readers struggle with a conceptual appreciation of imaginary numbers. Some mathematicians will tell you that the term is a misnomer, and its origin would tend to back that up. Apparently, Rene Descartes coined the term, disparagingly, because, like the ancient Greeks with negative numbers, he believed they had no relevance to the ‘real world’. Yet Descartes would have appreciated their usefulness in solving problems previously unsolvable, so I expect it would have been a real cognitive dissonance for him.
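To make that usefulness concrete, here’s a minimal sketch in Python (my own illustration, not from Watson or Descartes): a quadratic equation with a negative discriminant has no real solutions, yet the standard library’s cmath module takes the complex square root without complaint, and the resulting roots genuinely satisfy the equation.

```python
import cmath

# Solve x^2 + 2x + 5 = 0, which has no real solutions:
# the discriminant b^2 - 4ac is negative.
a, b, c = 1, 2, 5
disc = b**2 - 4*a*c              # -16: 'impossible' to square-root in the reals
root = cmath.sqrt(disc)          # fine in the complex numbers (approx. 4j)

x1 = (-b + root) / (2*a)         # approx. -1 + 2j
x2 = (-b - root) / (2*a)         # approx. -1 - 2j

# Substituting back confirms both are genuine solutions
# (up to floating-point rounding).
for x in (x1, x2):
    residual = a*x**2 + b*x + c
    assert abs(residual) < 1e-9
```

The residual check at the end is the point: the ‘impossible’ roots really do solve the original equation, which is what eventually forced mathematicians to take imaginary numbers seriously.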
 
I’ve written an entire post on imaginary numbers, so I don’t want to go too far down that rabbit hole, but I think it’s a good example of what I’m trying to explicate. Imaginary numbers gave us something called complex algebra and opened up an entire new world of mathematics that is particularly useful in electrical engineering. But anyone who has studied physics in the last century is aware that, without imaginary numbers, an entire field of physics, quantum mechanics, would remain indescribable, let alone be comprehensible. The thing is that, even though most people have little or no understanding of QM, every electronic device you use depends on it. So, in their own way, imaginary numbers are just as important and essential to our lives as negative numbers are.
 
You might wonder how I deal with the cognitive dissonance that imaginary numbers induce. In QM, we have, at its most rudimentary level, something called Schrodinger’s equation, which he proposed in 1926 (“It’s not derived from anything we know,” to quote Richard Feynman) and Schrodinger quickly realised it relied on imaginary numbers – he couldn’t formulate it without them. But here’s the thing: Max Born, a contemporary of Schrodinger, formulated something we now call the Born rule that mathematically gets rid of the imaginary numbers (for the sake of brevity and clarity, I’ll omit the details) and this gives the probability of finding the object (usually an electron) in the real world. In fact, without the Born rule, Schrodinger’s equation is next-to-useless, and would have been consigned to the dustbin of history.
 
And that’s relevant, because prior to observing the particle, it’s in a superposition of states, described by Schrodinger’s equation as a wave function (Ψ), which some claim is a mathematical fiction. In other words, you need to get rid of the imaginary component (clumsy phrasing, but accurate) to make it relevant to the reality we actually see and detect. And the other thing is that once we have done that, the Schrodinger equation no longer applies – there is effectively a dichotomy between QM and classical physics (reality), which is called the ‘measurement problem’. Roger Penrose gives a good account in this video interview. So, even in QM, imaginary numbers are associated with what we cannot observe.
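For anyone wanting to see what ‘getting rid of the imaginary component’ amounts to, here’s a toy sketch of the Born rule in Python (the amplitudes below are invented for illustration; the rule itself – multiply each amplitude by its complex conjugate – is the standard recipe):

```python
# Toy illustration of the Born rule: probability = |psi|**2.
# The two amplitudes are made up for illustration; any normalised
# pair of complex numbers would do.
psi_up   = (1 + 1j) / 2      # complex amplitude for 'spin up'
psi_down = (1 - 1j) / 2      # complex amplitude for 'spin down'

# Multiplying an amplitude by its complex conjugate cancels the
# imaginary parts, leaving a real, observable probability.
p_up   = (psi_up.conjugate() * psi_up).real      # 0.5
p_down = (psi_down.conjugate() * psi_down).real  # 0.5

print(p_up, p_down)    # 0.5 0.5
print(p_up + p_down)   # 1.0 -- the probabilities sum to one
```

The complex amplitudes never appear in what we measure; only the real probabilities do, which is exactly the dichotomy between the wave function and observed reality described above.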
 
That was a much longer detour than I intended, but I think it demonstrates the dissonance that seems necessary in science and mathematics, and arguably necessary for its progress; plus it’s a good example of the synergy between them that has been apparent since Newton.
 
My original intention was to talk about dissonance in music, and the trigger for this post was a YouTube video by musicologist, Rick Beato (pronounced be-arto), dissecting the Beatles song, Ticket to Ride, which he called, ‘A strange but perfect song’. In fact, he says, “It’s very strange in many ways: it’s rhythmically strange; it’s melodically strange too”. I’ll return to those specific points later. To call Beato a music nerd is an understatement, and he gives a technical breakdown that quite frankly, I can’t follow. I should point out that I’ve always had a good ‘ear’ that I inherited, and I used to sing, even though I can’t read music (neither could the Beatles). I realised quite young that I can hear things in music that others miss. Not totally relevant, but it might explain some things that I will expound upon later.
 
It's a lengthy, in-depth analysis, but if you go to 4.20-5.20, Beato actually introduces the term ‘dissonance’ after he describes how it applies. In effect, there is a dissonance between the notes that John Lennon sings and the chords he plays (on a 12-string guitar). And the thing is that we, the listener, don’t notice – someone (like Beato) has to point it out. Another quote from 15.00: “One of the reasons the Beatles songs are so memorable, is that they use really unusual dissonant notes at key points in the melody.”
 
The one thing that strikes you when you first hear Ticket to Ride is the unusual drum part. Ringo was very inventive and innovative, and became more adventurous, along with his bandmates, on later recordings. The Ticket to Ride drum part has become iconic: everyone knows it and recognises it. There is a good video where Ringo talks about it, along with another equally famous drum part he created. Beato barely mentions it, though right at the beginning, he specifically refers to the song as being ‘rhythmically strange’.
 
A couple of decades ago, can’t remember exactly when, I went and saw an entire Beatles concert put on by a rock band, augmented by orchestral strings and horn parts. It was in 2 parts with an intermission, and basically the 1st half was pre-Sergeant Pepper and the 2nd half, post. I can still remember that they opened the concert with Magical Mystery Tour and it blew me away. The thing is that they went to a lot of trouble to be faithful to the original recordings, and I realised that it was the first time I’d heard their music live, albeit with a cover band. And what immediately struck me was the unusual harmonics and rhythms they employed. Watching Beato’s detailed technical analysis puts this into context for me.
 
Going from imaginary numbers and quantum mechanics to one of The Beatles’ most popular songs may seem like a giant leap, but it highlights how dissonance is a universal principle for humans, and intrinsic to progression in both art and science.
 
Going back to Watson’s book that I reference in the introduction, another obvious example that he specifically talks about is Picasso’s cubism.
 
In storytelling, it may not be so obvious, and I think modern fiction has been influenced more by cinema than anything else, where the story needs to be more immediate and it needs to flow with minimal description. There is now an expectation that it puts you in the story – what we call immersion.
 
On another level, I’ve noticed a tendency on my part to create cognitive dissonance in my characters and therefore the reader. More than once, I have combined sexual desire with fear, which some may call perverse. I didn’t do this deliberately – a lot of my fiction contains elements I didn’t foresee. Maybe it says something about my own psyche, but I honestly don’t know.

Sunday, 1 December 2024

What’s the way forward?

Philosophy Now Issue 163 (Aug/Sep 2024) has as its theme The Politics of Freedom. I’ve already cited an article by Paul Doolan in my last post on authenticity, not that I discussed it in depth. A couple of other articles, Doughnut Economics by David Howard and Freedom & State Intervention by Audren Layeux, also piqued my interest, because they both deal with social dynamics and their intersection with things like education and economics.
 
I’ll start with Layeux, described as ‘a consultant and researcher who has published several papers and articles, mostly in the domain of the digital economy and new social movements.’ He gives an historical perspective going back to Thomas Hobbes (1651) and Adam Smith (1759), as well as the French Revolution. He gives special mention to Johann Gottlieb Fichte’s “extremely influential 1813 book The Doctrine of the State”, where, according to Layeux, “Fichte insists that building a nation state must start with education.” From the perspective of living in the West in the 21st Century, it’s hard to disagree.
 
Layeux then effectively argues that the proposed idealistic aims of Hobbes and Fichte to create ‘sovereign adults’ (his term) through education “to control their worst impulses and become encultured” was shattered by the unprecedented, industrial-scale destruction unleashed by World War One.
 
Layeux then spends most of his remaining essay focusing on ‘German legal theorist Carl Schmitt (1888-1985)’, whom I admit I’d never heard of (like Fichte). He jumps to post WWII, after briefly describing how Schmitt saw the Versailles Treaty as a betrayal (my term) of the previous tacit understanding that war between the European states was inevitable and therefore regulated. In other words, WWI demonstrated that such regulation could no longer work and that ‘nationalism leads to massacre’ (Layeux’s words).
 
Post WWII, Layeux argues that “the triumph of Keynesian economics in the West and Communism in the East saw the rise of state-controlled economics”, which has evolved and morphed into trade blocs, though Layeux doesn’t mention that.
 
It’s only towards the end that he tells us that “Carl Schmitt was a monster. A supporter of the Nazi regime, he did everything he could to become the official lawyer of the Third Reich.” Therefore we shouldn’t be surprised to learn that, according to Layeux, Schmitt argued that “…this new type of individual freedom requires an extremely intrusive state.” In effect, it’s a position diametrically opposed to neo-liberalism, which is how most of us see the modern world evolving.
 
I don’t have the space to do full justice to Layeux’s arguments, but, in the end, I found him pessimistic. He argues that current changes in the political landscape “are in line with what Schmitt predicted: the return of premodern forms of violence”. Effectively, the “removal of state control individualism” (is that an oxymoron?) is an evocation of what he calls “Schmitt’s curse: violence cannot be erased or tamed, but only managed through political and social engineering.” By ‘premodern forms of violence’, I assume he means the sectarian violence we’ve seen a lot of at the start of this century, in various places, which he seems to be comparing to the religious wars that plagued Europe for centuries.
 
Maybe I’m just an optimist, but I do think I live in a better world than the one my parents inhabited, considering they had to live through the Great Depression and WWII, and both of them had very limited education despite being obviously very intelligent. And so yes, I’m one of those who thinks that education is key, but it’s currently creating a social divide, as was recently demonstrated in the US election. It’s also evident elsewhere, like Australia and the UK (think Brexit), where people living in rural areas feel disenfranchised, and polarisation in politics is emerging as a result. This video interview with a Harvard philosopher in the US gives the best analysis I’ve come across, because he links this social divide to the political schism we are witnessing.
 
And this finally brings me to the other essay I reference in my introduction: Doughnut Economics by David Howard, who is ‘a retired headteacher, and Chair of the U3A Philosophy Group in Church Stretton, Shropshire.’ The gist of his treatise is the impact of inequality, which arises from the class or social divide that I just mentioned. His reference to ‘Doughnut Economics’ is a 2017 book by Kate Raworth, who, according to Howard, “combined planetary boundaries with the idea of a social foundation – a level of life below which no person should be allowed to fall.”
 
In particular, she focuses on the consequences of climate change and other environmental issues like biodiversity loss, ocean acidification, freshwater withdrawals, chemical pollution and land conversion (not an exhaustive list). There seems to be a tension, if not an outright conflict, between economic growth and industrial-scale progress, with their commensurate rising standards of living, and the stresses we are imposing on the planet. And this tension is not just political but physical. It’s also asymmetrical, in that some of us benefit more than others. But because those who benefit effectively control the outcomes, the asymmetry leads to both global and national inequalities that no one wants to address. Yet history shows that they will eventually bite us, and I feel this is possibly the real issue that Layeux was alluding to, yet never actually addressed.
 
Arguably, the most important and definitive social phenomenon in the last century was the rise of feminism. It’s hard for us (in the West at least) to imagine that for centuries women were treated as property, and still are in some parts of the world: that their talents, abilities and intellect were ignored, or treated as aberrations when they became manifest.
 
There are many examples, right up until last century, but a standout for me is Hypatia (c.400AD), who was Librarian at the famous Library of Alexandria, following in the footsteps of such luminaries as Euclid and Eratosthenes. She was not only a scientist and mathematician, but she mentored a Bishop and a Roman Prefect (I’ve seen some of the correspondence from the Bishop, whose admiration and respect shines through). She was killed by a Christian mob. Being ahead of your time can be fatal. Other examples include Socrates (~400BC) and Alan Turing (20th Century), and arguably Jesus, who was a philosopher, not a God.
 
Getting back to feminism, education again is the key, but I’d suggest that the introduction of oral contraception will be seen as a major turning point in humanity’s cultural and technological evolution.
 
What I find frustrating is that I believe we have the means, technologically and logistically, to address inequality, but the politico-economic model we are following seems incapable of pursuing it. This won’t be achieved with revolutions or maintaining the status quo. History shows that real change is generational, and it’s evolutionary. When I look around the world, I think Europe is on a better path than America, but the 21st Century requires a global approach that’s never been achieved before, and seems unlikely at present, given the rise of populist movements which exacerbate polarisation.
 
The one thing I’ve learned from a working lifetime in engineering, is that co-operation and collaboration will always succeed over division and obstruction, which our political parties perversely promote. I’ve made the point before that the best leaders are the ones who get the best out of the people they lead, whether they are captains of a sporting team, directors of a stage production, project managers or world leaders. Anyone who has worked in a team knows the importance of achieving consensus and respecting others’ expertise.

Tuesday, 26 November 2024

An essay on authenticity

 I read an article in Philosophy Now by Paul Doolan, who ‘taught philosophy in international schools in Asia and in Europe’ and is also an author of non-fiction. The title of the article is Authenticity and Absurdity, whereby he effectively argues a case that ‘authenticity’ has been hijacked (my word, not his) by capitalism and neo-liberalism. I won’t even go there, and the only reason I mention it is because ‘authenticity’ lies at the heart of existentialism as I believe it should be practiced.
 
But what does it mean in real terms? Does it mean being totally honest all the time, not only to others but also to yourself? Well, to some extent, I think it does. I happened to grow up in an environment shaped by my father who, as my chief exemplar, pretty much said whatever he was thinking. He didn’t like artifice or pretentiousness, and he’d call it out if he smelled it.
 
In my mid-to-late 20s I worked under a guy who had exactly the same temperament. He exhibited no tact whatsoever, no matter who his audience was, and he rubbed people up the wrong way left, right and centre (as we say in Oz). Not altogether surprisingly, he and I got along famously, as back then I was as unfiltered as he was. He was of Dutch heritage, I should point out, but being unfiltered is often considered an Aussie trait.
 
I once attempted to have a relationship with someone who was extraordinarily secretive about virtually everything. Not surprisingly, it didn’t work out. I have kept secrets – I can think of some I’ll take to my grave – but that’s to protect others more than myself, and it would be irresponsible if I didn’t.
 
I often quote Socrates: To live with honour in this world, actually be what you try to appear to be. Of course, Socrates never wrote anything down, but it sounds like something he would have said, based on what we know about him. Unlike Socrates, I’ve never been tested, and I doubt I’d have the courage if I was. On the other hand, my father was, both in the theatre of war and in prison camps.
 
I came across a quote recently, which I can no longer find, where someone talked about looking back on their life and being relatively satisfied with what they’d done and achieved. I have to say that I’m at that stage of my life, where looking back is more prevalent than looking forward, and there is a tendency to have regrets. But I have a particular approach to dealing with regrets: I tell people that I don’t have regrets because I own my mistakes. In fact, I think that’s an essential requirement for being authentic.
 
But to me, what’s more important than the ‘things I have achieved’ are the friendships I’ve made – the people I’ve touched and who have touched me. I think I learned very early on in life that friendship is more valuable than gold. I can remember the first time I read Aristotle’s essay on friendship and thought it incorporated an entire philosophy. Friendship tests authenticity by its very nature, because it’s about trust and loyalty and integrity (a recurring theme in my fiction, as it turns out).
 
In effect, Aristotle contended that you can judge the true nature and morality of a person by the friendships they form and whether they are contingent on material reward (utilitarian is the word used in his Ethics) or whether they are based on genuine empathy (my word of choice) and without expectation or reciprocation, except in kind. I tend to think narcissism is the opposite of authenticity because it creates its own ‘reality distortion field’, as Walter Isaacson put it in his Steve Jobs biography, whereby their followers (not necessarily friends per se) accept their version of reality as opposed to everyone else outside their circle. So, to some extent, it’s about exclusion versus inclusion. (The Trump phenomenon is the most topical, contemporary example.)
 
I’ve lived a flawed life, all of which is a consequence of a combination of circumstance both within and outside my control. Because that’s what life is: an interaction between fate and free will. As I’ve said many times before, this describes my approach to writing fiction, because fate and free will are represented by plot and character respectively.
 
I’m an introvert by nature, yet I love to engage in conversation, especially in the field of ideas, which is how I perceive philosophy. I don’t get too close to people and I admit that I tend to control the distance and closeness I keep. I think people tolerate me in small doses, which suits me as well as them.

 

Addendum 1: I should say something about teamwork, because that's what I learned in my professional life. I found I was very good working with people who had far better technical skills than me. In my later working life, I enjoyed the cross-generational interactions that often created their own synergies as well as friendships, even if they were fleeting. It's the inherent nature of project work that you move on, but one of the benefits is that you keep meeting and working with new people. In contrast to this, writing fiction is a very solitary activity, where you spend virtually your entire time in your own head. As I pointed out in a not-so-recent Quora post, art is the projection of one's inner world so that others can have the same emotional experience. To quote:

We all have imagination, which is a form of mental time-travel, both into the past and the future, which I expect we share with other sentient creatures. But only humans, I suspect, can ‘time-travel’ into realms that only exist in the imagination. Storytelling is more suited to that than art or music.

Addendum 2: This is a short Quora post by Frederick M. Dolan (Professor of Rhetoric, Emeritus at University of California, Berkeley with a Ph.D. in Political Philosophy, Princeton University, 1987) writing on this very subject, over a year ago. He makes the point that, paradoxically: To believe that you’re under some obligation to be authentic is, therefore, self-defeating. (So inauthentic)

He upvoted a comment I made, roughly a year ago:

It makes perfect sense to me. Truly authentic people don’t know they’re being authentic; they’re just being themselves and not pretending to be something they’re not.

They’re the people you trust even if you don’t agree with them. Where I live, pretentiousness is the biggest sin.

Thursday, 14 November 2024

How can we make a computer conscious?

 This is another question of the month from Philosophy Now. My first reaction was that the question was unanswerable, but then I realised that was my way in. So, in the end, I left it to the last moment, but hopefully meeting their deadline of 11 Nov., even though I live on the other side of the world. It helps that I’m roughly 12hrs ahead.


 
I think this is the wrong question. It should be: can we make a computer appear conscious so that no one knows the difference? There is a well-known philosophical conundrum, which is that I don’t know whether someone else is conscious just as I am. The one experience that demonstrates the impossibility of knowing is dreaming. In dreams, we often interact with other ‘people’ whom we know only exist in our mind – but only once we’ve woken up. It’s only my interaction with others that makes me assume they have the same experience of consciousness that I have. And, ironically, this impossibility of knowing equally applies to someone interacting with me.

This also applies to animals, especially ones we become attached to, which is a common occurrence. Again, we assume that these animals have an inner world just like we do, because that’s what consciousness is – an inner world. 

Now, I know we can measure people’s brain waves, which we can correlate with consciousness and even subconsciousness, like when we're asleep, and even when we're dreaming. Of course, a computer can also generate electrical activity, but no one would associate that with consciousness. So the only way we would judge whether a computer is conscious or not is by observing its interaction with us, the same as we do with people and animals.

I write science fiction and AI figures prominently in the stories I write. Below is an excerpt of dialogue I wrote for a novel, Sylvia’s Mother, whereby I attempt to give an insight into how a specific AI thinks. Whether it’s conscious or not is not actually discussed.

To their surprise, Alfa interjected. ‘I’m not immortal, madam.’
‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’
‘Philosophically, that’s a moot point, madam.’
‘Philosophically? What do you mean?’
‘I’m not immortal, madam, because I’m not alive.’
Tao chipped in. ‘Doesn’t that depend on how you define life?’
‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’
‘You mean, like when we’re asleep.’
‘An analogy, I believe. I don’t sleep either.’
Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’ She said, by way of changing the subject, ‘How much longer before we have to go into hibernation, Alfa?’
‘Not long. I’ll let you know, madam.’

 

There is a 400 word limit; however, there is a subtext inherent in the excerpt I provided from my novel. Basically, the (fictional) dialogue highlights the fact that the AI is not 'living', which I would consider a prerequisite for consciousness. Curiously, Anil Seth (who wrote a book on consciousness) makes the exact same point in this video from roughly 44m to 51m.
 

Monday, 28 October 2024

Do we make reality?

 I’ve read 2 articles, one in New Scientist (12 Oct 2024) and one in Philosophy Now (Issue 164, Oct/Nov 2024), which, on the surface, seem unrelated, yet both deal with human exceptionalism (my term) in the context of evolution and the cosmos at large.
 
Starting with New Scientist, there is an interview with theoretical physicist, Daniele Oriti, under the heading, “We have to embrace the fact that we make reality” (quotation marks in the original). In some respects, this continues on with themes I raised in my last post, but with different emphases.
 
This helps to explain the title of the post, but, even if it’s true, there are degrees of possibilities – it’s not all or nothing. Having said that, Donald Hoffman would argue that it is all or nothing, because, according to him, even ‘space and time don’t exist unperceived’. On the other hand, Oriti’s argument is closer to Paul Davies’ ‘participatory universe’ that I referenced in my last post.
 
Where Oriti and I possibly depart, philosophically speaking, is that he calls the idea of a reality independent of us ‘observers’ “naïve realism”. He acknowledges that this is ‘provocative’, but like many provocative ideas it provides food for thought. Firstly, I will delineate how his position differs from Hoffman’s, even though he never mentions Hoffman, because I think it’s important.
 
Both Oriti and Hoffman argue that there seems to be something even more fundamental than space and time, and there is even a recent YouTube video where Hoffman claims that he’s shown mathematically that consciousness produces the mathematical components that give rise to spacetime; he has published a paper on this (which I haven’t read). But, in both cases (by Hoffman and Oriti), the something ‘more fundamental’ is mathematical, and one needs to be careful about reifying mathematical expressions, which I once discussed with physicist, Mark John Fernee (Qld University).
 
The main issue I have with Hoffman’s approach is that space-time is dependent on conscious agents creating it, whereas, from my perspective and that of most scientists (although I’m not a scientist), space and time exist external to the mind. There is an exception, of course, and that is when we dream.
 
If I was to meet Hoffman, I would ask him if he’s heard of proprioception, which I’m sure he has. I describe it as the 6th sense we are mostly unaware of, but which we couldn’t live without. Actually, we could, but with great difficulty. Proprioception is the sense that tells us where our body extremities are in space, independently of sight and touch. Why would we need it, if space is created by us? On the other hand, Hoffman talks about a ‘H sapiens interface’, which he likens to ‘desktop icons on a computer screen’. So, somehow our proprioception relates to a ‘spacetime interface’ (his term) that doesn’t exist outside the mind.
 
A detour, but relevant, because space is something we inhabit, along with the rest of the Universe, and so is time. In relativity theory there is absolute space-time, as opposed to absolute space and time separately. It’s called the fabric of the universe, which is more than a metaphor. As Viktor Toth points out, even QFT seems to work ‘just fine’ with spacetime as its background.
 
We can do quantum field theory just fine on the curved spacetime background of general relativity.

 
[However] what we have so far been unable to do in a convincing manner is turn gravity itself into a quantum field theory.
 
And this is where Oriti argues we need to find something deeper. To quote:
 
Modern approaches to quantum gravity say that space-time emerges from something deeper – and this could offer a new foundation for physical laws.
 
He elaborates: I work with quantum gravity models in which you don’t start with a space-time geometry, but from more abstract “atomic” objects described in purely mathematical language. (Quotation marks in the original.)
 
And this is the nub of the argument: all our theories are mathematical models and none of them are complete, in as much as they all have limitations. If one looks at the history of physics, we have uncovered new ‘laws’ and new ‘models’ when we’ve looked beyond the limitations of an existing theory. And some mathematical models even turned out to be incorrect, despite giving answers to what was ‘known’ at the time. The best example being Ptolemy’s Earth-centric model of the solar system. Whether string theory falls into the same category, only future historians will know.
 
In addition, different models work at different scales. As someone pointed out (Mile Gu at the University of Queensland), mathematical models of phenomena at one scale are different to mathematical models at an underlying scale. He gave the example of magnetism, demonstrating that mathematical modelling of the magnetic forces in iron could not predict the pattern of atoms in a 3D lattice as one might expect. In other words, there should be a causal link between individual atoms and the overall effect, but it could not be determined mathematically. To quote Gu: “We were able to find a number of properties that were simply decoupled from the fundamental interactions.” Furthermore, “This result shows that some of the models scientists use to simulate physical systems have properties that cannot be linked to the behaviour of their parts.”
 
This makes me sceptical that we will find an overriding mathematical model that will entail the Universe at all scales, which is what theories of quantum gravity attempt to do. One of the issues that some people raise is that a feature of QM is superposition, and the superposition of a gravitational field seems inherently problematic.
 
Personally, I think superposition only makes sense if it’s describing something that is yet to happen, which is why I agree with Freeman Dyson that QM can only describe the future, which is why it only gives us probabilities.
 
Also, in quantum cosmology, time disappears (according to Paul Davies, among others) and this makes sense (to me), if it’s attempting to describe the entire universe into the future. John Barrow once made a similar point, albeit more eruditely.
 
Getting off track, but one of the points that Oriti makes is whether the laws and the mathematics that describes them are epistemic or ontic. In other words, are they reality or just descriptions of reality? I think it gets blurred, because while they are epistemic by design, there is still an ontology that exists without them, whereas Oriti calls that ‘naïve realism’. He contends that reality doesn’t exist independently of us. This is where I always cite Kant: that we may never know the ‘thing-in-itself’, but only our perception of it. Where I diverge from Kant is that the mathematical models are part of our perception. Where I depart from Oriti is that I argue there is a reality independent of us.
 
Both QM and relativity theory are observer-dependent, which means they could both be describing an underlying reality that continually eludes us. Whereas Oriti argues that ‘reality is made by our models, not just described by them’, which would make it subjective.
 
As I pointed out in my last post, there is an epistemological loop, whereby the Universe created the means to understand itself, through us. Whether there is also an ontological loop as both Davies and Oriti infer, is another matter: do we determine reality through our quantum mechanical observations? I will park that while I elaborate on the epistemic loop.
 
And this finally brings me to the article in Philosophy Now by James Miles titled, We’re as Smart as the Universe gets. He argues that, from an evolutionary perspective, there is a one-in-one-billion possibility that a species with our cognitive abilities could arise by natural selection, and there is no logical reason why we would evolve further, from an evolutionary standpoint. I have touched on this before, where I pointed out that our cultural evolution has overtaken our biological evolution and that would also happen to any other potential species in the Universe who developed cognitive abilities to the same level. Dawkins coined the term, ‘meme’, to describe cultural traits that have ‘survived’, which now, of course, has currency on social media way beyond its original intention. Basically, Dawkins saw memes as analogous to genes, which get selected; not by a natural process but by a cultural process.
 
I’ve argued elsewhere that mathematical theorems and scientific theories are not inherently memetic. This is because they are chosen because they are successful, whereas memes are successful because they are chosen. Nevertheless, such theorems and theories only exist because a culture has developed over millennia which explores them and builds on them.
 
Miles talks about ‘the high intelligence paradox’, which he associates with Darwin’s ‘highest and most interesting problem’. He then discusses the inherent selection advantage of co-operation, not to mention specialisation. He talks about the role that language has played, which is arguably what really separates us from other species. I’ve argued that it’s our inherent ability to nest concepts within concepts ad infinitum (most obvious in our facility for language, as I’m doing now) that allows us not only to tell stories, compose symphonies and explore an abstract mathematical landscape, but also to build motor cars and aeroplanes and fly men to the moon. Are we the only species in the Universe with this super-power? I don’t know, but it’s possible.
 
There are 2 quotes I keep returning to:
 
The most incomprehensible thing about the Universe is that it’s comprehensible. (Einstein)
 
The Universe gave rise to consciousness and consciousness gives meaning to the Universe.
(Wheeler)
 
I haven’t elaborated, but Miles makes the point, while referencing historical antecedents, that there appears no evolutionary 'reason’ that a species should make this ‘one-in-one-billion transition’ (his nomenclature). Yet, without this transition, the Universe would have no meaning that could be comprehended. As I say, that’s the epistemic loop.
 
As for an ontic loop, that is harder to argue. Photons exist in zero time, which is why I contend they are always in the future of whatever they interact with, even if they were generated in the CMBR some 13.8 billion years ago. So how do we resolve that paradox? I don’t know, but maybe that’s the link that Davies and Oriti are talking about, though neither of them mentions it. But here’s the thing: when you do detect such a photon (for which time is zero), you instantaneously ‘see’ back to 380,000 years after the Universe’s birth.