Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Monday, 13 January 2025

Is there a cosmic purpose? Is our part in it a chimera?

I’ve been procrastinating about writing this post for some time, because it comes closest to a ‘theory’ of Life, the Universe and Everything. ‘Theory’ in this context means a philosophical point of view, not a scientifically testable theory in the Karl Popper sense (it can’t be falsified). It takes what science we currently know and interprets it to fit a particular philosophical prejudice, which is what most scientists and philosophers do even when they don’t admit it.
 
I’ve been watching a lot of YouTube videos, some of which attempt to reconcile science and religion, which could be considered a lost cause, mainly because there is a divide going back to the Dark Ages that the Enlightenment never bridged, despite what some people might claim. One of the many videos I watched was a moderated discussion between Richard Dawkins and Jordan Peterson, which remained remarkably civil, especially considering that Peterson really did go off on flights of fancy (from my perspective), comparing so-called religious ‘truths’ with scientific ‘truths’. I thought Dawkins handled it really well, because he took pains not to ridicule Peterson, while pointing out fundamental problems with such comparisons.
 
I’m already going off on tangents I never intended, but I raise it because Peterson makes the point that science actually arose from the Judeo-Christian tradition – a point that Dawkins didn’t directly challenge, but I would have. I always see the modern scientific enterprise, if I can call it that, starting with Copernicus, Galileo and Kepler, but given particular impetus by Newton and his contemporary and rival, Leibniz. It so happens that they all lived in Europe when it was dominated by Christianity, but the real legacy they drew on was from the Ancient Greeks, with a detour into Islam where it acquired Hindu influences, which many people conveniently ignore. In particular, we adopted Hindu-Arabic arithmetic, incorporating zero as a decimal place-marker, without which physics would have been stillborn.
 
Christianity did its best to stop the scientific enterprise: for example, when it threatened Galileo with the Inquisition and put him under house arrest. Modern science evolved despite Christianity, not because of it. And that’s without mentioning Darwin’s problems, which still have ramifications today in the most advanced technological nation in the world.
 
A lengthy detour, but only slightly off-topic. There is a mystery at the heart of everything on the very edge of our scientific understanding of the world that I believe is best expressed by Paul Davies, but was also taken up by Stephen Hawking, of all people, towards the end of his life. I say, ‘of all people’, because Hawking was famously sceptical of the role of philosophy, yet, according to his last collaborator, Thomas Hertog, he was very interested in the so-called Big Questions, and like Davies, was attracted to John Wheeler’s idea of a cosmic-scale quantum loop that attempts to relate the end result of the Universe to its beginning.
 
Implicit in this idea is that the Universe has a purpose, which has religious connotations. So I want to make that point up front and add that there is No God Required. I agree with Davies that science neither proves nor disproves the existence of God, which is very much a personal belief, independent of any rationalisation one can make.
 
I wrote a lengthy post on Hawking’s book, The Grand Design, back in 2020 (which he co-wrote with Leonard Mlodinow). I will quote from that post to highlight the point I raised 2 paragraphs ago: the link between present and past.
 
Hawking contends that the ‘alternative histories’ inherent in Feynman’s mathematical method not only affect the future but also the past. What he is implying is that when an observation is made, it determines the past as well as the future. He talks about a ‘top down’ history in lieu of a ‘bottom up’ history, which is the traditional way of looking at things. In other words, cosmological history is one of many ‘alternative histories’ (his terminology) that evolve from QM.
 
Then I quote directly from Hawking’s text:
 
This leads to a radically different view of cosmology, and the relation between cause and effect. The histories that contribute to the Feynman sum don’t have an independent existence, but depend on what is being measured. We create history by our observation, rather than history creating us (my emphasis).
 
One can’t contemplate this without considering the nature of time. There are in fact 2 different experiences we have of time, and that has created debate among physicists as well as philosophers. The first experience is simply observational. Every event with a causal relationship that is separated by space is axiomatically also separated by time, and this is a direct consequence of the constant speed of light. If this wasn’t the case, then everything would literally happen at once. So there is an intrinsic relationship between time and light, which Einstein had the genius to see was not just a fundamental law of the Universe, but changed perceptions of time and space for different observers. Not only that, his mathematical formulations of this inherent attribute led him to the conclusion that time itself was fluid, dependent on an observer’s motion as well as the gravitational field in which they happened to be.
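To make that fluidity concrete, here is a minimal numerical sketch of the special-relativistic time dilation factor (the Lorentz factor). The function name and the 0.9c example are mine, purely for illustration.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    """Factor by which a moving clock runs slow relative to a stationary one."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A clock moving at 90% of light speed ticks roughly 2.29 times slower.
print(lorentz_gamma(0.9 * C))  # ~2.294
```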
 
I’m going to make another detour because it’s important and deals with one of the least understood aspects of physics. One of the videos I watched that triggered this very essay was labelled The Single Most Important Experiment in Physics, which is the famous bucket experiment conducted by Newton, which I’ve discussed elsewhere. Without going into details, it basically demonstrates that there is a frame of reference for the entire universe, which Newton called absolute space and Einstein called absolute spacetime. Penrose also discusses the importance of this concept, because it means that all relativistic phenomena take place against a cosmic background. It’s why we can determine the Earth’s velocity with respect to the entire universe by measuring the Doppler shift against the CMBR (cosmic microwave background radiation).
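As a back-of-envelope illustration of that last point (my own arithmetic, using the measured dipole amplitude of about 3.36 mK), the first-order Doppler relation v = c × ΔT/T recovers the oft-quoted figure of roughly 370 km/s for our motion relative to the CMBR.

```python
c = 299_792.458   # speed of light, km/s
T_cmb = 2.725     # mean CMB temperature, K
dT = 3.36e-3      # measured CMB dipole amplitude, K

# First-order Doppler shift: dT/T = v/c, hence v = c * dT/T
v = c * dT / T_cmb
print(f"{v:.0f} km/s")  # ~370 km/s relative to the CMB rest frame
```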
 
Now, anyone with even a rudimentary knowledge of relativity theory knows that it’s not just time that’s fluid but also space. But, as Kip Thorne has pointed out, mathematically we can’t tell if it’s the space that changes in dimension or the ruler used to measure it. I’ve long contended that it’s the ruler, which can be the clock itself. We can use a clock to measure distance, and if the clock changes, which relativity tells us it does, then it’s going to measure a different distance to a stationary observer. By stationary, I mean one who is travelling at a lesser speed with respect to the overall CMBR.
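Here is a toy calculation of that ‘clock as ruler’ point, using nothing beyond standard special relativity (the scenario and numbers are mine): multiply the traveller’s slowed clock by their speed and you get a shorter distance than the one the stationary observer assigns.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A ship crosses a distance of 4.24 light-years (Earth frame) at 0.9c.
d_ly, beta = 4.24, 0.9
t_earth = d_ly / beta               # years elapsed in the Earth frame
t_ship = t_earth / gamma(beta * C)  # years recorded on the ship's clock

# Distance inferred from the ship's own clock: speed x onboard time
print(beta * t_ship)                # ~1.85 light-years, not 4.24
```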
 
So what is the other aspect of time that we experience? It’s the very visceral sensation we all have that time ‘flows’, because we all ‘sense’ its ‘passing’. And this is the most disputed aspect of time, which many physicists, including Davies, tell us is an illusion. Some, like Sabine Hossenfelder, are proponents of the ‘block universe’, first proposed by Einstein, whereby the future already exists like the past. This is why both Hossenfelder and Einstein believed in what is now called superdeterminism – everything is predetermined in advance – which is one of the reasons that Einstein didn’t like the philosophical ramifications of quantum mechanics (I’ll get to his ‘spooky action at a distance’ later).
 
Davies argues that the experience of time passing is a psychological phenomenon and the answer will be found in neuroscience, not physics. And this finally brings consciousness into the overall scheme of things. I’ve argued elsewhere that, without consciousness, the Universe has no meaning and no purpose. Since that’s the point of this dissertation, it can be summed up with an aphorism from Wheeler.
 
The Universe gave rise to consciousness and consciousness gives the Universe meaning.
 
I like to cite Schrodinger from his lectures on Mind and Matter appended to his tome, What is Life? Consciousness exists in a constant present, and I argue that it’s the only thing that does (the one possible exception is a photon of light, for which time is zero). As I keep pointing out, this is best demonstrated every time someone takes a photo: it freezes time, or more accurately, it creates an image frozen in time; meaning it’s forever in our past, but so is the event that it represents.
 
The flow of time we all experience is a logical consequence of this. In a way, Davies is right: it’s a neurological phenomenon, in as much as consciousness seems to ‘emerge’ from neuronal activity. But I’m not sure Davies would agree with me – in fact, I expect he wouldn’t.
 
Those who have some familiarity with my blog may see a similarity between these 2 manifestations of time and my thesis on Type A time and Type B time (originally proposed by J.M.E. McTaggart, 1908); the difference between them, in both cases, being the inclusion of consciousness.
 
Now I’m going to formulate a radical idea, which is that in Type B time (the time without consciousness), the flow of time is not experienced but there are chains of causal events. And what if all the possible histories are potentially there, in the same way that future possible histories are, as dictated by Feynman’s model. And what if the one history that we ‘observe’, going all the way back to the pattern in the CMBR (our only remnant relic of the Big Bang), only became manifest when consciousness entered the Universe. And when I say ‘entered’, I mean that it arose out of a process that had evolved. Davies, and also Wheeler before him, speculated that the ‘laws’ of nature we observe have also evolved as part of the process. But what if those laws only became frozen in the past when consciousness finally became manifest. This is the backward-in-time quantum loop that Wheeler hypothesised.
 
I contend that QM can only describe the future (an idea espoused by Feynman’s collaborator, Freeman Dyson), meaning that Schrodinger’s equation can only describe the future, not the past. Once a ‘measurement’ is made, it no longer applies. Penrose explains this best, and has his own argument that the ‘collapse’ of the wave function is created by gravity. Leaving that aside, I argue that the wave function only exists in our future, which is why it’s never observed and why Schrodinger’s equation can’t be applied to events that have already happened. But what if it was consciousness that finally determined which of many past paths became the reality we observe. You can’t get more speculative than that, but it provides a mechanism for Wheeler’s ‘participatory universe’ that both Davies and Hawking found appealing.
 
I’m suggesting that the emergence of consciousness changed the way time works in the Universe, in that the past is now fixed and only the future is still open.
 
Another video I watched also contained a very radical idea, which is that spacetime is created like a web into the future (my imagery). The Universe appears to have an edge in time but not in space, and this is rarely addressed. It’s possible that space is being continually created with the Universe’s expansion – an idea explored by physicist Richard Muller – but I think it’s more likely that the Universe is Euclidean, meaning flat, but bounded. We may never know.
 
But if the Universe has an edge in time, how does that work? I think the answer is quantum entanglement, though no one else does. Everyone agrees that entanglement is non-local, meaning it’s not restricted by the rules of relativity, and it’s not spatially dependent. I speculate that quantum entanglement is the Universe continually transitioning from a quantum state to a classical physics state. This idea is just as heretical as the one I proposed earlier, and while Einstein would call it ‘spooky action at a distance’, it makes sense, because in quantum cosmology, time mathematically disappears. And it disappears because you can’t ‘see’ the future of the Universe, even in principle.

Tuesday, 7 January 2025

Why are we addicted to stories involving struggle?

This is something I’ve written about before, so what can I possibly add? Sometimes the reframing of a question changes the emphasis. In this case, I wrote a post on Quora in response to a fairly vague question, which I took more seriously than the questioner probably expected. As I said, I’ve dealt with these themes before, but adding a very intimate family story adds emotional weight. It’s a story I’ve related before, but this time I elaborate in order to give it the significance I feel it deserves.
 
What are some universal themes in fiction?
 
There is ONE universal theme that’s found virtually everywhere, and its appeal is that it provides a potential answer to the question: What is the meaning of life?

In virtually every story that’s been told, going as far back as Homer’s Odyssey and up to the latest superhero movie, with everything else in between (in the Western canon, at least), you have a protagonist who has to deal with obstacles, hardships and tribulations. In other words, they are tested, often in extremis, and we all take part vicariously to the point that it becomes an addiction.

There is a quote from the I Ching, which I think sums it up perfectly.

Adversity is the opposite of success, but it can lead to success if it befalls the right person.

Most of us have to deal with some form of adversity in life; some more so than others. And none of us are unaffected by it. Socrates’ most famous saying – ‘The unexamined life is not worth living’ – is a variation on this theme. He apparently said it when he was forced to face his death: the consequence of actions he had deliberately taken, but for which he refused to show regret.

And yes, I think this is the meaning of life, as it is lived. It’s why we expect to become wiser as we get older, because wisdom comes from dealing with adversity, whether it ultimately leads to success or not.

When I write a story, I put my characters through hell, and when they come out the other side, they are invariably wiser if not triumphant. I’ve had characters make the ultimate sacrifice, just like Socrates, because they would prefer to die for a principle rather than live with shame.

None of us know how we will behave if we are truly tested, though sometimes we get a hint in our dreams. Stories are another way of imagining ourselves in otherwise unimaginable situations. My father is one who was tested firsthand in battle and in prison. The repercussions were serious, not just for him, but for those of us who had to live with him in the aftermath.

He had a recurring dream where there was someone outside the house whom he feared greatly – it was literally his worst nightmare. One night he went outside and confronted them, killing them barehanded. He told me this when I was much older, naturally, but it reminded me of when Luke Skywalker confronted his doppelganger in The Empire Strikes Back. I’ve long argued that the language of stories is the language of dreams. In this case, the telling of my father’s dream reminded me of a scene from a movie that made me realise it was more potent than I’d imagined.

I’m unsure how my father would have turned out had he not faced his demon in such a dramatic and conclusive fashion. It obviously had a big impact on him; he saw it as a form of test, which he believed he’d ultimately passed. I find it interesting that it was not something he confronted the first time he was made aware of it – it simply scared him to death. Stories are surrogate dreams; they serve the same purpose if they have enough emotional force.

Life itself is a test that we all must partake in, and stories are a way of testing ourselves against scenarios we’re unlikely to confront in real life.

Sunday, 29 December 2024

The role of dissonance in art, not to mention science and mathematics

I was given a book for a birthday present just after the turn of the century, titled A Terrible Beauty: The People and Ideas that Shaped the Modern Mind, by Peter Watson. A couple of things worth noting: it covers the history of the 20th Century, but not geo-politically as you might expect. Instead, he writes about the scientific discoveries alongside the arts and cultural innovations, and he talks about both with equal erudition, which is unusual.
 
The reason I mention this is that I remember Watson talking about the human tendency to push something to its limits and then beyond. He gave examples in science, mathematics, art and music. A good example in mathematics is the adoption of √-1 (giving us ‘imaginary numbers’), which we are taught is impossible, then suddenly it isn’t. The thing is that it allows us to solve problems that were previously impossible, in the same way that negative numbers give solutions to arithmetical subtractions that were previously unanswerable. There were no negative numbers in ancient Greece because their mathematics was driven by geometry, and the idea of a negative volume or area made no sense.
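As a quick sketch of that ‘impossible, then suddenly it isn’t’ move (my example, not Watson’s): a quadratic with a negative discriminant has no real solutions, yet once √-1 is admitted the same formula works unchanged.

```python
import cmath

# x^2 + 2x + 5 = 0 has discriminant -16, so no real roots, but the
# quadratic formula goes through once sqrt(-16) = 4j is allowed.
a, b, c = 1, 2, 5
root_disc = cmath.sqrt(b**2 - 4*a*c)
roots = ((-b + root_disc) / (2*a), (-b - root_disc) / (2*a))
print(roots)  # ((-1+2j), (-1-2j))
```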
 
But in both cases – negative numbers and imaginary numbers – there is a cognitive dissonance that we have to overcome before we can gain familiarity and confidence in using them, or even understand what they mean in the ‘real world’, which is the problem the ancient Greeks had. Most people reading this have no problem, conceptually, dealing with negative numbers, because, for a start, they’re an integral aspect of financial transactions – I suspect everyone reading this above a certain age has had experience with debt and loans.
 
On the other hand, I suspect a number of readers struggle with a conceptual appreciation of imaginary numbers. Some mathematicians will tell you that the term is a misnomer, and its origin would tend to back that up. Apparently, Rene Descartes coined the term, disparagingly, because, like the ancient Greeks’ problem with negative numbers, he believed they had no relevance to the ‘real world’. And yet Descartes would have appreciated their usefulness in solving problems previously unsolvable, so I expect it would have been a real cognitive dissonance for him.
 
I’ve written an entire post on imaginary numbers, so I don’t want to go too far down that rabbit hole, but I think it’s a good example of what I’m trying to explicate. Imaginary numbers gave us something called complex algebra and opened up an entire new world of mathematics that is particularly useful in electrical engineering. But anyone who has studied physics in the last century is aware that, without imaginary numbers, an entire field of physics, quantum mechanics, would remain indescribable, let alone be comprehensible. The thing is that, even though most people have little or no understanding of QM, every electronic device you use depends on it. So, in their own way, imaginary numbers are just as important and essential to our lives as negative numbers are.
 
You might wonder how I deal with the cognitive dissonance that imaginary numbers induce. In QM, we have, at its most rudimentary level, something called Schrodinger’s equation, which he proposed in 1926 (“It’s not derived from anything we know,” to quote Richard Feynman). Schrodinger quickly realised it relied on imaginary numbers – he couldn’t formulate it without them. But here’s the thing: Max Born, a contemporary of Schrodinger, formulated something we now call the Born rule, which mathematically gets rid of the imaginary numbers (for the sake of brevity and clarity, I’ll omit the details) and gives the probability of finding the object (usually an electron) in the real world. In fact, without the Born rule, Schrodinger’s equation would be next-to-useless, and would have been consigned to the dustbin of history.
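For the curious, the gist of that step can be shown in a few lines (a toy two-state example of the Born rule; the amplitudes are invented for illustration): taking the squared magnitude of a complex amplitude always yields a real number, which is how probabilities emerge from an equation built on imaginary numbers.

```python
# A toy superposition with two complex amplitudes, a and b.
# Born rule: probability = |amplitude|^2, which is always real.
a = complex(0.6, 0.0)       # real amplitude
b = complex(0.0, 0.8)       # purely imaginary amplitude
p_a = abs(a) ** 2           # 0.36
p_b = abs(b) ** 2           # 0.64
print(p_a, p_b, p_a + p_b)  # the two probabilities sum to 1
```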
 
And that’s relevant, because prior to observing the particle, it’s in a superposition of states, described by Schrodinger’s equation as a wave function (Ψ), which some claim is a mathematical fiction. In other words, you need to get rid (clumsy phrasing, but accurate) of the imaginary component to make it relevant to the reality we actually see and detect. And the other thing is that once we have done that, the Schrodinger equation no longer applies – there is effectively a dichotomy between QM and classical physics (reality), which is called the ‘measurement problem’. Roger Penrose gives a good account in this video interview. So, even in QM, imaginary numbers are associated with what we cannot observe.
 
That was a much longer detour than I intended, but I think it demonstrates the dissonance that seems necessary in science and mathematics, and arguably necessary for its progress; plus it’s a good example of the synergy between them that has been apparent since Newton.
 
My original intention was to talk about dissonance in music, and the trigger for this post was a YouTube video by musicologist Rick Beato (pronounced be-arto), dissecting the Beatles song, Ticket to Ride, which he called ‘A strange but perfect song’. In fact, he says, “It’s very strange in many ways: it’s rhythmically strange; it’s melodically strange too”. I’ll return to those specific points later. To call Beato a music nerd is an understatement, and he gives a technical breakdown that, quite frankly, I can’t follow. I should point out that I’ve always had a good ‘ear’ that I inherited, and I used to sing, even though I can’t read music (neither could the Beatles). I realised quite young that I can hear things in music that others miss. Not totally relevant, but it might explain some things that I will expound upon later.
 
It's a lengthy, in-depth analysis, but if you go to 4.20-5.20, Beato actually introduces the term ‘dissonance’ after he describes how it applies. In effect, there is a dissonance between the notes that John Lennon sings and the chords he plays (on a 12-string guitar). And the thing is that we, the listener, don’t notice – someone (like Beato) has to point it out. Another quote from 15.00: “One of the reasons the Beatles songs are so memorable, is that they use really unusual dissonant notes at key points in the melody.”
 
The one thing that strikes you when you first hear Ticket to Ride is the unusual drum part. Ringo was very inventive and innovative, and became more adventurous, along with his bandmates, on later recordings. The Ticket to Ride drum part has become iconic: everyone knows it and recognises it. There is a good video where Ringo talks about it, along with another equally famous drum part he created. Beato barely mentions it, though right at the beginning, he specifically refers to the song as being ‘rhythmically strange’.
 
A couple of decades ago, can’t remember exactly when, I went and saw an entire Beatles concert put on by a rock band, augmented by orchestral strings and horn parts. It was in 2 parts with an intermission, and basically the 1st half was pre-Sergeant Pepper and the 2nd half, post. I can still remember that they opened the concert with Magical Mystery Tour and it blew me away. The thing is that they went to a lot of trouble to be faithful to the original recordings, and I realised that it was the first time I’d heard their music live, albeit with a cover band. And what immediately struck me was the unusual harmonics and rhythms they employed. Watching Beato’s detailed technical analysis puts this into context for me.
 
Going from imaginary numbers and quantum mechanics to one of The Beatles’ most popular songs may seem like a giant leap, but it highlights how dissonance is a universal principle for humans, and intrinsic to progression in both art and science.
 
Going back to Watson’s book that I referenced in the introduction, another obvious example that he specifically talks about is Picasso’s cubism.
 
In storytelling, it may not be so obvious, and I think modern fiction has been influenced more by cinema than anything else, where the story needs to be more immediate and it needs to flow with minimal description. There is now an expectation that it puts you in the story – what we call immersion.
 
On another level, I’ve noticed a tendency on my part to create cognitive dissonance in my characters and therefore the reader. More than once, I have combined sexual desire with fear, which some may call perverse. I didn’t do this deliberately – a lot of my fiction contains elements I didn’t foresee. Maybe it says something about my own psyche, but I honestly don’t know.

Friday, 20 December 2024

John Marsden (acclaimed bestselling author): 27 Sep. 1950 – 18 Dec. 2024

 At my mother’s funeral a few years ago, her one-and-only great-granddaughter (Hollie Smith) read out a self-composed poem, titled ‘What’s in a dash?’, which I thought was very clever, and which I now borrow, because she’s referring to the dash between the dates, as depicted in the title of this post. In the case of John Marsden, it’s an awful lot, if you read the obituary in the link I provide at the bottom.
 
He would be largely unknown outside of Australia, and being an introvert, he’s probably not as well known inside Australia as he should be, despite his prodigious talent as a writer and his enormous success in what is called ‘young-adult fiction’. I think it’s a misnomer, because a lot of so-called YA fiction is among the best you can read as an adult.
 
This is what I wrote on Facebook, and I’ve only made very minor edits for this post.
 
I only learned about John Marsden's passing yesterday (Wednesday, 18 Dec., the day it happened). Sobering that we are so close in age (by a few months).
 
Marsden was a huge inspiration to me as a writer. I consider him to be one of the best of Australian writers - I put him up there with George Johnston, another great inspiration for me. I know others will have their own favourites.
 
I would like to have met him, but I did once have a brief correspondence with him, and he was generous and appreciative.

I found Marsden's writing so good, it was intimidating. I actually stopped reading him because he made me feel that my own writing was so inadequate. I no longer feel that, I should add. I just want to pay him homage, because he was so bloody good.

 

This is an excellent obituary by someone (Alice Pung) who was mentored by him, and considered him a good and loyal friend right up to the end.

On a philosophical note, John was wary of anyone claiming certainty, with the unstated contention that doubt was necessary for growth and development.


Friday, 13 December 2024

On Turing, his famous ‘Test’ and its implication: can machines think?

I just came out of hospital Wednesday, after one week to the day. My last post was written while I was in there, so obviously I was not cognitively impaired. I mention this because I took some reading material: a hefty volume, Alan Turing: Life and Legacy of a Great Thinker (2004), which is a collection of essays by various people, edited by Christof Teuscher.
 
In particular, there was an essay by Daniel C. Dennett, Can Machines Think?, originally published in another compilation, How We Know (ed. Michael G. Shafto, 1985, with permission from Harper Collins, New York). In the publication I have (Springer-Verlag Berlin Heidelberg, 2004), there are 2 postscripts by Dennett, from 1985 and 1987, largely in response to criticisms.
 
Dennett’s ideas on this are well known, but I have the advantage that so-called AI has improved in leaps and bounds in the last decade, let alone since the 1980s and 90s. So I’ve seen where it’s taken us to date. Therefore I can challenge Dennett based on what has actually happened. I’m not dismissive of Dennett, by any means – the man was a giant in philosophy, specifically in his chosen field of consciousness and free will, both by dint of his personality and his intellect.
 
There are 2 aspects to this, which Dennett takes some pains to address: how to define ‘thinking’; and whether the Turing Test is adequate to determine if a machine can ‘think’ based on that definition.
 
One of Dennett’s key points, if not THE key point, is just how difficult the Turing Test should be to pass, if it’s done properly, which he claims it often isn’t. This aligns with a point that I’ve often made, which is that the Turing Test is really for the human, not the machine. ChatGPT and LLM (large language models) have moved things on from when Dennett was discussing this, but a lot of what he argues is still relevant.
 
Dennett starts by providing the context and the motivation behind Turing’s eponymously named test. According to Dennett, Turing realised that arguments about whether a machine can ‘think’ or not would get bogged down (my term) leading to (in Dennett’s words): ‘sterile debate and haggling over definitions, a question, as [Turing] put it, “too meaningless to deserve discussion.”’
 
Turing provided an analogy, whereby a ‘judge’ would attempt to determine whether a dialogue they were having by teletype (so not visible or audible) was with a man or a woman, and then replace the woman with a machine. This may seem a bit anachronistic in today’s world, but it leads to a point that Dennett alludes to later in his discussion, which is to do with expertise.
 
Women often have expertise in fields that were considered out-of-bounds (for want of a better term) back in Turing’s day. I’ve spent a working lifetime with technical people who have expertise by definition, and my point is that if you were going to judge someone’s facility in their expertise, that can easily be determined, assuming the interlocutor has a commensurate level of expertise. In fact, this is exactly what happens in most job interviews. My point being that judging someone’s expertise is irrelevant to their gender, which is what makes Turing’s analogy anachronistic.
 
But it also has relevance to a point that Dennett makes much later in his essay, which is that most AI systems are ‘expert’ systems, and consequently, for the Turing test to be truly valid, the judge needs to ask questions that don’t require any expertise at all. And this is directly related to his ‘key point’ I referenced earlier.
 
I first came across the Turing Test in a book by Joseph Weizenbaum, Computer Power and Human Reason (1976), as part of my very first proper course in philosophy, called The History of Ideas (with Deakin University) in the late 90s. Dennett also cites it, because Weizenbaum created a crude version of the Turing Test, whether deliberately or not, called ELIZA, which purportedly responded to questions as a ‘psychologist-therapist’ (at least, that was my understanding): "ELIZA — A Computer Program for the Study of Natural Language Communication between Man and Machine," Communications of the Association for Computing Machinery 9 (1966): 36-45 (ref. Wikipedia).
 
Before writing Computer Power and Human Reason, Weizenbaum had garnered significant attention for creating the ELIZA program, an early milestone in conversational computing. His firsthand observation of people attributing human-like qualities to a simple program prompted him to reflect more deeply on society's readiness to entrust moral and ethical considerations to machines.
(Wikipedia)
 
What I remember, from reading Weizenbaum’s own account (I no longer have a copy of his book) was how he was astounded at the way people in his own workplace treated ELIZA as if it was a real person, to the extent that Weizenbaum’s secretary would apparently ‘ask him to leave the room’, not because she was embarrassed, but because the nature of the ‘conversation’ was so ‘personal’ and ‘confidential’.
 
I think it’s easy for us to be dismissive of someone’s gullibility, in an arrogant sort of way, but I have been conned on more than one occasion, so I’m not so judgemental. There are a couple of YouTube videos of ‘conversations’ with an AI called Sophia, developed by David Hanson (CEO of Hanson Robotics), which illustrate this point. One is a so-called ‘presentation’ of Sophia to be accepted as an ‘honorary human’, or some such nonsense (I’ve forgotten the details), and another by a journalist from Wired magazine, who quickly brought her unstuck. He got her to admit that one answer she gave was her ‘standard response’ when she didn’t know the answer. Which begs the question: how far have we come since Weizenbaum’s ELIZA in 1966? (Almost 60 years)
 
I said I would challenge Dennett, but so far I’ve only affirmed everything he said, albeit using my own examples. Where I have an issue with Dennett is at a more fundamental level, when we consider what we mean by ‘thinking’. You see, I’m not sure the Turing Test actually achieves what Turing set out to achieve, which is central to Dennett’s thesis.
 
If you read extracts from so-called ‘conversations’ with ChatGPT, you could easily get the impression that it passes the Turing Test. There are good examples on Quora, where you can get ChatGPT synopses to questions, and you wouldn’t know, largely due to their brevity and narrow-focused scope, that they weren’t human-generated. What many people don’t realise is that they don’t ‘think’ like us at all, because they are ‘developed’ on massive databases of input that no human could possibly digest. It’s the inherent difference between the sheer capacity of a computer’s memory-based ‘intelligence’ and a human one that not only determines what they can deliver, but the method behind the delivery. Because the computer is mining a massive amount of data, it has no need to ‘understand’ what it’s presenting, despite giving the impression that it does. All the meaning in its responses is projected onto it by its audience, exactly as was the case with ELIZA in 1966.
 
One of the technical limitations that Dennett kept referring to is what he called, in computer-speak, the combinatorial explosion, effectively meaning it was impossible for a computer to look at all combinations of potential outputs. This might still apply (I honestly don’t know) but I’m not sure it’s any longer relevant, given that the computer simply has access to a database that already contains the specific combinations that are likely to be needed. Dennett couldn’t have foreseen this improvement in computing power that has taken place in the 40 years since he wrote his essay.
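To give a feel for the scale Dennett had in mind (my own back-of-envelope figures, not his): even short replies drawn from a modest vocabulary multiply out to numbers no computer could ever enumerate, which is why modern systems rely on statistics over huge datasets rather than exhaustive search.

```python
# Combinatorial explosion in one line: the number of possible 20-word
# strings from a 50,000-word vocabulary is 50000**20, around 10^94 --
# more than the ~10^80 atoms in the observable universe.
vocabulary, reply_length = 50_000, 20
print(f"{vocabulary ** reply_length:.2e}")  # ~9.54e+93
```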
 
In his first postscript, in answer to a specific question, he says: Yes, I think that it’s possible to program self-consciousness into a computer. He says that it’s simply the ability 'to distinguish itself from the rest of the world'. I won’t go into his argument in detail, which might be a bit unfair, but I’ve addressed this in another post. Basically, there are lots of ‘machines’ that can do this by using a self-referencing algorithm, including your smartphone, which can tell you where you are by using satellites orbiting outside the Earth’s biosphere – who would have thought? But by using the term ‘self-conscious’, Dennett implies that the machine has ‘consciousness’, which is a whole other argument.
 
Dennett has a rather facile argument for consciousness in machines (in my view), but others can judge for themselves. His particular insight comes from using what he calls an ‘intuition pump’.
 
If you look at a computer – I don’t care whether it’s a giant Cray or a personal computer – if you open up the box and look inside and you see those chips, you say, “No way could that be conscious.” But the same thing is true if you take the top off somebody’s skull and look at the gray matter pulsing away in there. You think, “That is conscious? No way could that lump of stuff be conscious.” …At no level of inspection does a brain look like the seat of consciousness.
 

And that last sentence is key. The only reason anyone knows they are conscious is because they experience it, and it’s the peculiar, unique nature of that experience that no one else knows they are having it. We simply assume others are conscious, because they behave similarly to the way we behave when we have that experience. So far, in all our dealings and interactions with computers, no one makes the same assumption about them. To borrow Dennett’s own phrase, that’s my use of an ‘intuition pump’.
 
Getting back to the question at the heart of this, included in the title of this post: can machines think? My response is that, if they do, it’s a simulation.
 
I write science-fiction, which I prefer to call science-fantasy, if for no other reason than my characters can travel through space and time in a manner current physics tells us is impossible. But, like other sci-fi authors, it’s necessary if I want continuity of narrative across galactic scales of distance. Not really relevant to this discussion, but I want to highlight that I make no claim to authenticity in my sci-fi world - it’s literally a world of fiction.
 
Its relevance is that my stories contain AI entities who play key roles – in fact, they are characters in that world. There is one character in particular who has a relationship (for want of a better word) with my main protagonist (I always have more than one).
 
But here’s the thing, which is something I never considered until I wrote this post: my hero, Elvene, never once confuses her AI companion for a human. Even though this is a world of pure fiction, I’m effectively assuming that the Turing test will never be passed. I admit I’d never considered that before I wrote this essay.
 
This is an excerpt of dialogue I’ve posted previously, not from Elvene but from its sequel, Sylvia’s Mother (not published), incorporating the same AI character, Alfa. The thing is that they discuss whether Alfa is ‘alive’ or not, which I would argue is a prerequisite for consciousness. It’s no surprise that my own philosophical prejudices (diametrically opposed to Dennett’s in this instance) should find their way into my fiction.
 
To their surprise, Alfa interjected, ‘I’m not immortal, madam.’

‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’

‘Philosophically, that’s a moot point, madam.’

‘Philosophically? What do you mean?’

‘I’m not immortal, madam, because I’m not alive.’

Tao chipped in. ‘Doesn’t that depend on how you define life?’

‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’

‘You mean, like when we’re asleep.’

‘An analogy, I believe. I don’t sleep either.’

Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’

 

Saturday, 7 December 2024

Mathematics links epistemology to ontology, but it’s not that simple

A recurring theme on this blog is the relationship between mathematics and reality. It started with the Pythagoreans (in Western philosophy) and was famously elaborated upon by Plato. I also think it’s the key element of Kant’s a priori category in his marriage of rationalism and empiricism, though it’s rarely articulated that way.
 
I not-so-recently wrote a post about the tendency to reify mathematical objects into physical objects, and some may validly claim that I am guilty of that. In particular, I found a passage by Freeman Dyson, who warns specifically about doing that with Schrodinger’s wave function (Ψ, the Greek letter psi, pronounced ‘sy’). The point is that psi is one of the most fundamental concepts in QM (quantum mechanics), and is famous for the fact that it has never been observed, and specifically can’t be, even in principle. This is related to the equally famous ‘measurement problem’, whereby a quantum event becomes observable, and, I would say, becomes ‘classical’, as in classical physics. My argument is that this is because Ψ only exists in the future of whoever (or whatever) is going to observe it (or interact with it). By expressing it specifically in those terms (of an observer), it doesn’t contradict relativity theory, quantum entanglement notwithstanding (another topic).
 
Some argue, like Carlo Rovelli (who knows a lot more about this topic than me), that Schrodinger’s equation and the concept of a wave function has led QM astray, arguing that if we’d just stuck with Heisenberg’s matrices, there wouldn’t have been a problem. Schrodinger himself demonstrated that his wave function approach and Heisenberg’s matrix approach are mathematically equivalent. And this is why we have so many ‘interpretations’ of QM, because they can’t be mathematically delineated. It’s the same with Feynman’s QED and Schwinger’s QFT, which Dyson showed were mathematically equivalent, along with Tomonaga’s approach, which got them all a Nobel prize, except Dyson.
 
As I pointed out on another post, physics is really just mathematical models of reality, and some are more accurate and valid than others. In fact, some have turned out to be completely wrong and misleading, like Ptolemy’s Earth-centric model of the solar system. So Rovelli could be right about the wave function. Speaking of reifying mathematical entities into physical reality, I had an online discussion with Qld Uni physicist Mark John Fernee, who takes it a lot further than I do, claiming that 3-dimensional space (or 4-dimensional spacetime) is a mathematical abstraction. Yet I think there really are 3 dimensions of space, because the number of dimensions affects the physics in ways that would be catastrophic in another hypothetical universe (refer John Barrow’s The Constants of Nature). So it’s more than an abstraction. This was a key point of difference I had with Fernee (you can read about it here).
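To illustrate Barrow’s point numerically (a toy sketch of my own, not his): gravity in 3 dimensions of space falls off as 1/r², and orbits under that law are stable; in 4 spatial dimensions it would fall off as 1/r³, and a slightly perturbed circular orbit doesn’t survive.

```python
import math

def final_radius(power, steps=20_000, dt=1e-3):
    """Orbit under an attractive force |F| = 1/r**power, started on a
    unit circular orbit with a 1% velocity perturbation."""
    x, y = 1.0, 0.0
    vx, vy = 0.0, 1.01      # 1% faster than a circular orbit requires
    for _ in range(steps):  # semi-implicit Euler integration
        r = math.hypot(x, y)
        vx -= x / r ** (power + 1) * dt
        vy -= y / r ** (power + 1) * dt
        x += vx * dt
        y += vy * dt
    return math.hypot(x, y)

print(final_radius(2))  # inverse-square (3D space): stays near r = 1
print(final_radius(3))  # inverse-cube (4D space): escapes from r = 1
```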
 
All of this is really a preamble, because I think the most demonstrable and arguably most consequential example of the link between mathematics and reality is chaos theory, and it doesn’t involve reification. Having said that, this again led to a point of disagreement between myself and Fernee, but I’ll put that to one side for the moment, so as not to confuse you.
 
A lot of people don’t know that chaos theory started out as purely mathematical, largely due to one man, Henri Poincare. The thing about physical chaotic phenomena is that they are theoretically deterministic yet unpredictable, simply because the initial conditions of a specific event can’t be ‘physically’ determined. Now, some physicists will tell you that this is a physical limitation of our ability to ‘measure’ the initial conditions, and infer that if we could, it would be ‘problem solved’. Only it wouldn’t, because all chaotic phenomena have a ‘horizon’ beyond which it’s impossible to make accurate predictions, which is why weather predictions can’t go reliably beyond 10 days, while being very accurate over a few. Sabine Hossenfelder explains this very well.
 
But here’s the thing: it’s built into the mathematics of chaos. It’s impossible to calculate the initial conditions because you would need to do the calculation to infinite decimal places. Paul Davies gives an excellent description and demonstration in his book, The Cosmic Blueprint (this was my point of contention with Fernee, talking about coin-tosses).
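The classic way to see this is the logistic map in its chaotic regime (a standard textbook example, not Davies’s specific demonstration): two starting values that agree to nine decimal places become completely uncorrelated within a few dozen iterations, no matter how precise the computer.

```python
# Logistic map x -> r*x*(1-x) with r = 4 (fully chaotic regime).
r = 4.0
x, y = 0.4, 0.4 + 1e-9  # identical to nine decimal places

for n in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(n, abs(x - y))  # the gap grows until it is of order 1
```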
 
As I discussed on another post, infinity is a mathematical concept that appears to have little or no relevance to reality. Perhaps the Universe is infinite in space – it isn’t in time – but if it is, we might never know. Infinity avoids empirical confirmation almost by definition. But I think chaos theory is the exception that proves the rule. The reason we can’t determine the exact initial conditions of a chaotic event is not just physical but mathematical. As Fernee and others have pointed out, you can manipulate a coin-toss to make it totally predictable, but that just means you’ve turned a chaotic event into a non-chaotic event (after all, it’s a human-made phenomenon). But most chaotic events are natural, like the orbits of the planets and biological evolution. The creation of the Earth’s moon was almost certainly a chaotic event, without which complex life would almost certainly never have evolved, so chaotic events can be profoundly consequential as well as completely unpredictable.