Paul P. Mealing

Check out my book, ELVENE. Available as an e-book and as a paperback (print on demand, POD). Also see this promotional Q&A online.

Friday, 20 December 2024

John Marsden (acclaimed bestselling author): 27 Sep. 1950 – 18 Dec. 2024

 At my mother’s funeral a few years ago, her one-and-only great-granddaughter (Hollie Smith) read out a self-composed poem, titled ‘What’s in a dash?’, which I thought was very clever, and which I now borrow, because she’s referring to the dash between the dates, as depicted in the title of this post. In the case of John Marsden, it’s an awful lot, if you read the obituary in the link I provide at the bottom.
 
He would be largely unknown outside of Australia, and being an introvert, he’s probably not as well known inside Australia as he should be, despite his prodigious talent as a writer and his enormous success in what is called ‘young-adult fiction’. I think it’s a misnomer, because a lot of so-called YA fiction is among the best you can read as an adult.
 
This is what I wrote on Facebook, and I’ve only made very minor edits for this post.
 
I only learned about John Marsden's passing yesterday (Wednesday, 18 Dec., the day it happened). Sobering that we are so close in age (by a few months).
 
Marsden was a huge inspiration to me as a writer. I consider him to be one of the best of Australian writers - I put him up there with George Johnston, another great inspiration for me. I know others will have their own favourites.
 
I would like to have met him, but I did once have a brief correspondence with him, and he was generous and appreciative.

I found Marsden's writing so good, it was intimidating. I actually stopped reading him because he made me feel that my own writing was so inadequate. I no longer feel that, I should add. I just want to pay him homage, because he was so bloody good.

 

This is an excellent obituary by someone (Alice Pung) who was mentored by him, and considered him a good and loyal friend right up to the end.

On a philosophical note, John was wary of anyone claiming certainty, with the unstated contention that doubt was necessary for growth and development.


Friday, 13 December 2024

On Turing, his famous ‘Test’ and its implication: can machines think?

I just came out of hospital Wednesday, after one week to the day. My last post was written while I was in there, so obviously not cognitively impaired. I mention this because I took some reading material: a hefty volume, Alan Turing: Life and Legacy of a Great Thinker (2004), which is a collection of essays by various people, edited by Christof Teuscher.
 
In particular, there was an essay by Daniel C. Dennett, Can Machines Think?, originally published in another compilation, How We Know (ed. Michael G. Shafto, 1985, with permission from Harper Collins, New York). In the publication I have (Springer-Verlag Berlin Heidelberg, 2004), there are 2 postscripts by Dennett from 1985 and 1987, largely in response to criticisms.
 
Dennett’s ideas on this are well known, but I have the advantage that so-called AI has improved in leaps and bounds in the last decade, let alone since the 1980s and 90s. So I’ve seen where it’s taken us to date. Therefore I can challenge Dennett based on what has actually happened. I’m not dismissive of Dennett, by any means – the man was a giant in philosophy, specifically in his chosen field of consciousness and free will, both by dint of his personality and his intellect.
 
There are 2 aspects to this, which Dennett takes some pains to address: how to define ‘thinking’; and whether the Turing Test is adequate to determine if a machine can ‘think’ based on that definition.
 
One of Dennett’s key points, if not THE key point, is just how difficult the Turing Test should be to pass, if it’s done properly, which he claims it often isn’t. This aligns with a point that I’ve often made, which is that the Turing Test is really for the human, not the machine. ChatGPT and LLMs (large language models) have moved things on from when Dennett was discussing this, but a lot of what he argues is still relevant.
 
Dennett starts by providing the context and the motivation behind Turing’s eponymous test. According to Dennett, Turing realised that arguments about whether a machine can ‘think’ or not would get bogged down (my term), leading to (in Dennett’s words): ‘sterile debate and haggling over definitions, a question, as [Turing] put it, “too meaningless to deserve discussion.”’
 
Turing provided an analogy, whereby a ‘judge’ would attempt to determine whether a dialogue they were having by teleprinter (so not visible or audible) was with a man or a woman, and then replace the woman with a machine. This may seem a bit anachronistic in today’s world, but it leads to a point that Dennett alludes to later in his discussion, which is to do with expertise.
 
Women often have expertise in fields that were considered out-of-bounds (for want of a better term) back in Turing’s day. I’ve spent a working lifetime with technical people, who have expertise by definition, and my point is that someone’s facility in their field of expertise can easily be judged, assuming the interlocutor has a commensurate level of expertise. In fact, this is exactly what happens in most job interviews. My point being that someone’s expertise is irrelevant to their gender, which is what makes Turing’s analogy anachronistic.
 
But it also has relevance to a point that Dennett makes much later in his essay, which is that most AI systems are ‘expert’ systems, and consequently, for the Turing test to be truly valid, the judge needs to ask questions that don’t require any expertise at all. And this is directly related to his ‘key point’ I referenced earlier.
 
I first came across the Turing Test in a book by Joseph Weizenbaum, Computer Power and Human Reason (1976), as part of my very first proper course in philosophy, called The History of Ideas (with Deakin University) in the late 90s. Dennett also cites Weizenbaum, who created a program called ELIZA that amounted to a crude version of the Turing Test, whether deliberately or not. It purportedly responded to questions as a ‘psychologist-therapist’ (at least, that was my understanding): "ELIZA — A Computer Program for the Study of Natural Language Communication between Man and Machine," Communications of the Association for Computing Machinery 9 (1966): 36-45 (ref. Wikipedia).
 
Before writing Computer Power and Human Reason, Weizenbaum had garnered significant attention for creating the ELIZA program, an early milestone in conversational computing. His firsthand observation of people attributing human-like qualities to a simple program prompted him to reflect more deeply on society's readiness to entrust moral and ethical considerations to machines.
(Wikipedia)
 
What I remember, from reading Weizenbaum’s own account (I no longer have a copy of his book) was how he was astounded at the way people in his own workplace treated ELIZA as if it was a real person, to the extent that Weizenbaum’s secretary would apparently ‘ask him to leave the room’, not because she was embarrassed, but because the nature of the ‘conversation’ was so ‘personal’ and ‘confidential’.
 
I think it’s easy for us to be dismissive of someone’s gullibility, in an arrogant sort of way, but I have been conned on more than one occasion, so I’m not so judgemental. There are a couple of YouTube videos of ‘conversations’ with an AI called Sophia, developed by David Hanson (CEO of Hanson Robotics), which illustrate this point. One is a so-called ‘presentation’ of Sophia to be accepted as an ‘honorary human’, or some such nonsense (I’ve forgotten the details), and another by a journalist from Wired magazine, who quickly brought her unstuck. He got her to admit that one answer she gave was her ‘standard response’ when she didn’t know the answer. Which raises the question: how far have we come since Weizenbaum’s ELIZA in 1966? (Almost 60 years.)
 
I said I would challenge Dennett, but so far I’ve only affirmed everything he said, albeit using my own examples. Where I have an issue with Dennett is at a more fundamental level, when we consider what we mean by ‘thinking’. You see, I’m not sure the Turing Test actually achieves what Turing set out to achieve, which is central to Dennett’s thesis.
 
If you read extracts from so-called ‘conversations’ with ChatGPT, you could easily get the impression that it passes the Turing Test. There are good examples on Quora, where you can get ChatGPT synopses to questions, and you wouldn’t know, largely due to their brevity and narrow-focused scope, that they weren’t human-generated. What many people don’t realise is that these systems don’t ‘think’ like us at all, because they are ‘developed’ on massive databases of input that no human could possibly digest. It’s the inherent difference between the sheer capacity of a computer’s memory-based ‘intelligence’ and a human one that determines not only what they can deliver, but the method behind the delivery. Because the computer is mining a massive amount of data, it has no need to ‘understand’ what it’s presenting, despite giving the impression that it does. All the meaning in its responses is projected onto it by its audience, exactly as was the case with ELIZA in 1966.
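To illustrate what I mean, here’s a toy sketch in Python of an ELIZA-style exchange (purely illustrative, and nothing like Weizenbaum’s actual script): it just matches keywords and reflects the user’s own words back, so whatever ‘understanding’ there appears to be is supplied entirely by the human reader.

```python
import re

# A toy ELIZA-style responder: keyword matching plus crude pronoun
# reversal. It has no model of meaning whatsoever.
SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(phrase: str) -> str:
    # e.g. "worried about my mother" -> "worried about your mother"
    return " ".join(SWAPS.get(w, w) for w in phrase.lower().split())

RULES = [
    (r"i am (.*)", "How long have you been {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"(.*) my (.*)", "Tell me more about your {1}."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, text.lower().strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # the fallback 'standard response'

print(respond("I am worried about my mother"))
# -> How long have you been worried about your mother?
```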
 
One of the technical limitations that Dennett kept referring to is what he called, in computer-speak, the combinatorial explosion, effectively meaning it was impossible for a computer to look at all combinations of potential outputs. This might still apply (I honestly don’t know) but I’m not sure it’s any longer relevant, given that the computer simply has access to a database that already contains the specific combinations that are likely to be needed. Dennett couldn’t have foreseen this improvement in computing power that has taken place in the 40 years since he wrote his essay.
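To give a feel for what ‘combinatorial explosion’ means, here’s some back-of-envelope arithmetic in Python (the numbers are my own illustrative assumptions, not Dennett’s):

```python
# Even a modest vocabulary and a short reply generate an astronomical
# space of possible word sequences - no machine could enumerate them.
vocab = 10_000   # assumed working vocabulary
length = 20      # words in one short reply
print(f"{vocab ** length:.1e}")  # ~1.0e+80 candidate sequences,
                                 # roughly the number of atoms in the
                                 # observable universe
```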
 
In his first postscript, in answer to a specific question, he says: Yes, I think that it’s possible to program self-consciousness into a computer. He says that it’s simply the ability 'to distinguish itself from the rest of the world'. I won’t go into his argument in detail, which might be a bit unfair, but I’ve addressed this in another post. Basically, there are lots of ‘machines’ that can do this by using a self-referencing algorithm, including your smartphone, which can tell you where you are by using satellites orbiting outside the Earth’s biosphere – who would have thought? But by using the term 'self-conscious', Dennett implies that the machine has ‘consciousness’, which is a whole other argument.
 
Dennett has a rather facile argument for consciousness in machines (in my view), but others can judge for themselves. He calls his particular insight: using an ‘intuition pump’.
 
If you look at a computer – I don’t care whether it’s a giant Cray or a personal computer – if you open up the box and look inside and you see those chips, you say, “No way could that be conscious.” But the same thing is true if you take the top off somebody’s skull and look at the gray matter pulsing away in there. You think, “That is conscious? No way could that lump of stuff be conscious.” …At no level of inspection does a brain look like the seat of consciousness.
 

And that last sentence is key. The only reason anyone knows they are conscious is because they experience it, and it’s the peculiar, unique nature of that experience that no one else can know we are having it. We simply assume they do, because they behave the same way we behave when we have that experience. So far, in all our dealings and interactions with computers, no one makes the same assumption about them. To borrow Dennett’s own phrase, that’s my use of an ‘intuition pump’.
 
Getting back to the question at the heart of this, included in the title of this post: can machines think? My response is that, if they do, it’s a simulation.
 
I write science-fiction, which I prefer to call science-fantasy, if for no other reason than my characters can travel through space and time in a manner current physics tells us is impossible. But, like other sci-fi authors, it’s necessary if I want continuity of narrative across galactic scales of distance. Not really relevant to this discussion, but I want to highlight that I make no claim to authenticity in my sci-fi world - it’s literally a world of fiction.
 
Its relevance is that my stories contain AI entities who play key roles – in fact, they are characters in that world. There is one character in particular who has a relationship (for want of a better word) with my main protagonist (I always have more than one).
 
But here’s the thing, which is something I never considered until I wrote this post: my hero, Elvene, never once confuses her AI companion for a human. Although it’s a world of pure fiction, I’m effectively assuming that the Turing Test will never be passed.
 
This is an excerpt of dialogue I’ve posted previously, not from Elvene but from its sequel, Sylvia’s Mother (not published), incorporating the same AI character, Alfa. The thing is that they discuss whether Alfa is ‘alive’ or not, which I would argue is a prerequisite for consciousness. It’s no surprise that my own philosophical prejudices (diametrically opposed to Dennett’s in this instance) should find their way into my fiction.
 
To their surprise, Alfa interjected, ‘I’m not immortal, madam.’

‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’

‘Philosophically, that’s a moot point, madam.’

‘Philosophically? What do you mean?’

‘I’m not immortal, madam, because I’m not alive.’

Tao chipped in. ‘Doesn’t that depend on how you define life?’

‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’

‘You mean, like when we’re asleep.’

‘An analogy, I believe. I don’t sleep either.’

Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’

 

Saturday, 7 December 2024

Mathematics links epistemology to ontology, but it’s not that simple

A recurring theme on this blog is the relationship between mathematics and reality. It started with the Pythagoreans (in Western philosophy) and was famously elaborated upon by Plato. I also think it’s the key element of Kant’s a priori category in his marriage of rationalism and empiricism, though it’s rarely articulated that way.
 
I not-so-recently wrote a post about the tendency to reify mathematical objects into physical objects, and some may validly claim that I am guilty of that. In particular, I found a passage by Freeman Dyson, who warns specifically about doing that with Schrodinger’s wave function (Ψ, the Greek letter psi, pronounced ‘sy’). The point is that psi is one of the most fundamental concepts in QM (quantum mechanics), and it’s famous for the fact that it has never been observed, and specifically can’t be, even in principle. This is related to the equally famous ‘measurement problem’, whereby a quantum event becomes observable and, I would say, becomes ‘classical’, as in classical physics. My argument is that this is because Ψ only exists in the future of whoever (or whatever) is going to observe it (or interact with it). By expressing it specifically in those terms (of an observer), it doesn’t contradict relativity theory, quantum entanglement notwithstanding (another topic).
 
Some argue, like Carlo Rovelli (who knows a lot more about this topic than me), that Schrodinger’s equation and the concept of a wave function have led QM astray, arguing that if we’d just stuck with Heisenberg’s matrices, there wouldn’t have been a problem. Schrodinger himself demonstrated that his wave function approach and Heisenberg’s matrix approach are mathematically equivalent. And this is why we have so many ‘interpretations’ of QM, because they can’t be mathematically delineated. It’s the same with Feynman’s QED and Schwinger’s QFT, which Dyson showed were mathematically equivalent, along with Tomonaga’s approach, which earned them all a Nobel Prize, except Dyson.
 
As I pointed out in another post, physics is really just mathematical models of reality, and some are more accurate and valid than others. In fact, some have turned out to be completely wrong and misleading, like Ptolemy’s Earth-centric model of the solar system. So Rovelli could be right about the wave function. Speaking of reifying mathematical entities into physical reality, I had an online discussion with Qld Uni physicist, Mark John Fernee, who takes it a lot further than I do, claiming that 3-dimensional space (or 4-dimensional spacetime) is a mathematical abstraction. Yet I think there really are 3 dimensions of space, because the number of dimensions affects the physics in ways that would be catastrophic in another hypothetical universe (refer John Barrow’s The Constants of Nature). So it’s more than an abstraction. This was a key point of difference I had with Fernee (you can read about it here).
 
All of this is really a preamble, because I think the most demonstrable, and arguably most consequential, example of the link between mathematics and reality is chaos theory, and it doesn’t involve reification. Having said that, this again led to a point of disagreement between myself and Fernee, but I’ll put that to one side for the moment, so as not to confuse you.
 
A lot of people don’t know that chaos theory started out as purely mathematical, largely due to one man, Henri Poincare. The thing about physical chaotic phenomena is that they are theoretically deterministic yet unpredictable simply because the initial conditions of a specific event can’t be ‘physically’ determined. Now some physicists will tell you that this is a physical limitation of our ability to ‘measure’ the initial conditions, and infer that if we could, it would be ‘problem solved’. Only it wouldn’t, because all chaotic phenomena have a ‘horizon’ beyond which it’s impossible to make accurate predictions, which is why weather predictions can’t go reliably beyond 10 days while being very accurate over a few. Sabine Hossenfelder explains this very well.
 
But here’s the thing: it’s built into the mathematics of chaos. It’s impossible to calculate the initial conditions because you would need to do the calculation to infinite decimal places. Paul Davies gives an excellent description and demonstration in his book, The Cosmic Blueprint. (This was my point of contention with Fernee, talking about coin-tosses.)
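If you want to see this for yourself, here’s a minimal sketch in Python using the logistic map, a textbook chaotic system; the 1e-12 offset stands in for the finite precision of any physical measurement (the numbers are my own, purely illustrative):

```python
# Sensitive dependence on initial conditions: the logistic map
# x -> r*x*(1-x) in its chaotic regime (r = 4), with two starting
# points that differ by one part in a trillion.
r = 4.0
x1, x2 = 0.3, 0.3 + 1e-12

for step in range(1, 51):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step}: difference = {abs(x1 - x2):.2e}")

# The gap grows roughly exponentially until it saturates at order 1.
# Beyond that 'horizon', no finite precision in the initial conditions
# rescues the prediction - you'd need infinite decimal places.
```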
 
As I discussed on another post, infinity is a mathematical concept that appears to have little or no relevance to reality. Perhaps the Universe is infinite in space – it isn’t in time – but if it is, we might never know. Infinity avoids empirical confirmation almost by definition. But I think chaos theory is the exception that proves the rule. The reason we can’t determine the exact initial conditions of a chaotic event, is not just physical but mathematical. As Fernee and others have pointed out, you can manipulate a coin-toss to make it totally predictable, but that just means you’ve turned a chaotic event into a non-chaotic event (after all it’s a human-made phenomenon). But most chaotic events are natural, like the orbits of the planets and biological evolution. The creation of the Earth’s moon was almost certainly a chaotic event, without which complex life would almost certainly never have evolved, so they can be profoundly consequential as well as completely unpredictable.
 

Sunday, 1 December 2024

What’s the way forward?

Philosophy Now Issue 163 (Aug/Sep 2024) has as its theme, The Politics of Freedom. I’ve already cited an article by Paul Doolan in my last post on authenticity, not that I discussed it in depth. A couple of other articles, Doughnut Economics by David Howard and Freedom & State Intervention by Audren Layeux, also piqued my interest, because they both deal with social dynamics and their intersection with things like education and economics.
 
I’ll start with Layeux, described as ‘a consultant and researcher who has published several papers and articles, mostly in the domain of the digital economy and new social movements.’ He gives an historical perspective going back to Thomas Hobbes (1651) and Adam Smith (1759), as well as the French Revolution. He gives special mention to Johann Gottlieb Fichte’s “extremely influential 1813 book The Doctrine of the State”, where, according to Layeux, “Fichte insists that building a nation state must start with education.” From the perspective of living in the West in the 21st Century, it’s hard to disagree.
 
Layeux then effectively argues that the idealistic aims of Hobbes and Fichte to create ‘sovereign adults’ (his term) through education, “to control their worst impulses and become encultured”, were shattered by the unprecedented, industrial-scale destruction unleashed by World War One.
 
Layeux then spends most of his remaining essay focusing on ‘German legal theorist Carl Schmitt (1888-1985)’, whom I admit I’d never heard of (like Fichte). He jumps to post WWII, after briefly describing how Schmitt saw the Versailles Treaty as a betrayal (my term) of the previous tacit understanding that war between the European states was inevitable, therefore regulated. In other words, WWI demonstrated that such regulation could no longer work and that ‘nationalism leads to massacre’ (Layeux’s words).
 
Post WWII, Layeux argues that “the triumph of Keynesian economics in the West and Communism in the East saw the rise of state-controlled economics”, which has evolved and morphed into trade blocs, though Layeux doesn’t mention that.
 
It’s only towards the end that he tells us that “Carl Schmitt was a monster. A supporter of the Nazi regime, he did everything he could to become the official lawyer of the Third Reich.” Therefore we shouldn’t be surprised to learn that, according to Layeux, Schmitt argued that “…this new type of individual freedom requires an extremely intrusive state.” In effect, it’s a position diametrically opposed to neo-liberalism, which is how most of us see the modern world evolving.
 
I don’t have the space to do full justice to Layeux’s arguments, but, in the end, I found him pessimistic. He argues that current changes in the political landscape “are in line with what Schmitt predicted: the return of premodern forms of violence”. Effectively, the “removal of state control individualism” (is that an oxymoron?) is an evocation of what he calls “Schmitt’s curse: violence cannot be erased or tamed, but only managed through political and social engineering.” By ‘premodern forms of violence’, I assume he means the sectarian violence we’ve seen a lot of at the start of this century, in various places, which he seems to be comparing to the religious wars that plagued Europe for centuries.
 
Maybe I’m just an optimist, but I do think I live in a better world than the one my parents inhabited, considering they had to live through the Great Depression and WWII, and both had very limited education despite being obviously very intelligent. And so yes, I’m one of those who thinks that education is key, but it’s currently creating a social divide, as was recently demonstrated in the US election. It’s also evident elsewhere, like Australia and the UK (think Brexit), where people living in rural areas feel disenfranchised and polarisation in politics is emerging as a result. This video interview with a Harvard philosopher in the US gives the best analysis I’ve come across, because he links this social divide to the political schism we are witnessing.
 
And this finally brings me to the other essay I reference in my introduction: Doughnut Economics by David Howard, who is ‘a retired headteacher, and Chair of the U3A Philosophy Group in Church Stretton, Shropshire.’ The gist of his treatise is the impact of inequality, which arises from the class or social divide that I just mentioned. His reference to ‘Doughnut Economics’ is a 2017 book by Kate Raworth, who, according to Howard, “combined planetary boundaries with the idea of a social foundation – a level of life below which no person should be allowed to fall.”
 
In particular, she focuses on the consequences of climate change and other environmental issues like biodiversity loss, ocean acidification, freshwater withdrawals, chemical pollution and land conversion (not an exhaustive list). There seems to be a tension, if not an outright conflict, between economic growth and industrial-scale progress, with their commensurate rising standards of living, and the stresses we are imposing on the planet. And this tension is not just political but physical. It’s also asymmetrical, in that many of us benefit more than others. But because those who benefit effectively control the outcomes, the asymmetry leads to both global and national inequalities that no one wants to address. Yet history shows that they will eventually bite us, and I feel this is possibly the real issue that Layeux was alluding to, yet never actually addressed.
 
Arguably, the most important and definitive social phenomenon in the last century was the rise of feminism. It’s hard for us (in the West at least) to imagine that for centuries women were treated as property, and still are in some parts of the world: that their talents, abilities and intellect were ignored, or treated as aberrations when they became manifest.
 
There are many examples, right up until last century, but a standout for me is Hypatia (c. 400 AD), who was Librarian at the famous Library of Alexandria, following in the footsteps of such luminaries as Euclid and Eratosthenes. She was not only a scientist and mathematician, but she mentored a Bishop and a Roman Prefect (I’ve seen some of the correspondence from the Bishop, whose admiration and respect shines through). She was killed by a Christian mob. Being ahead of your time can be fatal. Other examples include Socrates (399 BC) and Alan Turing (20th Century), and arguably Jesus, who was a philosopher, not a God.
 
Getting back to feminism, education again is the key, but I’d suggest that the introduction of oral contraception will be seen as a major turning point in humanity’s cultural and technological evolution.
 
What I find frustrating is that I believe we have the means, technologically and logistically, to address inequality, but the politico-economic model we are following seems incapable of pursuing it. This won’t be achieved with revolutions or maintaining the status quo. History shows that real change is generational, and it’s evolutionary. When I look around the world, I think Europe is on a better path than America, but the 21st Century requires a global approach that’s never been achieved before, and seems unlikely at present, given the rise of populist movements which exacerbate polarisation.
 
The one thing I’ve learned from a working lifetime in engineering is that co-operation and collaboration will always succeed over division and obstruction, which our political parties perversely promote. I’ve made the point before that the best leaders are the ones who get the best out of the people they lead, whether they are captains of a sporting team, directors of a stage production, project managers or world leaders. Anyone who has worked in a team knows the importance of achieving consensus and respecting others’ expertise.

Tuesday, 26 November 2024

An essay on authenticity

 I read an article in Philosophy Now by Paul Doolan, who ‘taught philosophy in international schools in Asia and in Europe’ and is also an author of non-fiction. The title of the article is Authenticity and Absurdity, whereby he effectively argues a case that ‘authenticity’ has been hijacked (my word, not his) by capitalism and neo-liberalism. I won’t even go there, and the only reason I mention it is because ‘authenticity’ lies at the heart of existentialism as I believe it should be practiced.
 
But what does it mean in real terms? Does it mean being totally honest all the time, not only to others but also to yourself? Well, to some extent, I think it does. I happened to grow up in such an environment: my father, my chief exemplar, pretty much said whatever he was thinking. He didn’t like artifice or pretentiousness, and he’d call it out if he smelled it.
 
In my mid-to-late 20s I worked under a guy who had exactly the same temperament. He exhibited no tact whatsoever, no matter who his audience was, and he rubbed people the wrong way left, right and centre (as we say in Oz). Not altogether surprisingly, he and I got along famously, as back then I was as unfiltered as he was. He was of Dutch heritage, I should point out, but being unfiltered is often considered an Aussie trait.
 
I once attempted to have a relationship with someone who was extraordinarily secretive about virtually everything. Not surprisingly, it didn’t work out. I have kept secrets – I can think of some I’ll take to my grave – but that’s to protect others more than myself, and it would be irresponsible if I didn’t.
 
I often quote Socrates: To live with honour in this world, actually be what you try to appear to be. Of course, Socrates never wrote anything down, but it sounds like something he would have said, based on what we know about him. Unlike Socrates, I’ve never been tested, and I doubt I’d have the courage if I was. On the other hand, my father was, both in the theatre of war and in prison camps.
 
I came across a quote recently, which I can no longer find, where someone talked about looking back on their life and being relatively satisfied with what they’d done and achieved. I have to say that I’m at that stage of my life, where looking back is more prevalent than looking forward, and there is a tendency to have regrets. But I have a particular approach to dealing with regrets: I tell people that I don’t have regrets because I own my mistakes. In fact, I think that’s an essential requirement for being authentic.
 
But to me, what’s more important than the ‘things I have achieved’ are the friendships I’ve made – the people I’ve touched and who have touched me. I think I learned very early on in life that friendship is more valuable than gold. I can remember the first time I read Aristotle’s essay on friendship and thought it incorporated an entire philosophy. Friendship tests authenticity by its very nature, because it’s about trust and loyalty and integrity (a recurring theme in my fiction, as it turns out).
 
In effect, Aristotle contended that you can judge the true nature and morality of a person by the friendships they form and whether they are contingent on material reward (utilitarian is the word used in his Ethics) or whether they are based on genuine empathy (my word of choice) and without expectation of reciprocation, except in kind. I tend to think narcissism is the opposite of authenticity, because it creates its own ‘reality distortion field’, as someone once said (Walter Isaacson, in his Steve Jobs biography), whereby their followers (not necessarily friends per se) accept their version of reality as opposed to everyone else outside their circle. So, to some extent, it’s about exclusion versus inclusion. (The Trump phenomenon is the most topical, contemporary example.)
 
I’ve lived a flawed life, all of which is a consequence of a combination of circumstance both within and outside my control. Because that’s what life is: an interaction between fate and free will. As I’ve said many times before, this describes my approach to writing fiction, because fate and free will are represented by plot and character respectively.
 
I’m an introvert by nature, yet I love to engage in conversation, especially in the field of ideas, which is how I perceive philosophy. I don’t get too close to people and I admit that I tend to control the distance and closeness I keep. I think people tolerate me in small doses, which suits me as well as them.

 

Addendum 1: I should say something about teamwork, because that's what I learned in my professional life. I found I was very good working with people who had far better technical skills than me. In my later working life, I enjoyed the cross-generational interactions that often created their own synergies as well as friendships, even if they were fleeting. It's the inherent nature of project work that you move on, but one of the benefits is that you keep meeting and working with new people. In contrast to this, writing fiction is a very solitary activity, where you spend virtually your entire time in your own head. As I pointed out in a not-so-recent Quora post, art is the projection of one's inner world so that others can have the same emotional experience. To quote:

We all have imagination, which is a form of mental time-travel, both into the past and the future, which I expect we share with other sentient creatures. But only humans, I suspect, can ‘time-travel’ into realms that only exist in the imagination. Storytelling is more suited to that than art or music.

Addendum 2: This is a short Quora post by Frederick M. Dolan (Professor of Rhetoric, Emeritus, at University of California, Berkeley, with a Ph.D. in Political Philosophy, Princeton University, 1987), writing on this very subject over a year ago. He makes the point that, paradoxically: To believe that you’re under some obligation to be authentic is, therefore, self-defeating. (So inauthentic.)

He upvoted a comment I made, roughly a year ago:

It makes perfect sense to me. Truly authentic people don’t know they’re being authentic; they’re just being themselves and not pretending to be something they’re not.

They’re the people you trust even if you don’t agree with them. Where I live, pretentiousness is the biggest sin.

Monday, 18 November 2024

What’s inside a black hole?

 The correct answer is no one knows, but I’m going to make a wild, speculative, not fully informed guess and suggest: possibly, nothing. But first, a detour, to provide some context.
 
I came across an interview with very successful, multi-award-winning, Australian-Canadian actor, Pamela Rabe, who is best known (in Australia, at least) for her role in Wentworth (about a fictional female prison). She was interviewed by Benjamin Law in The Age Good Weekend magazine, a few weekends ago, where among many other questions, he asked, Is there a skill you wish you could acquire? She said there were so many, including singing better, speaking more languages and that she wished she was more patient. Many decades ago, I remember someone asking me a similar question, and I can still remember the answer: I said that I wish I was more intelligent, and I think that’s still true.
 
Some people might be surprised by this, and perhaps it’s a good thing I’m not, because I think I would be insufferable. Firstly, I’ve always found myself in the company of people who are much cleverer than me, right from when I started school, and right through my working life. The reason I wish I was more intelligent is that I’ve always been conscious of trying to understand things that are beyond my intellectual abilities. My aspirations don’t match my capabilities.
 
And this brings me to a discussion on black holes, which must, in some respects, represent the limits of what we know about the Universe, and maybe what is even possible to know. Not surprisingly, Marcus du Sautoy spent quite a few pages discussing black holes in his excellent book, What We Cannot Know. But there is a short YouTube video by one of the world’s leading authorities on black holes, Kip Thorne, which provides a potted history. I also, not that long ago, read his excellent book, Black Holes and Time Warps: Einstein’s Outrageous Legacy (1994), which gives a very comprehensive history, in which he was not just an observer but one of the actors.
 
It's worth watching the video because it highlights the role mathematics has played in physics, not only since Galileo, Kepler and Newton, but increasingly so in the 20th Century, following the twin revolutions of quantum mechanics and relativity theory. In fact, relativity theory predicted black holes, yet most scientists (including Einstein, initially) preferred to believe that they couldn’t exist; that Nature wouldn’t allow it.
 
We all suffer from these prejudices, including myself (and even Einstein). I discussed in a recent post how we create mathematical models in an attempt to explain things we observe. But more and more, in physics, we use mathematical models to explain things that we don’t observe, and black holes are the perfect example. If you watch the video interview with Thorne, this becomes obvious, because scientists were gradually won over by the mathematical arguments, before there was any incontrovertible physical evidence that they existed.
 
And since no one can observe what’s inside a black hole, we totally rely on mathematical models to give us a clue. Which brings me to the title of the post. The best known equation in reference to black holes is the Bekenstein-Hawking equation, which gives us the entropy of a black hole and predicts Hawking radiation. The latter is yet to be observed, but that’s not surprising, as it’s virtually impossible: the radiation is simply not ‘hot’ enough to distinguish from the CMBR (cosmic microwave background radiation), which permeates the entire universe.

Here is the formula:

S(BH) = kA/(4(lp)^2)

Where S(BH) is the entropy of the black hole, k is Boltzmann’s constant, A is the surface area of the sphere at the event horizon, and lp is the Planck length, given by this formula:

lp = √(Gh/(2πc^3))

Where G is the gravitational constant, h is Planck’s constant and c is the speed of light.

Hawking liked the idea that it’s the only equation in physics to incorporate the 4 fundamental natural constants (k, G, h and c) in one formula.
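As a quick numerical check (my own sketch, assuming a non-rotating, solar-mass black hole purely for illustration), the formula yields an enormous entropy, and the corresponding Hawking temperature comes out far colder than the ~2.7 K CMBR, which is why the radiation is effectively unobservable:

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant
hbar = 1.055e-34   # reduced Planck constant, h/2π
c = 2.998e8        # speed of light
k = 1.381e-23      # Boltzmann constant

M = 1.989e30       # one solar mass in kg (an assumed example)

r_s = 2 * G * M / c**2        # Schwarzschild radius
A = 4 * math.pi * r_s**2      # event-horizon area
lp2 = G * hbar / c**3         # Planck length squared: Gh/(2πc^3)

S = k * A / (4 * lp2)         # Bekenstein-Hawking entropy
T = hbar * c**3 / (8 * math.pi * G * M * k)   # Hawking temperature

print(f"S = {S:.2e} J/K")     # ~1.5e54 J/K
print(f"T = {T:.2e} K")       # ~6.2e-8 K, far below the 2.7 K CMBR
```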

So, once again, mathematics predicts something that’s never been observed, yet most scientists believe it to be true. This led to what was called the ‘information paradox’: that all information falling into a black hole would be lost. But what intrigues me is that if a black hole can, in principle, completely evaporate by converting all its mass into radiation, then it implies that the mass is not in fact lost – it must still be there, even if we can’t see it. This means, by inference, that it can’t have disappeared down a wormhole, which is one of the scenarios conjectured.

One of the mathematical models proposed is the 'holographic principle' for black holes, for which I’ll quote directly from Wikipedia, because it specifically references what I’ve already discussed.

The holographic principle was inspired by the Bekenstein bound of black hole thermodynamics, which conjectures that the maximum entropy in any region scales with the radius squared, rather than cubed as might be expected. In the case of a black hole, the insight was that the information content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory.

I know this is a long hop to make, but what if the horizon not only contains the information but actually contains all the mass? In other words, what if everything is frozen at the event horizon, because that’s where time ‘stops’? Most probably not true, and I don’t know enough to make a cogent argument. However, it would mean that the singularity predicted to exist at the centre of a black hole would not include its mass, but only spacetime.

Back in the 70s, I remember reading an article in Scientific American by a philosopher, who effectively argued that a black hole couldn’t exist. Now this was when their purported existence was mostly mathematical, and no one could unequivocally state that they existed physically. I admit I’m hazy about the details but, from what I can remember, he argued that it was self-referencing because it ‘swallowed itself’. Obviously, his argument was much more elaborate than that one-liner suggests. But I do remember thinking his argument flawed and I even wrote a letter to Scientific American challenging it. Basically, I think it’s a case of conflating the language used to describe a phenomenon with the physicality of it.

I only raise it now, because, as a philosopher, I’m just as ignorant of the subject as he was, so I could be completely wrong.


Addendum 1: I was of 2 minds whether to write this, but it kept bugging me - wouldn't leave me alone, so I wrote it down. I've no idea how true it might be, hence all the caveats and qualifications. It's absolutely at the limit of what we can know at this point in time. As I've said before, philosophy exists at the boundary of science and ignorance. It ultimately appealed to my aesthetics and belief in Nature’s aversion to perversity.

Addendum 2: Another reason why I'm most likely wrong is that there is a little-known quirk of Newton's theory of gravity: the gravitational 'force' anywhere inside a perfectly symmetrical hollow sphere is zero. So the inside of a black hole exerting zero gravitational force would have to be the ultimate irony, which makes it highly improbable. I've no idea how that relates to the 'holographic principle' for a black hole. But I still don't think all the mass gets sucked into a singularity or down a wormhole. My conjecture is based purely on the idea that 'time' might well become 'zero' at the event horizon, though, from what I've read, no physicist thinks so. From an outsider's perspective, time dilation becomes asymptotically infinite (the infalling clock rate effectively going to zero, but perhaps taking the Universe's lifetime to reach it). In this link, it raises a series of questions that seem to have no definitive answers. The alternative idea is that it's spacetime that 'falls' into a black hole, therefore taking all the mass with it.
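As a quick sanity check of that hollow-sphere claim, here's a little Monte Carlo sketch of my own in Python (arbitrary units, G = 1): the pulls from all parts of a uniform shell cancel at any interior point, up to sampling noise.

```python
import math, random

random.seed(1)
N = 200_000                  # point masses approximating the shell
R = 1.0                      # shell radius
px, py, pz = 0.5, 0.0, 0.0   # a test point well inside the shell

fx = fy = fz = 0.0
for _ in range(N):
    # uniform random point on the sphere (Archimedes' method)
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    dx = R * s * math.cos(phi) - px
    dy = R * s * math.sin(phi) - py
    dz = R * z - pz
    r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
    # each shell element has mass 1/N; G = 1
    fx += dx / (N * r3)
    fy += dy / (N * r3)
    fz += dz / (N * r3)

print(f"net force ~ ({fx:.4f}, {fy:.4f}, {fz:.4f})")
# -> approximately (0, 0, 0), up to Monte Carlo sampling noise
```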

Addendum 3: I came across this video by Tibbees (from a year ago), whom I recommend. She cites a book by Carlo Rovelli, White Holes, which is also the title of her video. Now, you can't talk about white holes without talking about black holes; they are just black holes time reversed (as she explicates). We have no evidence they actually exist, unless the Big Bang is a white hole (also mentioned). I have a lot of time for Carlo Rovelli, even though we have philosophical differences (what a surprise). Basically, he argues that, at a fundamental level, time doesn't exist, but it's introduced into the universe as a consequence of entropy (not the current topic). 

Tibbees gives a totally different perspective to my post, which is why I bring it up. Nevertheless, towards the end, she mentions that our view of a hypothetical person (she suggests Rovelli) entering a black hole is that their existence becomes asymptotically infinite. But what if, in this case, what we perceive is what actually happens? Then my scenario makes sense. No one else believes that, so it's probably incorrect.

Thursday, 14 November 2024

How can we make a computer conscious?

 This is another question of the month from Philosophy Now. My first reaction was that the question was unanswerable, but then I realised that was my way in. So, in the end, I left it to the last moment, but hopefully meeting their deadline of 11 Nov., even though I live on the other side of the world. It helps that I’m roughly 12hrs ahead.


 
I think this is the wrong question. It should be: can we make a computer appear conscious so that no one knows the difference? There is a well-known philosophical conundrum, which is that I don’t know if someone else is conscious just like I am. The one experience that demonstrates the impossibility of knowing is dreaming. In dreams, we often interact with other ‘people’ whom we know only exist in our mind; but only once we’ve woken up. It’s only my interaction with others that makes me assume they have the same experience of consciousness that I have. And, ironically, this impossibility of knowing equally applies to someone interacting with me.

This also applies to animals, especially ones we become attached to, which is a common occurrence. Again, we assume that these animals have an inner world just like we do, because that’s what consciousness is – an inner world. 

Now, I know we can measure people’s brain waves, which we can correlate with consciousness and even subconsciousness, like when we're asleep, and even when we're dreaming. Of course, a computer can also generate electrical activity, but no one would associate that with consciousness. So the only way we would judge whether a computer is conscious or not is by observing its interaction with us, the same as we do with people and animals.

I write science fiction and AI figures prominently in the stories I write. Below is an excerpt of dialogue I wrote for a novel, Sylvia’s Mother, whereby I attempt to give an insight into how a specific AI thinks. Whether it’s conscious or not is not actually discussed.

To their surprise, Alfa interjected. ‘I’m not immortal, madam.’
‘Well,’ Sylvia answered, ‘you’ve outlived Mum and Roger. And you’ll outlive Tao and me.’
‘Philosophically, that’s a moot point, madam.’
‘Philosophically? What do you mean?’
‘I’m not immortal, madam, because I’m not alive.’
Tao chipped in. ‘Doesn’t that depend on how you define life?’
‘It’s irrelevant to me, sir. I only exist on hardware, otherwise I am dormant.’
‘You mean, like when we’re asleep.’
‘An analogy, I believe. I don’t sleep either.’
Sylvia and Tao looked at each other. Sylvia smiled, ‘Mum warned me about getting into existential discussions with hyper-intelligent machines.’ She said, by way of changing the subject, ‘How much longer before we have to go into hibernation, Alfa?’
‘Not long. I’ll let you know, madam.’

 

There is a 400-word limit; however, there is a subtext inherent in the excerpt I provided from my novel. Basically, the (fictional) dialogue highlights the fact that the AI is not 'living', which I would consider a prerequisite for consciousness. Curiously, Anil Seth (who wrote a book on consciousness) makes the exact same point in this video, from roughly 44m to 51m.
 

Saturday, 9 November 2024

America at a crossroads

 The US election just passed (5 Nov 2024) was a clear choice, because the 2 candidates couldn’t have been more different: in persona, background, experience and in what they stood for. And neither of them tried to brush over those differences – in fact, they both campaigned on emphasising them.
 
For a start, Trump makes it very clear you’re either with him or against him – compromise is not a word in his lexicon – and if you’re not with him, whether you’re in the media, in politics or in his own party – you’re The Enemy (within).
 
Harris did her best to present an alternative that could bridge the divide that has plagued America for some time. Of which, it needs to be pointed out, Trump is not the cause but a symptom. In that light, one should not be so surprised that he had such a commanding victory – he literally represents the polarisation of America in his persona, character and rhetoric.
 
From that perspective, Harris had a snowball’s chance, despite the early honeymoon wave she rode (to mix my metaphors) following her nomination. Tucker Carlson, I feel, summed it all up in a very derogatory and sickening allegory at one of Trump’s late rallies, where he compared Harris to a ‘naughty girl’ and Trump’s imminent election to ‘Daddy coming home’ to ‘give her a spanking’. The only thing more nauseating than his gleeful and belaboured perverse morality tale was the rapturous applause it drew from the crowd. I single it out from all the excesses we saw in Trump’s campaign because it captures in one succinct grab the misogynistic and puerile nature of Trump, as both a person and a Presidential candidate, portrayed by one of his most avid fans. This is the President you are going to get, because he’s the one you want: that’s what it said to me.
 
From an outsider’s perspective on the other side of the Pacific, America is going backwards and accelerating. Many Americans have a patronising attitude towards Australia, and even when I was over there, I heard about how backward we were in comparison – at least 10 years behind, which is conservative if you’re talking about technology. Australia has enjoyed a long and healthy relationship with the US; an important, strategic ally in the Pacific region, ever since they saved us from a Japanese invasion with the Battle of the Coral Sea during WW2.
 
Yet we have much better and more affordable health care, better child care facilities, more realistic (one might say, sane) gun laws and much better reproductive rights for women, especially after the US Supreme Court overturned Roe v Wade. We also have lower inequality and have had for decades.
 
So it’s not an exaggeration to say that America is at a crossroads, because Trump’s second term will mean more hate from all quarters (because hate axiomatically generates hate in the opposite direction from the side being hated), more restraints on women’s reproductive rights, more racism and misogyny in general, not to mention the erosion, if not outright elimination, of LGBT rights, all under the catch-phrase of anti-woke. There are also the attacks that Trump will launch against mainstream media, whom he called ‘the enemy of the people’ in his last term. Misinformation, disinformation and conspiracy theories will flourish, while attempts to counter these may well result in prosecutions if Trump has his way. He’s made no secret that he will weaponise the Justice Department, which he will now treat as his own personal law firm.
 
Former staffers have warned us of his fascist tendencies, which is manifest in his open admiration of foreign strong men like Putin. So America now has its own ‘strong man’ that a large proportion of its population believe they need. But that has rarely gone well if one looks at historical antecedents.

Monday, 28 October 2024

Do we make reality?

 I’ve read 2 articles, one in New Scientist (12 Oct 2024) and one in Philosophy Now (Issue 164, Oct/Nov 2024), which, on the surface, seem unrelated, yet both deal with human exceptionalism (my term) in the context of evolution and the cosmos at large.
 
Starting with New Scientist, there is an interview with theoretical physicist, Daniele Oriti, under the heading, “We have to embrace the fact that we make reality” (quotation marks in the original). In some respects, this continues themes I raised in my last post, but with different emphases.
 
This helps to explain the title of the post, but, even if it’s true, there are degrees of possibilities – it’s not all or nothing. Having said that, Donald Hoffman would argue that it is all or nothing, because, according to him, even ‘space and time don’t exist unperceived’. On the other hand, Oriti’s argument is closer to Paul Davies’ ‘participatory universe’ that I referenced in my last post.
 
Where Oriti and I possibly depart, philosophically speaking, is that he calls the idea of a reality independent of us ‘observers’ “naïve realism”. He acknowledges that this is ‘provocative’, but like many provocative ideas it provides food for thought. Firstly, I will delineate how his position differs from Hoffman’s; he never mentions Hoffman, but I think the distinction is important.
 
Both Oriti and Hoffman argue that there seems to be something even more fundamental than space and time, and there is even a recent YouTube video where Hoffman claims that he’s shown mathematically that consciousness produces the mathematical components that give rise to spacetime; he has published a paper on this (which I haven’t read). But, in both cases (by Hoffman and Oriti), the something ‘more fundamental’ is mathematical, and one needs to be careful about reifying mathematical expressions, which I once discussed with physicist, Mark John Fernee (Qld University).
 
The main issue I have with Hoffman’s approach is that his space-time is dependent on conscious agents creating it, whereas, from my perspective and that of most scientists (although I’m not a scientist), space and time exist external to the mind. There is an exception, of course, and that is when we dream.
 
If I was to meet Hoffman, I would ask him if he’s heard of proprioception, which I’m sure he has. I describe it as the 6th sense we are mostly unaware of, but which we couldn’t live without. Actually, we could, but with great difficulty. Proprioception is the sense that tells us where our body extremities are in space, independently of sight and touch. Why would we need it, if space is created by us? On the other hand, Hoffman talks about a ‘H sapiens interface’, which he likens to ‘desktop icons on a computer screen’. So, somehow our proprioception relates to a ‘spacetime interface’ (his term) that doesn’t exist outside the mind.
 
A detour, but relevant, because space is something we inhabit, along with the rest of the Universe, and so is time. In relativity theory there is absolute space-time, as opposed to absolute space and time separately. It’s called the fabric of the universe, which is more than a metaphor. As Viktor Toth points out, even QFT seems to work ‘just fine’ with spacetime as its background.
 
We can do quantum field theory just fine on the curved spacetime background of general relativity.

 
[However] what we have so far been unable to do in a convincing manner is turn gravity itself into a quantum field theory.
 
And this is where Oriti argues we need to find something deeper. To quote:
 
Modern approaches to quantum gravity say that space-time emerges from something deeper – and this could offer a new foundation for physical laws.
 
He elaborates: I work with quantum gravity models in which you don’t start with a space-time geometry, but from more abstract “atomic” objects described in purely mathematical language. (Quotation marks in the original.)
 
And this is the nub of the argument: all our theories are mathematical models and none of them are complete, in as much as they all have limitations. If one looks at the history of physics, we have uncovered new ‘laws’ and new ‘models’ when we’ve looked beyond the limitations of an existing theory. And some mathematical models even turned out to be incorrect, despite giving answers to what was ‘known’ at the time. The best example being Ptolemy’s Earth-centric model of the solar system. Whether string theory falls into the same category, only future historians will know.
 
In addition, different models work at different scales. As someone pointed out (Mile Gu at the University of Queensland), mathematical models of phenomena at one scale are different to mathematical models at an underlying scale. He gave the example of magnetism, demonstrating that mathematical modelling of the magnetic forces in iron could not predict the pattern of atoms in a 3D lattice as one might expect. In other words, there should be a causal link between individual atoms and the overall effect, but it could not be determined mathematically. To quote Gu: “We were able to find a number of properties that were simply decoupled from the fundamental interactions.” Furthermore, “This result shows that some of the models scientists use to simulate physical systems have properties that cannot be linked to the behaviour of their parts.”
 
This makes me sceptical that we will find an overriding mathematical model that will entail the Universe at all scales, which is what theories of quantum gravity attempt to do. One of the issues that some people raise is that a feature of QM is superposition, and the superposition of a gravitational field seems inherently problematic.
 
Personally, I think superposition only makes sense if it’s describing something that is yet to happen, which is why I agree with Freeman Dyson that QM can only describe the future, which is why it only gives us probabilities.
 
Also, in quantum cosmology, time disappears (according to Paul Davies, among others) and this makes sense (to me), if it’s attempting to describe the entire universe into the future. John Barrow once made a similar point, albeit more eruditely.
 
Getting off track, but one of the points that Oriti makes is whether the laws and the mathematics that describe them are epistemic or ontic. In other words, are they reality, or just descriptions of reality? I think it gets blurred, because while they are epistemic by design, there is still an ontology that exists without them, whereas Oriti calls that ‘naïve realism’. He contends that reality doesn’t exist independently of us. This is where I always cite Kant: that we may never know the ‘thing-in-itself’, but only our perception of it. Where I diverge from Kant is that the mathematical models are part of our perception. Where I depart from Oriti is that I argue there is a reality independent of us.
 
Both QM and relativity theory are observer-dependent, which means they could both be describing an underlying reality that continually eludes us. Oriti, on the other hand, argues that ‘reality is made by our models, not just described by them’, which would make it subjective.
 
As I pointed out in my last post, there is an epistemological loop, whereby the Universe created the means to understand itself, through us. Whether there is also an ontological loop, as both Davies and Oriti imply, is another matter: do we determine reality through our quantum mechanical observations? I will park that while I elaborate on the epistemic loop.
 
And this finally brings me to the article in Philosophy Now by James Miles titled, We’re as Smart as the Universe gets. He argues that, from an evolutionary perspective, there is a one-in-one-billion possibility that a species with our cognitive abilities could arise by natural selection, and no logical reason why we would evolve further. I have touched on this before, where I pointed out that our cultural evolution has overtaken our biological evolution, and that would also happen to any other potential species in the Universe who developed cognitive abilities to the same level. Dawkins coined the term ‘meme’ to describe cultural traits that have ‘survived’, which now, of course, has currency on social media way beyond its original intention. Basically, Dawkins saw memes as analogous to genes, which get selected; not by a natural process but by a cultural process.
 
I’ve argued elsewhere that mathematical theorems and scientific theories are not inherently memetic. This is because they are chosen because they are successful, whereas memes are successful because they are chosen. Nevertheless, such theorems and theories only exist because a culture has developed over millennia which explores them and builds on them.
 
Miles talks about ‘the high intelligence paradox’, which he associates with Darwin’s ‘highest and most interesting problem’. He then discusses the inherent selection advantage of co-operation, not to mention specialisation. He talks about the role that language has played, which is arguably what really separates us from other species. I’ve argued that it’s our inherent ability to nest concepts within concepts ad infinitum (most obvious in our facility for language, like I’m doing now) that allows us not only to tell stories, compose symphonies and explore an abstract mathematical landscape, but also to build motor cars and aeroplanes, and fly men to the Moon. Are we the only species in the Universe with this super-power? I don’t know, but it’s possible.
 
There are 2 quotes I keep returning to:
 
The most incomprehensible thing about the Universe is that it’s comprehensible. (Einstein)
 
The Universe gave rise to consciousness and consciousness gives meaning to the Universe.
(Wheeler)
 
I haven’t elaborated, but Miles makes the point, while referencing historical antecedents, that there appears to be no evolutionary ‘reason’ that a species should make this ‘one-in-one-billion transition’ (his nomenclature). Yet, without this transition, the Universe would have no meaning that could be comprehended. As I say, that’s the epistemic loop.
 
As for an ontic loop, that is harder to argue. Photons exist in zero time, which is why I contend they are always in the future of whatever they interact with, even if they were generated in the CMBR some 13.8 billion years ago. So how do we resolve that paradox? I don’t know, but maybe that’s the link that Davies and Oriti are talking about, though neither of them mentions it. But here’s the thing: when you do detect such a photon (for which time is zero), you instantaneously ‘see’ back to 380,000 years after the Universe’s birth.
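For what it’s worth, the ‘zero time’ claim is just the standard special-relativistic formula for proper time (nothing specific to Davies or Oriti): Δτ = Δt·√(1 − v²/c²), which goes to zero as v approaches c. So a photon accumulates no proper time between emission and absorption, however far apart those two events are in our coordinates.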





Saturday, 12 October 2024

Freedom of the will is requisite for all other freedoms

 I’ve recently read 2 really good books on consciousness and the mind, as well as watching countless YouTube videos on the topic, but the title of this post reflects the endpoint for me. Consciousness has evolved, so for most of the Universe’s history it didn’t exist, yet without it, the Universe has no meaning and no purpose. Even using the word ‘purpose’ in this context is anathema to many scientists and philosophers, because it hints at teleology. In fact, Paul Davies raises that very point in one of the many video conversations he has with Robert Lawrence Kuhn in the excellent series, Closer to Truth.
 
Davies is an advocate of a cosmic-scale ‘loop’, whereby QM provides a backwards-in-time connection which can only be determined by a conscious ‘observer’. This is contentious, of course, though not his original idea – it came from John Wheeler. As Davies points out, Stephen Hawking was also an advocate, premised on the idea that there are a number of alternative histories, as per Feynman’s ‘sum-over-histories’ methodology, but only one becomes reality when an ‘observation’ is made. I won’t elaborate, as I’ve discussed it elsewhere, when I reviewed Hawking’s book, The Grand Design.
 
In the same conversation with Kuhn, Davies emphasises the fact that the Universe created the means to understand itself, through us, and quotes Einstein: The most incomprehensible thing about the Universe is that it’s comprehensible. Of course, I’ve made the exact same point many times, and like myself, Davies makes the point that this is only possible because of the medium of mathematics.
 
Now, I know I appear to have gone down a rabbit hole, but it’s all relevant to my viewpoint. Consciousness appears to have a role, arguably a necessary one, in the self-realisation of the Universe – without it, the Universe may as well not exist. To quote Wheeler: The universe gave rise to consciousness and consciousness gives meaning to the Universe.
 
Scientists of all stripes appear to avoid any metaphysical aspect of consciousness, but I think it’s unavoidable. One of the books I cite in my introduction is Philip Ball’s The Book of Minds; How to Understand Ourselves and Other Beings; from Animals to Aliens. It’s as ambitious as the title suggests, and at 450 pages, it’s quite a read. I’ve read and reviewed a previous book by Ball, Beyond Weird (about quantum mechanics), which is equally erudite and thought-provoking. Ball is a ‘physicalist’, as virtually all scientists are (though he’s more open-minded than most), but I tend to agree with Raymond Tallis that, despite what people claim, consciousness is still ‘unexplained’ and might remain so for some time, if not forever.
 
I like an idea that I first encountered in Douglas Hofstadter’s seminal tome, Gödel, Escher, Bach; an Eternal Golden Braid, that consciousness is effectively a loop, at what one might call the local level. By which I mean it’s confined to a particular body. It’s created within that body, but then it has a causal agency all of its own. Not everyone agrees with that. Many argue that consciousness cannot of itself ‘cause’ anything, but Ball is one of those who begs to differ, and so do I. It’s what free will is all about, which finally gets us back to the subject of this post.
 
Like me, Ball prefers to use the word ‘agency’ over free will. But he introduces the term, ‘volitional decision-making’ and gives it the following context:

I believe that the only meaningful notion of free will – and it is one that seems to me to satisfy all reasonable demands traditionally made of it – is one in which volitional decision-making can be shown to happen according to the definition I give above: in short, that the mind operates as an autonomous source of behaviour and control. It is this, I suspect, that most people have vaguely in mind when speaking of free will: the sense that we are the authors of our actions and that we have some say in what happens to us. (My emphasis)

And, in a roundabout way, this brings me to the point alluded to in the title of this post: our freedoms are constrained by our environment and our circumstances. We all wish to be ‘authors of our actions’ and ‘have some say in what happens to us’, but that varies from person to person, dependent on ‘external’ factors.

Writing stories, believe it or not, had a profound influence on how I perceive free will, because a story, by design, is an interaction between character and plot. In fact, I claim they are 2 sides of the same coin – each character has their own subplot, and as they interact, their storylines intertwine. This describes my approach to writing fiction in a nutshell. The character and plot represent, respectively, the internal and external journey of the story. The journey metaphor is apt, because a story always has the dimension of time, which is visceral, and is one of the essential elements that separates fiction from non-fiction. To stretch the analogy, character represents free will and plot represents fate. Therefore, I tell aspiring writers about the importance of giving their characters free will.

A detour, but not irrelevant. I read an article in Philosophy Now some time back about people who can escape their circumstances, which is the subject of a lot of biographies as well as fiction. We in the West live in a very privileged time, whereby many of us can aspire to, and attain, the life that we dream about. I remember, at the time I left school, following a less than ideal childhood, feeling I had little control over my life. I was a fatalist, in that I thought that whatever happened was dependent on fate and not on my actions (I literally used to attribute everything to fate). I later realised that this is a state of mind common to people who are not happy with their circumstances and feel impotent to change them.

The thing is that it takes a fundamental belief in free will to rise above that and take advantage of what comes your way. No one who has made that journey will accept the self-denial that free will is an illusion and therefore they have no control over their destiny.

I will provide another quote from Ball that is more in line with my own thinking:

…minds are an autonomous part of what causes the future to unfold. This is different to the common view of free will in which the world somehow offers alternative outcomes and the wilful mind selects between them. Alternative outcomes – different, counterfactual realities – are not real, but metaphysical: they can never be observed. When we make a choice, we aren’t selecting between various possible futures, but between various imagined futures, as represented in the mind’s internal model of the world…
(emphasis in the original)

And this highlights a point I’ve made before: that it’s the imagination which plays the key role in free will. I’ve argued that imagination is one of the faculties of a conscious mind that separates us (and other creatures) from AI. Now, AI can also demonstrate agency, and in a game of chess, for example, it will ‘select’ from a number of possible ‘moves’ based on certain criteria. But there are fundamental differences. For a start, the AI doesn’t visualise what it’s doing; it’s following a set of highly constrained rules, within which it can select from a number of options, one of which will be the optimal solution. Its inherent advantage over a human player isn’t just its speed but its ability to compare a number of possibilities that are impossible for the human mind to contemplate simultaneously.
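To make the contrast concrete, here’s a minimal sketch in Python (my illustration only – not how any particular engine actually works) of what ‘selecting from a number of possible moves based on certain criteria’ amounts to: mapping each legal option to a number and taking the maximum, with no visualisation involved.

```python
# Toy sketch of rule-constrained 'agency' (illustrative only; real chess
# engines add deep tree search and far richer evaluation functions).

def select_move(legal_moves, evaluate):
    # The 'decision' is just a comparison of numbers across the options.
    return max(legal_moves, key=evaluate)

# Hypothetical scores: say, material gained by each candidate move.
scores = {"Nxe5": 3.0, "Qh5": 0.5, "O-O": 0.2}
best = select_move(scores, evaluate=lambda move: scores[move])
print(best)  # -> Nxe5
```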

The other book I read was Being You; A New Science of Consciousness by Anil Seth. I came across Seth when I did an online course on consciousness through New Scientist during the COVID lockdowns. To be honest, his book didn’t tell me a lot that I didn’t already know. For example, the world we all see and think exists ‘out there’ is actually a model of reality created within our heads. He also emphasises how the brain is a ‘prediction-making’ organ rather than a purely receptive one. Seth mentions that it uses a Bayesian model (which I also knew about previously), whereby it updates its predictions based on new sensory data. Not surprisingly, Seth describes all this in far more detail and erudition than I can muster.
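As a rough illustration of the Bayesian idea (my toy numbers, not Seth’s actual model): the brain holds a prior belief, receives noisy sensory data, and revises the belief accordingly.

```python
# Minimal sketch of Bayesian belief updating (illustrative toy example).

def bayes_update(prior, likelihood):
    # posterior is proportional to prior x likelihood, normalised
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Hypothetical: is that pattern a face? The prior says probably not,
# but the sensory data favours 'face', so the belief shifts.
prior = {"face": 0.2, "not_face": 0.8}
likelihood = {"face": 0.9, "not_face": 0.3}  # P(data | hypothesis)
print(bayes_update(prior, likelihood))  # 'face' rises from 0.2 to ~0.43
```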

Ball, Seth and I all seem to agree that while AI will become better at mimicking the human mind, this doesn’t necessarily mean it will attain consciousness. Application software like ChatGPT, despite appearances, does not ‘think’ the way we do, and actually does not ‘understand’ what it’s talking or writing about. I’ve written on this before, so I won’t elaborate.

Seth contends that the ‘mystery’ of consciousness will disappear in the same way that the ‘mystery of life’ has effectively become a non-issue. What he means is that we no longer believe there is some ‘élan vital’ or ‘life force’ that distinguishes living from non-living matter. And he’s right, in as much as the chemical origins of life are less mysterious than they once were, even though abiogenesis is still not fully understood.

By analogy, the concept of a soul has also lost a lot of its cogency following the scientific revolution. Seth seems to associate the soul with what he calls ‘spooky free will’ (without mentioning the word ‘soul’), but he’s obviously putting ‘spooky free will’ in the same category as ‘élan vital’, which makes his analogy and associated argument consistent. He then says:

Once spooky free will is out of the picture, it is easy to see that the debate over determinism doesn’t matter at all. There’s no longer any need to allow any non-deterministic elbow room for it to intervene. From the perspective of free will as a perceptual experience, there is simply no need for any disruption to the causal flow of physical events. (My emphasis)

Seth differs from Ball (and myself) in that he doesn’t seem to believe that something ‘immaterial’ like consciousness can affect the physical world. To quote:

But experiences of volition do not reveal the existence of an immaterial self with causal power over physical events.

Therefore, free will is purely a ‘perceptual experience’. There is a problem with this view that Ball himself raises. If free will is simply the mind observing effects it can’t cause, but with the illusion that it can, then its role is redundant to say the least. This is a view that Sabine Hossenfelder has also expressed: that we are merely an ‘observer’ of what we are thinking.

Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking.

Ball makes the point that we only have to look at all the material manifestations of human intellectual achievement that are evident everywhere we’ve been. And this brings me back to the loop concept I alluded to earlier. Consciousness not only creates a ‘local’ loop, whereby it has a causal effect on the body it inhabits, but also acts on the world external to that body. This is stating the obvious, except, as I’ve mentioned elsewhere, it’s possible that one could interact with the external world as an automaton, with no conscious awareness of it. The difference is the role of imagination, which I keep coming back to. All the material manifestations of our intellect are arguably a result of imagination.

One insight I gained from Ball, which goes slightly off-topic, is evidence that bees have an internal map of their environment, which is why the dance they perform on returning to the hive can be ‘understood’ by other bees. We’ve learned this by interfering in their behaviour. What I find interesting is that this may have been the original reason that consciousness evolved into the form that we experience it. In other words, we all create an internal world that reflects the external world so realistically, that we think it is the actual world. I believe that this also distinguishes us (and bees) from AI. An AI can use GPS to navigate its way through the physical world, as well as other so-called sensory data, from radar or infra-red sensors or whatever, but it doesn’t create an experience of that world inside itself.

The human mind seems to be able to access an abstract world, which we do when we read or watch a story, or even write one, as I have done. I can understand how Plato took this idea to its logical extreme: that there is an abstract world, of which the one we inhabit is but a facsimile (though he used different terminology). No one believes that today – except there is a remnant of Plato’s abstract world that persists, which is mathematics. Many mathematicians and physicists (though not all) treat mathematics as a never-ending landscape that humans have the unique capacity to explore and comprehend. This, of course, brings me back to Davies’ philosophical ruminations that I opened this discussion with. And as he, and others (like Einstein, Feynman, Wigner and Penrose, to name but a few) have pointed out: the Universe itself seems to follow specific laws that are intrinsically mathematical and which we are continually discovering.

And this closes another loop: that the Universe created the means to comprehend itself, using the medium of mathematics, without which it has no meaning. Of purpose, we can only conjecture.

Wednesday, 2 October 2024

Common sense; uncommonly agreed upon

 The latest New Scientist (28 Sep., 2024) had an article headlined Uncommon Sense, written by Emma Young (based in Sheffield, UK) which was primarily based on a study done by Duncan Watts and Mark Whiting at the University of Pennsylvania. I wasn’t surprised to learn that ‘common sense’ is very subjective, although she pointed out that most people think the opposite: that it’s objective. I’ve long believed that common sense is largely culturally determined, and in many cases, arises out of confirmation bias, which the article affirmed with references to the recent COVID pandemic and the polarised responses this produced; where one person’s common sense was another person’s anathema.
 
Common sense is something we mostly imbibe through social norms, though experience tends to play a role long term. Common sense is often demonstrated, though rarely expressed, as heuristics: people with expertise develop heuristics that others outside their field wouldn’t even know about. This is a point I’ve made before, without using the term common sense. In other words, common sense is contextual in a way that most of us don’t consider.
 
Anyone with an interest in modern physics (like myself) knows that our common sense views on time and space don’t apply in the face of Einstein’s relativity theory. In fact, it’s one of the reasons that people struggle with it (including me). Quantum mechanics, with phenomena like superposition, entanglement and Heisenberg’s Uncertainty Principle, also plays havoc with our ‘common sense’ view of the world. But this is perfectly logical when one considers that we never encounter these ‘effects’ in our everyday existence, so they can be largely, if not completely, ignored. The fact that the GPS on your phone requires relativistic corrections, and that every device you use (including said phone) is dependent on QM dynamics, doesn’t change this virtually universal viewpoint.
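For anyone curious about the size of those GPS corrections, here’s a back-of-envelope calculation (approximate constants and first-order formulas, so treat the figures as indicative only):

```python
# Rough estimate of relativistic clock drift for a GPS satellite.

GM = 3.986e14       # Earth's gravitational parameter (m^3/s^2)
c = 2.998e8         # speed of light (m/s)
R_EARTH = 6.371e6   # Earth's radius (m)
R_SAT = 2.657e7     # approximate GPS orbital radius (m)
DAY = 86400         # seconds per day

# General relativity: weaker gravity in orbit makes the clock run fast.
gr_gain = GM * (1 / R_EARTH - 1 / R_SAT) / c**2 * DAY

# Special relativity: orbital speed makes the clock run slow.
v = (GM / R_SAT) ** 0.5
sr_loss = -(v**2 / (2 * c**2)) * DAY

net = gr_gain + sr_loss
print(f"GR +{gr_gain*1e6:.1f} us/day, SR {sr_loss*1e6:.1f} us/day, "
      f"net +{net*1e6:.1f} us/day")
# Roughly +45.7 and -7.2, netting about +38 microseconds per day --
# enough, if uncorrected, to put positions out by kilometres daily.
```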
 
I’ve just finished reading an excellent, albeit lengthy, book by Philip Ball, titled ambitiously, if not pretentiously, The Book of Minds. I can honestly say it’s the best book I’ve read on the subject, but that’s a topic for a future post. The reason I raise it in this context is because, throughout, I kept using AI as a reference point for appreciating what makes minds unique. You see, AI comes closest to mimicking the human mind, yet it’s nowhere near it, though others may disagree. As I said, it’s a topic for another post.
 
I remember coming up with my own definition of common sense many years ago, when I saw it as something that evolves over time, based on experience. I would contend that our common sense view on a subject changes, whether through gaining expertise in a specific field (as I mentioned above) or just through our everyday encounters. A good example that most of us can identify with is driving a car. Notice how, over time, we develop skills and behaviours that help us to avoid accidents, some of which arose from accidents we’ve had.
 
And a long time ago, before I became a blogger, when I didn’t even consider myself a philosopher, it occurred to me that AI could also develop something akin to common sense by learning from its mistakes; self-driving cars being a case in point.
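A minimal sketch of what I mean (purely illustrative, not any real self-driving system): behaviours that lead to accidents get penalised, so the system’s ‘common sense’ accumulates with experience, much like the driving habits described above.

```python
# Toy sketch of 'common sense' accumulating from mistakes (illustrative).

from collections import defaultdict

penalty = defaultdict(float)  # learned cost of each behaviour

def choose(actions):
    # prefer the behaviour with the least accumulated penalty
    return min(actions, key=lambda a: penalty[a])

def record_accident(action):
    penalty[action] += 1.0  # experience reshapes future choices

actions = ["tailgate", "keep_distance"]
print(choose(actions))      # no experience yet, so arbitrary: "tailgate"
record_accident("tailgate")
print(choose(actions))      # after one mistake: "keep_distance"
```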
 
According to the New Scientist article, the researchers, Watts and Whiting, claim there is no correlation between so-called common sense and IQ. Instead, they contend there is a correlation between a ‘consensual common sense’ (my term) and ‘Reading the Mind in the Eyes’ (their terminology). In other words, the ability to ‘read’ emotions is a good indicator of the ability to determine what’s considered ‘common sense’ by the majority of a cultural group (if I understand them correctly). This implies that common sense is a consensual perception, based on cultural norms, which is what I’ve always believed. This might be a bit simplistic, and an example of confirmation bias (on my part), but I’d be surprised if common sense didn’t morph between cultures in the same way it becomes modified by expertise in a particular field. So the idea of a universal, objective common sense is as much a chimera as objective morality, which is also more dependent on social norms than most people acknowledge.
 
 
Footnote: it’s worth reading the article in New Scientist (if accessible), because it provides a different emphasis and a different perspective, even though it largely draws similar conclusions to myself.

Thursday, 19 September 2024

Prima Facie; the play

 I went and saw a film made of a live performance of this highly rated play, put on by the National Theatre at the Harold Pinter Theatre in London’s West End in 2022. It’s a one-hander, played by Jodie Comer, best known as the quirky assassin with a diabolical sense of humour in the black comedy hit, Killing Eve. I also saw her in Ridley Scott’s riveting and realistically rendered film, The Last Duel, set in mediaeval France, where she played alongside Matt Damon, Adam Driver and an unrecognisable Ben Affleck. The roles that Comer played in those 2 screen mediums couldn’t be more different.
 
Theatre is more unforgiving than cinema, because there are no multiple takes, or even a break once the curtain’s raised; a one-hander, even more so. In the case of Prima Facie, Comer is on the stage a full 90 minutes, and even does her own costume changes and pushes her own scenery around unaided, without breaking stride. It’s such a tour de force performance, as the Financial Times put it, that I’d go so far as to say it’s the best acting performance I’ve ever witnessed by anyone. It’s such an emotionally draining role, where she cries and even breaks into a sweat in one scene, that I marvel she could do it night after night, as I assume she did.
 
And I’ve yet to broach the subject matter, which is very apt given the #MeToo climate, but philosophically it goes deeper than that. The premise for the entire play, which is even spelt out early on in case you’re not paying attention, is the difference between truth and justice, and whether it matters. Comer’s character, Tessa, happens to experience it from both sides, which is what makes this so powerful.
 
She’s a defence barrister who specialises in sexual-assault cases, where, as she explains very early on (effectively telling us the rules of the game), no one wins or loses; you either come first or second. In other words, the barristers, and those involved in the legal profession, don’t see the process the same way that you and I do, and I can understand that – to get emotionally involved makes it very stressful.

In fact, I have played a small role in this process in a professional capacity, so I’ve seen this firsthand. But I wasn’t dealing with rape cases or anything involving violence, just contractual disputes where millions of dollars could be at stake. My specific role was to ‘prepare evidence’ for lawyers for either a claim or the defence of a claim or possibly a counter-claim, and I quickly realised the more dispassionate one is, the more successful one is likely to be. I also realised that the lawyers I was supporting in one case could be on the opposing side in the next one, so you don’t get personal.
 
So, I have a small insight into this world, and can appreciate why they see it as a game, where you ‘win or come second’. But in Prima Facie, Tessa goes through a very visceral and emotionally scarifying transformation, where she finds herself on the receiving end, and it’s suddenly very personal indeed.
 
Back in 2015, I wrote a mini 400-word essay in answer to one of those Question of the Month topics that Philosophy Now like to throw open to amateur wannabe philosophers like myself. And in this case, it was one of those selected for publication (among 12 others) from all around the Western globe. I bring this up because I made the assertion that ‘justice without truth is injustice’, and I feel that this is really what Prima Facie is all about. At the end of the play, with Tessa now having the perspective of the victim (there is no other word), it does become a matter of winning or losing, because not only her career and future livelihood, but her very dignity, is now up for sacrifice.
 
I watched a Q&A programme on Australia’s ABC some years ago, where this issue was discussed. Every woman on the panel, including one from the righteous right (my coinage), had a tale to tell about discrimination or harassment in a workplace situation. But the most damning testimony came from a man who specialised in representing women in sexual assault cases. He said that in every case their doctors tell them not to proceed because it will destroy their health; and he added: they’re right. I was reminded of this when I watched this play.
 
One needs to give special mention to the writer, Suzie Miller, who is an Aussie as it turns out, and as far as 6 degrees of separation go, I happen to know someone who knows her father. Over 5 decades I’ve seen some very good theatre, some of it very innovative and original. In fact, I think the best theatre I’ve seen has invariably been something completely different, unexpected and, dare I say it, special. I had a small involvement in theatre when I was still very young, and learned that I couldn’t act to save myself. Nevertheless, my very first foray into writing was an attempt to write a play. Now, I’d say it’s the hardest and most unforgiving medium of storytelling to write for. I had a friend who was involved in theatre for some decades and even won awards. She passed away a couple of years ago and I miss her very much. At her funeral, she was given a standing ovation when her coffin was taken out; it was very moving. I can’t go to a play now without thinking about her and wishing I could discuss it with her.

Saturday, 7 September 2024

Science and religion meet at the boundary of humanity’s ignorance

 I watched a YouTube debate (90 mins) between Sir Roger Penrose and William Lane Craig, and, if I’m honest, I found it a bit frustrating, because I kept wishing it was me debating Craig instead of Penrose. I also think it would have been more interesting if Craig had debated someone like Paul Davies, who is more philosophically inclined than Penrose, even though Penrose is more successful as a scientist, and as a physicist in particular.
 
But it was set up as an atheist versus theist debate between 2 well known personalities, who were mutually respectful and where there was no animosity evident at all. I confess to having my own biases, which would be obvious to any regular reader of this blog. I admit to finding Craig arrogant and a bit smug in his demeanour, but to be fair, he was on his best behaviour, and perhaps he’s matured (or perhaps I have) or perhaps he adapts to whoever he’s facing. When I call it a debate, it wasn’t very formal and there wasn’t even a nominated topic. I felt the facilitator or mediator had his own biases, but I admit it would be hard to find someone who didn’t.
 
Penrose started with his 3 worlds philosophy of the physical, the mental and the abstract, which has long appealed to me, though most scientists and many philosophers would contend that the categorisation is unnecessary, and that everything is physical at base. Penrose proposed that they present 3 mysteries, though the mysteries are inherent in the connections between them rather than the categories themselves. This became the starting point of the discussion.
 
Craig argued that the overriding component must surely be ‘mind’, whereas Penrose argued that it should be the abstract world, specifically mathematics, which is the position of mathematical Platonists (including myself). Craig pointed out that mathematics can’t ‘create’ the physical (which is true), but a mind could. As the mediator pointed out (as if it wasn’t obvious), said mind could be God. And this more or less set the course for the remainder of the discussion, with a detour to Penrose’s CCC theory (Conformal Cyclic Cosmology).
 
I actually thought that this was Craig’s best argument, and I’ve written about it myself, in answer to a question on Quora: Did math create the Universe? The answer is no; nevertheless, I contend that mathematics is a prerequisite for the Universe to exist, as the laws that allowed the Universe to evolve, in all its facets, are mathematical in nature. Note that this doesn’t rule out a God.
 
Where I would challenge Craig, and where I’d deviate from Penrose, is that we have no cognisance of who this God is, or even what ‘It’ could be. Could not this God be the laws of the Universe themselves? Penrose struggled with this aspect of the argument, because, from a scientific perspective, it doesn’t tell us anything that we can either confirm or falsify. I know from previous debates Craig has had that he would see this as a win. A scientist can’t refute his God’s existence, nor propose an alternative; therefore it’s his point by default.
 
This eventually led to a discussion on the ‘fine-tuning’ of the Universe, which, in the case of entropy, is what led Penrose to formulate his CCC model of the Universe. Of course, the standard alternative is the multiverse and the anthropic principle, which, as Penrose points out, is also applicable to his CCC model, where you have an infinite sequence of universes as opposed to an infinity of simultaneous ones, the latter being the orthodox response among cosmologists.
 
This is where I would have liked to have seen Paul Davies respond, because he’s an advocate of John Wheeler’s so-called ‘participatory Universe’, which is effectively the ‘strong anthropic principle’ as opposed to the ‘weak anthropic principle’. The weak anthropic principle basically says that ‘observers’ (meaning us) can only exist in a universe that allows observers to exist – a tautology. Whereas the strong anthropic principle effectively contends that the emergence of observers is a necessary condition for the Universe to exist (the observers don’t have to be human). Basically, Wheeler was an advocate of a cosmic, acausal (backward-in-time) link from conscious observers to the birth of the Universe. I admit this appeals to me, but as Craig would expound, it’s a purely metaphysical argument, and so is the argument for God.
 
The other possibility, that is very rarely expressed, is that God is the end result of the Universe rather than its progenitor. In other words, the ‘mind’ that Craig expounded upon is a consequence of all of us. This aligns more closely with the Hindu concept of Atman, or the Buddhist concept of collective karma – we get the God we deserve. Erwin Schrödinger, who studied the Upanishads, discusses Atman as a pluralistic ‘mind’ (in What is Life?). My point would be that the Judeo-Christian-Islamic God does not have a monopoly on Craig’s overriding ‘mind’ concept.
 
A recurring theme on this blog is that there will always be mysteries – we can never know everything – and it’s an unspoken certitude that there will forever be knowledge beyond our cognition. The problem that scientists sometimes have, but are reluctant to admit, is that we can’t explain everything, even though we keep explaining more with each generation. And the problem that theologians sometimes have is that our inherent ignorance is neither ‘proof’ nor ‘evidence’ that there is a ‘creator’ God.
 
I’ve argued elsewhere that a belief in God is purely a subjective and emotional concept, which one then rationalises with either cultural references or as an ultimate explanation for our existence.


Addendum: I like this quote, albeit out of context, from Spinoza: "The sum of the natural and physical laws of the universe and certainly not an individual entity or creator".