Paul P. Mealing


Saturday, 22 March 2025

Truth, trust and lies; can we tell the difference?

I’ve written on this topic before, more than once, but one could write a book on it, and Yuval Noah Harari has come very close with his latest tome, Nexus: A Brief History of Information Networks from the Stone Age to AI. As the subtitle suggests, it’s ostensibly about the role of AI, both currently and in the foreseeable future, but he provides an historical context, which is also alluded to in the subtitle. Like a lot of book titles, the subtitle tells us more than the title, which, while being succinct and punchy, is also nebulous and vague, possibly deliberately. AI is almost a separate topic, but I find it interesting that it has become its own philosophical category (even on this blog) when it was not even a concept a century ago. I might return to this point later.
 
The other trigger was an essay in Philosophy Now (Issue 166, Feb/Mar 2025) with the theme articulated on the cover: Political Philosophy for our time (they always have a theme). This issue also published my letter on Plato’s cave and social media, which is not irrelevant. In particular, there was an essay containing two of the key words in my own title: Trust, Truth & Political Conversations, by Adrian Brockless, who was Head of Philosophy at Sutton Grammar School and has taught at a number of universities and schools: Heythrop College, London; the University of Hertfordshire; Roedean School; and Glyn School. He now teaches philosophy online at adrianbrockless.com. I attempted to contact him via his website, but he hasn’t responded.
 
Where to start? Brockless starts with ‘the relationship between trust and truth’, which seems appropriate, because there is a direct relationship and it helps to explain why there is such a wide dispersion, even polarisation, within the media, political apparatuses and the general public. Your version of the truth is heavily dependent on where you source it, and where you source it depends on whom you trust. And whom you trust depends on whether their political and ideological views align with yours or not. Confirmation bias has never been stronger or more salient to how we perceive the world and make decisions about its future.
 
And yes, I’m as guilty as the next person, but history can teach us lessons, which is a theme running throughout Harari’s book – not surprising, given that’s his particular field or discipline. All of Harari’s books (that I’ve read) are an attempt to project history into the future, partially based on what we know about the past. What comes across, in both Harari’s book and Brockless’s essay, is that truth is subjective and so is history to a large extent.
 
Possibly the most important lessons can be learned from examining authoritarian regimes. All politicians, irrespective of their persuasion or nationality, know the importance of ‘controlling the narrative’, as we like to say in the West, but authoritarian dictatorships take this to the extreme. Russia, for example, assassinates journalists, because Putin knows that the pen is mightier than the sword, but only if the sword is sheathed. Both Brockless and Harari give examples of history being revised or even eliminated, because we all know how certain figures have maintained an almost deity-like persistence in the collective psyche. In some cases, like Jesus, Buddha, Confucius and Mohammed, it’s overt and has been maintained and exported into other cultures, so they have become global. In all cases, they had political origins, where they were iconoclasts. I’m not sure that any of them would have expected to be well known some 2 millennia later, when worldwide communication would become a reality. I tend to think there is a strong element of chance involved rather than divinely interceded destiny, as many believe and wish to believe. In fact, what we want to believe determines, to a much greater extent than we care to admit, what we perceive as truth.
 
Both authors make references to Trump, which is unavoidable, given the subject matter, because he’s almost a unique phenomenon and arguably one who could only arise in today’s so-called ‘post-truth’ world. It’s quite astute of Trump to call his own social media platform, Truth Social, because he actively promotes his own version of the truth in the belief that it can replace all other versions, and he’s so successful that his opponents struggle to keep up.
 
All politicians know the value (I wouldn’t use the word, virtue) of telling the public the lies they want to hear. Brockless gives the example that ‘on July 17, 1900, both The Times and The Daily Mail published a false story about the slaughter of Europeans in the British Embassy in Peking (the incident never happened)’. His point is that ‘fake news’ is a new term but an old concept. In Australia, we had the notorious ‘children thrown overboard affair’ in 2001, regarding the behaviour of asylum seekers intercepted at sea, which helped the then Howard government win an election but was later revealed to be completely false.
 
However, I think Trump provides the best demonstration of the ability to create a version of truth that many people would prefer to believe, and even maintain it over a period of years so that it grows stronger, not weaker, with time; to the point that it becomes the dominant version in some media, be it online or mainstream. The fact that FOX News was forced to go to court and pay out to a company it had libelled during the 2020 election, as a direct consequence of its unfaltering loyalty to Trump, did nothing to stem the lie that Biden stole the election from Trump. Murdoch even sacked the head of FOX’s own election-reporting team for correctly calling the election result; such was his dedication to Trump’s version of the truth.
 
And the reason I can call that particular instance a lie, as opposed to the truth that many people maintain it to be, is that it was tested in court. I’ve had some experience with testing different versions of truth in courts and mediation: specifically, contractual disputes, whereby I did analyses of historical data and prepared evidence in the form of written reports for lawyers to argue in court or at hearings. This is not to say that the person who wins is necessarily right, but there is a limitation on what can be called truth, which is the evidence that is presented. And, in those cases, the evidence is always in the form of documents: plans, minutes of meetings, date-stamped photos, site diaries, schedules (both projected and actual). I learned not to get emotional, which was relatively easy given I never had a personal stake in it; meaning it wasn’t going to cost me financially or reputationally. I also took the approach that I would get the same result no matter which side I was on. In other words, I tried to be as objective as possible. I found this had the advantage of giving me credibility and being believed. But it was also done in the belief that trying to support a lie invariably did you more harm than good, and I sometimes had to argue that case against my own client; I wouldn’t want to be a lawyer for Trump.
 
And of course, all this ties to trust. My client knew they could trust my judgement – if I wasn’t going to lie for them, I wasn’t going to lie to them. I make myself sound very important, but in reality, I was just a small cog in a much larger machine. I was a specialist who did analysis and provided evidence, which was sometimes pertinent to arguments. As part of this role, I oftentimes had to provide counter-arguments to other plaintiffs’ claims – I’ve worked on both sides.
 
Anyway, I think it gives me an insight into truth that most people, including philosophers, don’t experience. Like most of my posts, I’ve gone off on a tangent, yet it’s relevant.
 
Brockless brings another dimension into the discussion, when he says:
 
Having an inbuilt desire to know and tell the truth matters because this attitude underpins genuine love, grief and other human experiences: authentic love and grief etc cannot be separated from truthfulness.
 
I’ve made the point before that trust underpins so many of our relationships, both professional and social, without which we can’t function, either as individuals or as a society.
 
Brockless makes a similar point when he says: Truthfulness is tied to how we view others as moral beings.
 
He then goes on to distinguish this from our love for animals and pets: Moral descriptions apply fully to human beings, not to inanimate objects, or even to animals… If we fail to see the difference between love for a pet and love for a person, then our concept of humanity has been corrupted by sentimentality.
 
I’m not sure I fully agree with him on this. Even before I read this passage, I was thinking of how the love and trust that some animals show to us is uncorrupted and close to unconditional. Animals can get attached to us in a way that we tend NOT to see as abnormal, even though an objective analysis might tell us it’s ‘unnatural’. I’ve had a lot of relationships with animals over many years, and I know that they become completely dependent on us; not just for material needs, but for emotional needs, and they try to give it back. The thing is that they do this despite an inability to directly communicate with us except through emotions. I can’t help but think that this is a form of honesty that many, if not most of us, have experienced, yet we rarely give it a second thought.
 
A recurring theme on this blog is existentialism and living authentically, which is tied to a requisite for self-honesty, and as bizarre as it may sound, I think we can learn from the animals in our lives, because they can’t lie at an emotional level. They have the advantage that they don’t intellectualise what they feel – they simply act accordingly.
 
Not so much a recurring theme, as a persistent one, in Harari’s book, is that more knowledge doesn’t equate to more truth. Nowhere is this more relevant than in the modern world of social media. Harari argues that this mismatch could increase with AI, because of how it’s ‘trained’ and he may have a point. We are already finding ‘biases’, and people within the tech industry have already tried to warn those of us outside the industry.
 
In another post, I referenced an article in New Scientist (23 July 2022) by Annalee Newitz, who reported on a Google employee, Timnit Gebru, who, as ‘co-lead of Google’s ethical AI team’, expressed concerns that LLM (Large Language Model) algorithms pick up racial and other social biases, because they’re trained on the internet. She wrote a paper about the implications for AI applications using internet-trained LLMs in areas like policing, health care and bank lending. She was subsequently fired by Google, though one doesn’t know how much the ‘paper’ played a role in that decision (quoting directly from my post).
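To make the mechanism concrete, here is a minimal sketch in Python (my own toy illustration, not Gebru’s work, and nothing like a real LLM) showing how a purely statistical learner reproduces whatever skew exists in its training text; the five-sentence corpus and word lists are invented for the example.

```python
# Toy illustration: a 'model' that learns only word co-occurrence statistics
# will inherit whatever skew exists in its training data. Corpus is invented.
from collections import Counter
from itertools import product

corpus = [
    "the doctor said he would review the results",
    "the nurse said she would check the chart",
    "the engineer explained his design",
    "the nurse said she was tired",
    "the doctor said he was busy",
]

occupations = ["doctor", "nurse", "engineer"]
pronouns = ["he", "his", "she", "her"]

# Count how often each occupation co-occurs with each pronoun in a sentence.
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for occ, pro in product(occupations, pronouns):
        if occ in words and pro in words:
            counts[(occ, pro)] += 1

# The 'learned association' is just a normalised co-occurrence frequency,
# so any bias in the corpus becomes a bias in the model's predictions.
for occ in occupations:
    total = sum(counts[(occ, p)] for p in pronouns) or 1
    probs = {p: counts[(occ, p)] / total for p in pronouns if counts[(occ, p)]}
    print(occ, probs)
```

Real LLMs are vastly more complicated, but the principle Gebru warned about is the same: the statistics of the training text become the statistics of the output.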
 
Of course, I’ve explored the role of AI in science fiction, which borders on fantasy, but basically, I see a future where humans will have a symbiotic relationship with AI far beyond what we have today. I can see AI agents that become ‘attached’ to us in a way that animals do, not dissimilar to what I described above, but not the same either, as I don’t expect them to be sentient. But, even without sentience, they could pick up our biases and prejudices and amplify them, which some might argue (like Harari) is already happening.
 
As you can see, after close to 2,000 words, I haven’t really addressed the question in the tail of my title. I recently had a discussion with someone on Quora about Trump – someone who, I argued, lived in the alternative universe that Trump has created. It turned out he has family, including grandchildren, living in Australia, because one of their parents is on a 2-year assignment (details unknown and not relevant). According to him, they hate it here, and I responded that if they lived in Trumpworld that was perfectly understandable, because they would be in a distinct minority. Believe it or not, the discussion ended amicably enough, and I wished both him and his family well. What I noticed was that his rhetoric was much more emotional – one might even say, irrational – than mine. Getting back to the contractual disputes I mentioned earlier, I’ve often found that when you have an ingroup-outgroup dynamic – like politics or contractual matters – highly intelligent people can become very irrational. Everyone claims they go to the facts, but these days you can find your own ‘facts’ anywhere on the internet, which leads to echo-chambers.
 
People look for truth in different places. Some find it in the Bible or some other religious text. I look for it in mathematics, despite my limited knowledge in that area. But I take solace in the fact that mathematics is true, independent of culture or even the Universe. All other truths are contingent. I have an aversion to conspiracy theories, which usually require a level of evidence that most followers don’t pursue. And most of them can be dismissed when you realise how many people from all over the world would need to be involved just to keep them secret from the rest of us.
 
A good example is climate change, which, I’ve been told many times over, is a worldwide hoax maintained for no other purpose than to keep climatologists in their jobs. But here’s the thing: the one lesson I learned from over 4 decades working on engineering projects is that if there is a risk, and especially an unknown risk, the worst strategy is to ignore it and hope it’s not true.


Addendum: It would be remiss of me not to mention that there was a feature article in the Good Weekend magazine that came out the same day I wrote this: on the increasing role of chatbot avatars in virtual relationships, including relationships with erotic content. If you can access the article, you'll see that the 'conversations' using LLMs (large language models) are very realistic. I wrote about this phenomenon in another post fairly recently (the end of last year), because it actually goes back to 1966 with Joseph Weizenbaum's ELIZA, which was a 'virtual therapist' that many people took seriously. So not really new, but now more ubiquitous and realistic.
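For anyone curious how little machinery ELIZA actually needed, here is a minimal sketch in Python: my own reconstruction of the general trick (keyword matching plus pronoun reflection), not Weizenbaum's original script, and the patterns are invented.

```python
# A minimal ELIZA-style responder: match a keyword pattern, reflect the
# user's pronouns, and slot the fragment into a canned question.
import re
import random

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the reply points back at the user.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, statement.lower().strip())
        if match:
            return random.choice(responses).format(*(reflect(g) for g in match.groups()))
    return "Go on."

print(respond("I need a holiday"))            # e.g. "Why do you need a holiday?"
print(respond("I am worried about my job"))   # e.g. "How long have you been worried about your job?"
```

That such a shallow device was taken seriously as a therapist in 1966 says something about us, not about the program, which is really the point of the addendum.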

Thursday, 6 March 2025

Have we forgotten what ‘mind’ means?

 There is an obvious rejoinder to this, which is, did we ever know what ‘mind’ means? Maybe that’s the real question I wanted to ask, but I think it’s better if it comes from you. The thing is that we have always thought that ‘mind’ means something, but now we are tending to think, because we have no idea where it comes from, that it has no meaning at all. In other words, if it can’t be explained by science, it has no meaning. And from that perspective, the question is perfectly valid.
 
I’ve been watching a number of videos hosted by Curt Jaimungal, who I assume has a physics background. For a start, he’s posted a number of video interviews with a ‘Harvard scientist’ on quantum mechanics, and he provided a link (to me) to an almost 2-hour video he did with Sabine Hossenfelder, and they talked like they were old friends. I found it very stimulating and I left a fairly long comment that probably no one will read.
 
Totally off-topic, but Sabine’s written a paper proposing a thought-experiment that would effectively test whether QM and GR (gravity) are compatible at higher energies. She calculated the energy range, and if there is no difference from the low-energy experiments already conducted, it effectively rules out a quantum field for gravity (assuming I understand her correctly). I expressed my enthusiasm for a real version to be carried out, and my personal, totally unfounded prediction that it would be negative (there would be no difference).
 
But there are 2 videos that are relevant to this topic and they both involve Stephen Wolfram (who invented Mathematica). I’ve referenced him in previous posts, but always second-hand, so it was good to hear him first-hand. In another video, also hosted by Jaimungal, Wolfram has an exchange with Donald Hoffman, whom I’ve been very critical of in the past, even saying that I found it hard to take him seriously. But to be fair, I need to acknowledge that he’s willing to put his ideas out there and have them challenged by people like Stephen Wolfram (and Anil Seth in another video), which is what philosophy is all about. And the truth is that all of these people know much more about their fields than me. I’ll get to the exchange with Hoffman later.
 
I have the impression from Gregory Chaitin, in particular, that Wolfram argues that the Universe is computable; a philosophical position I’ve argued against, mainly because of chaos theory. I’ve never known Wolfram to mention chaos theory, and he certainly doesn’t in the 2 videos I reference here, and I’ve watched them a few times.
 
Jaimungal introduces the first video (with Wolfram alone) by asking him about his ‘observer theory’ and ‘what if he’s right about the discreteness of space-time’ and ‘computation underlying the fundament?’ I think it’s this last point which goes to the heart of their discussion. Wolfram introduces a term called the Ruliad, which I had to look up. I came across 2 definitions, both of which seem relevant to the discussion.
 
A concept that describes all possible computations and rule-based systems, including our physical universe, mathematics, and everything we experience.
 
A meta-structural domain that encompasses every possible rule-based system, or computational eventuality, that can describe any universe or mathematical structure.

 
Wolfram confused me when he talked about ‘computational irreducibility’, which implies that there are some things that are not computable, to which I agree. But then later he seemed to argue that everything we can know is computable, and that things we don’t know now are only unknowable because we’re yet to find their computable foundation. He argues that there are ‘slices of reducible computability’ within the ‘computational irreducibility’, which is how we do mathematical physics.
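To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not Wolfram’s code). It contrasts an ‘irreducible’ rule, Rule 30, where the only general way to know row N is to run all N steps, with a ‘reducible’ one, Rule 254, where row N can be written down directly; the latter is one of those ‘slices’ where a shortcut formula exists.

```python
# Elementary cellular automata: Rule 30 has no known shortcut (you must run
# every step), whereas Rule 254 simply switches cells on outward, so its
# state at step N has a closed-form description.

def step(row: list[int], rule: int) -> list[int]:
    """Apply an elementary cellular-automaton rule to one row (edges fixed at 0)."""
    n = len(row)
    out = [0] * n
    for i in range(n):
        left = row[i - 1] if i else 0
        centre = row[i]
        right = row[i + 1] if i < n - 1 else 0
        out[i] = (rule >> (left * 4 + centre * 2 + right)) & 1
    return out

def run(rule: int, steps: int, width: int = 41) -> list[int]:
    row = [0] * width
    row[width // 2] = 1          # single black cell in the middle
    for _ in range(steps):
        row = step(row, rule)
    return row

N = 15
# Irreducible case: no known shortcut, so we simulate all N steps.
print("rule 30, row", N, ":", "".join(map(str, run(30, N))))

# Reducible case: the simulation agrees with a direct closed-form description,
# i.e. after N steps all cells within N of the centre are on.
width, centre = 41, 20
shortcut = [1 if abs(i - centre) <= N else 0 for i in range(width)]
print("rule 254 shortcut matches simulation:", run(254, N) == shortcut)
```

As I read Wolfram, mathematical physics lives in the second kind of case, while most of the Ruliad is the first.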
 
Towards the end of the video, he talks specifically about biology, saying, ‘there is no grand theory of biology’, like we attempt in physics. He has a point. I’ve long argued that natural selection is not the whole story, and there is a mystery inherent in DNA, in as much as it’s a code whose origin and evolution are still unknown. Paul Davies attempted to tackle this in his book, The Demon in the Machine, because it’s analogous to software code and it’s information-based. This means that it could, in principle, be mathematical, which means it could lead to a biological ‘theory of everything’, which I assume is what Wolfram is claiming is lacking.
 
However, I’m getting off-track again. At the start of the video, Wolfram specifically references the Copernican revolution, because it was not just a mathematical reformulation, but it changed our entire perspective of the Universe (we are not at the centre) without changing how we experience it (we are standing still, with the sky rotating around us). At the end of the day, we have mathematical models, and some are more accurate than others, and they all have limitations – there is no all-encompassing mathematical TOE (Theory of Everything). There is no Ruliad, as per the above definitions, and Wolfram acknowledges that while apparently arguing that everything is computable.
 
I find it necessary to bring Kant into this, and his concept of the ‘thing-in-itself’, which we may never know, but only have a perception of. My argument, which I’ve never seen anyone else employ, is that mathematics is one of our instruments of perception, just like our telescopes and particle accelerators and now, our gravitational wave detectors. Our mathematical models, be they GR (general relativity), QFT or String Theory, are perceptual and conceptual tools, whose veracity is ultimately determined by empirical evidence, which means they can only be applied to things that can be measured. And I think this leads to an unstated principle that if something can’t be measured it doesn’t exist. I would put ‘mind’ in that category.
 
And this allows me to segue into the second video, involving Donald Hoffman, because he seems to argue that mind is all there is, and that it has a mathematical foundation. He put forward his argument (which I wrote about recently) that, using Markovian matrices, he’s developed probabilities that apparently predict ‘qualia’, which some argue are the fundaments of consciousness. Wolfram, unlike the rest of us, actually knows what Hoffman is talking about, and immediately had a problem with the fact that his ‘mathematical model’ led to probabilities and not direct, concrete predictions. Wolfram seemed to argue that it breaks the predictive chain (my terminology), but I confess I struggled to follow his argument. I would have liked to ask: what happens with QM, which can only give us probabilities? In that case, the probabilities, generated by the Born Rule, are the only link between QM and classical physics – a point made by Mark John Fernee, among others.
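For reference, this is the textbook Born Rule (standard quantum mechanics, not something taken from either video): all the theory delivers for a measurement is a probability for each possible outcome.

```latex
% Standard Born Rule: for a state |psi> and an observable with eigenstates
% |a_i>, the theory predicts only the probability of each outcome a_i.
\[
  P(a_i) = \bigl|\langle a_i \mid \psi \rangle\bigr|^{2},
  \qquad \sum_i P(a_i) = 1 .
\]
```

So if probabilistic output disqualifies a model from being fundamental, QM itself would seem to be in trouble, which is the question I would have put to Wolfram.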
 
But going back to my argument invoking Kant, it’s a mathematical model and not necessarily the thing-in-itself. There is an irony here, because Kant argued that space and time are a priori in the mind, so a projection, which, as I understand it, lies at the centre of Hoffman’s entire thesis. Hoffman argues that ‘spacetime is doomed’, citing Nima Arkani-Hamed and his work on amplituhedrons, because (to quote Arkani-Hamed): This is a concrete example of a way in which the physics we normally associate with space-time and quantum mechanics arises from something more basic. In other words, Arkani-Hamed has found a mathematical substructure or foundation to spacetime itself, and Hoffman claims that he’s found a way to link that same mathematical substructure to consciousness, via Markovian matrices and his probabilities.
 
Hoffman analogises spacetime to wearing a VR headset and objects in spacetime to icons on a computer desktop, which seems to imply that the Universe is a simulation, though he’s never specifically argued that. I won’t reiterate my objections to Hoffman’s fundamental idealism philosophy, but if you have a mathematical model, however it’s formulated, its veracity can only be determined empirically, meaning we need to measure something. So, what is he going to measure? Is it qualia? Is it what people report they think?
 
No. According to Hoffman, they can do empirical tests on spacetime (so not consciousness per se) that will determine if his mathematical model of consciousness is correct, which seems a very roundabout way of doing things. From what I can gather, he’s using a mathematical model of consciousness that’s already been developed (independently) to underpin reality, and then testing it on reality, thereby implying that consciousness is an intermediate step between the mathematical model and the reality. His ambition is to demonstrate that there is a causal relationship between consciousness and reality, when most argue that it’s the other way around. I return to this point below, with Wolfram’s response.
 
Wolfram starts off in his interaction with Hoffman by defining the subjective experience of consciousness that Hoffman has mathematically modelled and asking, can he apply that to an LLM (like ChatGPT, though he doesn’t specify) and therefore show that an LLM must be conscious? Wolfram argues that such a demonstration would categorically determine the ‘success’ (his term) of Hoffman’s theory, and Hoffman agreed.
 
I won’t go into detail (watch the video) but Hoffman concludes, quite emphatically, that ‘It’s not logically possible to start with non-conscious entities and have conscious agents emerge’ (my emphasis, obviously). Wolfram immediately responded (very good-naturedly), ‘That’s not my intuition’. He then goes on to say how that’s a Leibnizian approach, which he rejected back in the 1980s. I gather that it was around that time that Wolfram adopted and solidified (for want of a better word) his philosophical position that everything is ultimately computable. So they both see mathematics as part of the ‘solution’, but in different ways and with different conclusions.
 
To return to the point I raised in my introduction, Wolfram starts off in the first video (without Hoffman) by saying that we have adopted a position that if something can’t be explained by science, then there is no other explanation – we axiomatically rule everything else out – and he seems to argue that this is a mistake. But then he adopts a position which is the exact opposite: that everything is “computational all the way down”, including concepts like free will. He argues: “If we can accept that everything is computational all the way down, we can stop searching for that.” And by ‘that’ he means all other explanations, like mysticism or QM or whatever.
 
My own position is that mathematics, consciousness and physical reality form a triumvirate similar to Roger Penrose’s view. There is an interconnection, but I’m unsure if there is a hierarchy. I’ve argued that mathematics can transcend the Universe, which is known as mathematical Platonism, a view held by many mathematicians and physicists, which I’ve written about before.
 
I’m not averse to the view that consciousness may also exist beyond the physical universe, but it’s not something that can be observed (by definition). So far, I’ve attempted to discuss ‘mind’ in a scientific context, referencing 2 scientists with different points of view, though they both emphasise the role of mathematics in positing their views.
 
Before science attempted to analyse and put mind into an ontological box, we knew it as a purely subjective experience. But we also knew that it exists in others and even other creatures. And it’s the last point that actually triggered me to write this post and not the ruminations of Wolfram and Hoffman. When I interact with another animal, I’m conscious that it has a mind, and I believe that’s what we’ve lost. If there is a collective consciousness arising from planet Earth, it’s not just humans. This is something that I’m acutely aware of, and it has even affected my fiction.
 
The thing about mind is that it stimulates empathy, and I think that’s the key to the long-term survival of, not just humanity, but the entire ecosystem we inhabit. Is there a mind beyond the Universe? We don’t know, but I would like to think there is. In another recent post, I alluded to the Hindu concept of Brahman, which appealed to Erwin Schrodinger. You’d be surprised how many famous physicists were attracted to the mystical. I can think of Pauli, Einstein, Bohr, Oppenheimer – they all thought outside the box, as we like to say.
 
Physicists have no problem mentally conceiving 6 or more dimensions in String Theory that are ‘curled up’ so minuscule that we can’t observe them. But there is also the possibility that there is a dimension beyond the universe that we can’t see. Anyone familiar with Flatland by Edwin Abbott (a story about social strata as much as dimensions) would know it expounds on our inherent inability to interact with higher dimensions. It’s occurred to me that consciousness may exist in another dimension, and we might ‘feel’ it occasionally when we interact with people who have died. I have experienced this, though it proves nothing. I’m a creative and a neurotic, so such testimony can be taken with a grain of salt.
 
I’ve gone completely off-track, but I think that both Wolfram and Hoffman may be missing the point, when, like many scientists, they are attempting to incorporate the subjective experience of mind into a scientific framework. Maybe it just doesn’t fit.