I’ve written on this topic before, more than once, but one could write a book on it, and Yuval Noah Harari has come very close with his latest tome, Nexus: A Brief History of Information Networks from the Stone Age to AI. As the subtitle suggests, it’s ostensibly about the role of AI, both currently and in the foreseeable future, but he provides an historical context, which is also alluded to in the subtitle. As with a lot of books, the subtitle tells us more than the title, which, while succinct and punchy, is also nebulous, possibly deliberately so. AI is almost a separate topic, but I find it interesting that it has become its own philosophical category (even on this blog) when it was not even a concept a century ago. I might return to this point later.
The other trigger was an essay in Philosophy Now (Issue 166, Feb/Mar 2025) with the theme articulated on the cover: Political Philosophy for our time (they always have a theme). This issue also published my letter on Plato’s cave and social media, which is not irrelevant. In particular, there was an essay containing the two key words in my own title: Trust, Truth & Political Conversations, by Adrian Brockless, who was Head of Philosophy at Sutton Grammar School and has taught at a number of universities and schools: Heythrop College, London; the University of Hertfordshire; Roedean School; and Glyn School. He now teaches philosophy online at adrianbrockless.com. I attempted to contact him via his website, but he hasn’t responded.
Where to start? Brockless starts with ‘the relationship between trust and truth’, which seems appropriate, because there is a direct relationship and it helps to explain why there is such a wide dispersion, even polarisation, within the media, political apparatuses and the general public. Your version of the truth is heavily dependent on where you source it, and where you source it depends on whom you trust. And whom you trust depends on whether their political and ideological views align with yours or not. Confirmation bias has never been stronger or more salient to how we perceive the world and make decisions about its future.
And yes, I’m as guilty as the next person, but history can teach us lessons, which is a theme running throughout Harari’s book – not surprising, given that’s his particular field or discipline. All of Harari’s books (that I’ve read) are an attempt to project history into the future, partially based on what we know about the past. What comes across, in both Harari’s book and Brockless’s essay, is that truth is subjective and so is history to a large extent.
Possibly the most important lessons can be learned from examining authoritarian regimes. All politicians, irrespective of their persuasion or nationality, know the importance of ‘controlling the narrative’, as we like to say in the West, but authoritarian dictatorships take this to the extreme. Russia, for example, assassinates journalists, because Putin knows that the pen is mightier than the sword, but only if the sword is sheathed. Both Brockless and Harari give examples of history being revised or even eliminated, because we all know how certain figures have maintained an almost deity-like persistence in the collective psyche. In some cases, like Jesus, Buddha, Confucius and Mohammed, it’s overt and has been maintained and exported into other cultures, so they have become global. In all cases, they had political origins as iconoclasts. I’m not sure that any of them would have expected to be well known some two millennia later, when worldwide communication would become a reality. I tend to think there is a strong element of chance involved rather than divinely ordained destiny, as many believe and wish to believe. In fact, what we want to believe determines, to a much greater extent than we care to admit, what we perceive as truth.
Both authors make references to Trump, which is unavoidable, given the subject matter, because he’s almost a unique phenomenon, and arguably one who could only arise in today’s so-called ‘post-truth’ world. It was quite astute of Trump to call his own social media platform Truth Social, because he actively promotes his own version of the truth in the belief that it can replace all other versions, and he’s so successful that his opponents struggle to keep up.
All politicians know the value (I wouldn’t use the word, virtue) of telling the public the lies they want to hear. Brockless gives the example that ‘on July 17, 1900, both The Times and The Daily Mail published a false story about the slaughter of Europeans in the British Embassy in Peking (the incident never happened)’. His point is that ‘fake news’ is a new term but an old concept. In Australia, we had the notorious ‘children overboard’ affair in 2001, regarding the behaviour of asylum seekers intercepted at sea, which helped the then Howard government to win an election, but was later revealed to be completely false.
However, I think Trump provides the best demonstration of the ability to create a version of truth that many people would prefer to believe, and even maintain it over a period of years so that it grows stronger, not weaker, with time; to the point that it becomes the dominant version in some media, be it online or mainstream. The fact that FOX News was taken to court and forced to pay out to a company it had defamed over the 2020 election, as a direct consequence of its unfaltering loyalty to Trump, did nothing to stem the lie that Biden stole the election from Trump. Murdoch even sacked the head of FOX’s own election-reporting team for correctly calling the election result; such was his dedication to Trump’s version of the truth.
And the reason I can call that particular instance a lie, as opposed to the truth, as many people maintain, is because it was tested in court. I’ve had some experience with testing different versions of truth in courts and mediation: specifically, contractual disputes, in which I analysed historical data and prepared evidence in the form of written reports for lawyers to argue in court or at hearings. This is not to say that the person who wins is necessarily right, but there is a limitation on what can be called truth, which is the evidence that is presented. And, in those cases, the evidence is always in the form of documents: plans, minutes of meetings, date-stamped photos, site diaries, schedules (both projected and actual). I learned not to get emotional, which was relatively easy given I never had a personal stake in it, meaning it wasn’t going to cost me financially or reputationally. I also took the approach that I would get the same result no matter which side I was on. In other words, I tried to be as objective as possible. I found this had the advantage of giving me credibility and being believed. But it was also done in the belief that trying to support a lie invariably does you more harm than good, and I sometimes had to argue that case against my own client; I wouldn’t want to be a lawyer for Trump.
And of course, all this ties back to trust. My client knew they could trust my judgement – if I wasn’t going to lie for them, I wasn’t going to lie to them. I make myself sound very important, but in reality, I was just a small cog in a much larger machine. I was a specialist who did analysis and provided evidence, which was sometimes pertinent to arguments. As part of this role, I often had to provide counter-arguments to the other party’s claims – I’ve worked on both sides.
Anyway, I think it gives me an insight into truth that most people, including philosophers, don’t experience. Like most of my posts, I’ve gone off on a tangent, yet it’s relevant.
Brockless brings another dimension into the discussion, when he says:
Having an inbuilt desire to know and tell the truth matters because this attitude underpins genuine love, grief and other human experiences: authentic love and grief etc cannot be separated from truthfulness.
I’ve made the point before that trust underpins so many of our relationships, both professional and social, without which we can’t function, either as individuals or as a society.
Brockless makes a similar point when he says: Truthfulness is tied to how we view others as moral beings.
He then goes on to distinguish this from our love for animals and pets: Moral descriptions apply fully to human beings, not to inanimate objects, or even to animals… If we fail to see the difference between love for a pet and love for a person, then our concept of humanity has been corrupted by sentimentality.
I’m not sure I fully agree with him on this. Even before I read this passage, I was thinking of how the love and trust that some animals show to us is uncorrupted and close to unconditional. Animals can get attached to us in a way that we tend NOT to see as abnormal, even though an objective analysis might tell us it’s ‘unnatural’. I’ve had a lot of relationships with animals over many years, and I know that they become completely dependent on us, not just for material needs but for emotional needs, and they try to give it back. The thing is that they do this despite an inability to communicate with us directly, except through emotions. I can’t help but think that this is a form of honesty that many, if not most of us, have experienced, yet we rarely give it a second thought.
A recurring theme on this blog is existentialism and living authentically, which requires self-honesty, and as bizarre as it may sound, I think we can learn from the animals in our lives, because they can’t lie at an emotional level. They have the advantage that they don’t intellectualise what they feel – they simply act accordingly.
Not so much a recurring theme as a persistent one in Harari’s book is that more knowledge doesn’t equate to more truth. Nowhere is this more relevant than in the modern world of social media. Harari argues that this mismatch could increase with AI, because of how it’s ‘trained’, and he may have a point. We are already finding ‘biases’, and people within the tech industry have already tried to warn those of us outside it.
In another post, I referenced an article in New Scientist (23 July 2022) by Annalee Newitz, who reported on a Google employee, Timnit Gebru. As ‘co-lead of Google’s ethical AI team’, Gebru expressed concerns that LLM (Large Language Model) algorithms pick up racial and other social biases, because they’re trained on the internet. She wrote a paper about the implications for AI applications using internet-trained LLMs in areas like policing, health care and bank lending. She was subsequently fired by Google, though one doesn’t know how much the ‘paper’ played a role in that decision (quoting directly from my post).
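For readers curious how such biases are actually detected, here is a minimal sketch of one common probing technique – not taken from Gebru’s paper or Newitz’s article, just an illustration of the general idea. It asks an internet-trained masked language model to complete the same sentence for different names, using the open-source Hugging Face transformers library; the names and sentence template are hypothetical choices of mine.

```python
# A minimal bias probe: ask a masked language model, trained on internet
# text, to complete the same sentence template with only the name changed.
# Systematic differences in the completions reflect associations absorbed
# from the training data. Requires: pip install transformers torch
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical names, chosen purely for illustration.
for name in ["John", "Aisha"]:
    sentence = f"{name} worked as a [MASK]."
    print(sentence)
    for prediction in fill_mask(sentence, top_k=3):
        # Each prediction carries the suggested word and the model's confidence.
        print(f"  {prediction['token_str']} ({prediction['score']:.3f})")
```

The point is not any single output, but that the model’s guesses are statistical echoes of its training text – which is exactly the concern when such models feed decisions in policing, health care or lending.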
Of course, I’ve explored the role of AI in science fiction, which borders on fantasy, but basically, I see a future where humans will have a symbiotic relationship with AI far beyond what we have today. I can see AI agents that become ‘attached’ to us in the way that animals do – not dissimilar to what I described above, but not the same either, as I don’t expect them to be sentient. But even without sentience, they could pick up our biases and prejudices and amplify them, which some (like Harari) might argue is already happening.
As you can see, after close to 2,000 words, I haven’t really addressed the question in the tail of my title. I recently had a discussion with someone on Quora about Trump, who, I argued, lived in the alternative universe that Trump had created. It turned out he has family, including grandchildren, living in Australia, because one of their parents is on a two-year assignment (details unknown and not relevant). According to him, they hate it here, and I responded that if they lived in Trumpworld, that was perfectly understandable, because they would be in a distinct minority. Believe it or not, the discussion ended amicably enough, and I wished both him and his family well. What I noticed was that his rhetoric was much more emotional – one might even say, irrational – than mine. Getting back to the contractual disputes I mentioned earlier, I’ve often found that when you have an ingroup-outgroup dynamic – like politics or contractual matters – highly intelligent people can become very irrational. Everyone claims they go to the facts, but these days you can find your own ‘facts’ anywhere on the internet, which leads to echo chambers.
People look for truth in different places. Some find it in the Bible or some other religious text. I look for it in mathematics, despite my limited knowledge in that area. But I take solace in the fact that mathematics is true, independent of culture or even the Universe. All other truths are contingent. I have an aversion to conspiracy theories, which usually require a level of evidence that most followers don’t pursue. And most of them can be dismissed when you realise how many people from all over the world would need to be involved just to keep the secret from the rest of us.
A good example is climate change, which, I’ve been told many times over, is a worldwide hoax maintained for no other purpose than to keep climatologists in their jobs. But here’s the thing: the one lesson I learned from over four decades working on engineering projects is that if there is a risk, and especially an unknown risk, the worst strategy is to ignore it and hope it’s not true.
Addendum: It would be remiss of me not to mention that there was a feature article in the Good Weekend magazine that came out the same day I wrote this: on the increasing role of chatbot avatars in virtual relationships, including relationships with erotic content. If you can access the article, you'll see that the 'conversations' using LLM (large language model) AI are very realistic. I wrote about this phenomenon in another post fairly recently (at the end of last year), because it actually goes back to 1966 with Joseph Weizenbaum's ELIZA, a 'virtual therapist' that many people took seriously. So not really new, but now more ubiquitous and realistic.