This is the latest Question of the Month from Philosophy Now (Issue 137, April/May 2020), so answers will appear in Issue 139 (Aug/Sep 2020). It just occurred to me that I may have misread the question, and the question I’ve actually answered is: how CAN we understand each other? Either way, it’s still worthy of a post, and below is what I wrote: definitely philosophical, with psychological underpinnings and political overtones. There’s a thinly veiled reference to my not-so-recent post on Plato, and the conclusion was unexpected.
This is possibly the most difficult question I’ve encountered on Question of the Month, and I’m not sure I have the answer. If there is one characteristic that defines humans, it’s that we are tribal, to the extent that it can define us. In almost every facet of our lives we create ingroups and outgroups, and it starts in childhood. Watching the so-called debates that occur in parliament (at least in Australia) can remind one of one’s childhood experiences at school. In current political discourse, if someone proposes an action or a policy, it is reflexively countered by the opposition, irrespective of its merit.
But I’ve also observed this in the workplace, working on complex engineering projects, where contractual relationships can create similar divisions, and where differences of opinion and perspective can escalate into irrational opposition that invariably leads to paralysis.
We’ve observed worldwide (at least in the West) divisions becoming stronger, reinforced by social media that is increasingly being used as a political weapon. We have situations where groups holding extreme yet strongly opposing views will both resist and subvert a compromise position proposed by the middle, which logically results in stalemate.
Staying with Australia (where I’ve lived since birth), we saw this stalemate in energy policy for over a decade. Every time a compromise was about to be reached, someone from either the left or the right would scuttle it, because they would not accept a compromise on principle.
But recently, two events occurred in Australia that changed the physical, social and political landscape. In the summer of 2019/2020, we witnessed the worst bushfire season, not only in my lifetime, but in recorded history since European settlement. And although there was some political sniping and blame-laying, all the governments, both Federal and State, deferred to the experts in wildfire and forestry management. What’s more, the whole community came together and helped out, irrespective of political and cultural differences. And then the same thing happened with the COVID-19 crisis: there was broad bipartisan agreement on formulating a response, and the medical experts were not only allowed to do their job but to dictate policy.
Plato was critical of democracies and argued for a ‘philosopher-king’. We don’t have philosopher-kings, but we have non-ideological institutions with diverse scientific and technical expertise. I would contend that ‘understanding each other’ starts with acknowledging one’s own ignorance.
I wrote a post on Louisa Gilder’s well-researched book, The Age of Entanglement, 10 years ago, when I acquired it (copyright 2008). I started rereading it after someone on Quora, with more knowledge than me, challenged the veracity of Bell’s theorem, also known as Bell’s Inequality, which fundamentally changed our perception of quantum phenomena. Gilder’s book is really all about Bell’s theorem and its consequences, whilst covering the history of quantum mechanics over most of the 20th Century, from Bohr through to Feynman and beyond.
Gilder is not a physicist, from what I can tell, yet the book is very well researched, with copious notes and references, and she garnered accolades from science publications as well as literary reviewers. Her exposition of Bell’s theorem, which she provides very early in the book, is technically correct to the best of my knowledge.
She goes to some length to explain that the resolution of Bell’s theorem is not the obvious, intuitive answer that entangled particles are like a pair of shoes separated in space and time, so that if you find the right-handed shoe you automatically know that the other one must be left-handed. This is what my interlocutor on Quora was effectively claiming. No, according to Gilder, and everything else I’ve read on this subject, Bell’s theorem is akin to finding more coincidences than one would expect by chance. The inequality works like this: if the results fall on one side of it, the intuitive scenario is correct; if they fall on the other side, the quantum world obeys rules not found in classical physics.
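To make the ‘too many coincidences’ idea concrete, here is a small sketch (my illustration, not Gilder’s) of the CHSH form of Bell’s inequality. Any ‘pair of shoes’ explanation, where the outcomes are fixed in advance, keeps the combined correlation S within ±2; quantum mechanics, with its cosine-shaped correlations, pushes it to 2√2.

```python
# A minimal sketch of the CHSH form of Bell's inequality (my illustration).
# Classical, predetermined ("pair of shoes") outcomes keep |S| <= 2.
# Quantum mechanics predicts E(a, b) = -cos(a - b) for a spin singlet,
# which reaches |S| = 2*sqrt(2) at the detector angles chosen below.
import numpy as np

def E(a, b):
    """Quantum correlation for spin measurements along angles a and b (radians)."""
    return -np.cos(a - b)

# Detector settings that maximise the quantum violation.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(f"|S| = {abs(S):.3f} (classical bound 2.000, quantum limit {2*np.sqrt(2):.3f})")
```

Real experiments consistently land on the quantum side of the bound, which is the ‘too many coincidences’ result.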
The result is called ‘non-local’, which is the opposite of ‘local’, a term with a specific meaning in QM. Local means that objects are only affected by ‘signals’ that travel at no more than the speed of light. Non-local means that objects show a connectivity that is not dependent on lightspeed communication or linkage.
It was Schrodinger who coined the term ‘entanglement’, claiming that it was the defining characteristic of QM. In his words:
I would not call that ‘one’ but rather ‘the’ characteristic trait of quantum mechanics. The one that enforces its entire departure from classical lines of thought.
I’ve also recently read an e-book called An Intuitive Approach to Quantum Field Theory by Toni Semantana (only available as an e-book, 2019), so it’s very recent. It’s very good in that Semantana obviously knows what he’s talking about, but, even though it has minimal mathematical formulae, it’s not easy to follow. Nevertheless, he covers esoteric topics like the Higgs field, gauge theories, Noether’s theorem (very erudite) and Feynman diagrams. It made me realise how little I know. Its relevance to this topic is that he doesn’t discuss entanglement at all.
Back to Gilder, and it’s obvious that you can’t discuss entanglement and locality (or non-locality) without talking about time. If I can digress, someone else on Quora provided a link to an essay by J.C.N. Smith called Time – Illusion and Reality. Smith said you won’t find a definition of time that doesn’t include clocks or things that move. In fact, I’ve come across a few people who claim that, without motion, time has no reality.
However, I have a definition that involves light. Basically, time is the separation between events as measured by light. This stems from the startling yet obvious fact that if lightspeed were not finite (i.e. if it were instantaneous) then everything would happen at once. And, because lightspeed is the same for all observers, it determines the time difference between events, even though the time measured may differ for different observers, as per Einstein’s special theory of relativity. (The spacetime interval between events, however, is the same for all observers.)
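For those who like to see it written down, the quantity all observers agree on is the spacetime interval between two events (this is standard special relativity, not my invention):

$$ (\Delta s)^2 \;=\; c^2(\Delta t)^2 \;-\; (\Delta x)^2 \;-\; (\Delta y)^2 \;-\; (\Delta z)^2 $$

Observers in different frames will disagree about Δt and Δx separately, but never about Δs; and for anything travelling at lightspeed the interval is exactly zero.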
When I was in primary school, at the impressionable age of 10 or 11, I was introduced to relativity theory, without being told that that’s what it was. Nevertheless, it had such an impact on my still-developing brain that I’ve never forgotten it. I can’t remember the context, but the teacher (Mr Hinton) told us that if you travel fast enough clocks will slow down and so will your heart. I distinctly remember trying to mentally grasp the concept, and I found that I could if time was a dimension and, as you sped up, the seconds, or whatever time was measured in, became more numerous between each heartbeat, so, by comparison, your heart slowed down. One of the other students made the comment that ‘if a plane could fly fast enough it would come back to land before it took off’. I’m unsure if that was a product of his imagination or if he’d come across it somewhere else, which was the impression he gave at the time. Then, thinking aloud, I said, ‘It’s impossible to go faster than time’, as if time and speed were interdependent. And someone near me turned, in a light-bulb moment, and said, ‘You’re right.’
My attempt at grasping the concept was flawed, but my comment was prescient. You can’t travel faster than time because you can’t travel faster than light. For a photon of light, time is zero. The link between time and light is an intrinsic feature of the Universe, and was a major revelation of Einstein’s theory of relativity.
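The standard time dilation formula makes the point: the time τ elapsed on a moving clock, compared with the time t recorded by a stationary observer, is

$$ \tau \;=\; t\,\sqrt{1 - \frac{v^2}{c^2}} $$

As v approaches c, τ approaches zero, which is why a photon experiences no time at all.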
J.C.N. Smith argues in his essay that we have the wrong definition of time by referring to local events like the rotation of the planet or its orbit about the sun, or, even more locally, the motions of a pendulum or an atomic clock. He argues that the definition of time should be the configuration of the entire universe, because at any point in time it has a unique configuration, and, even though we can’t observe it completely, it must exist.
There is a serious problem with this, because every observer of that configuration would see something completely different, even without relativistic effects. If you take the Magellanic Clouds (there are 2 of them), which you can see in the southern hemisphere with the naked eye on a cloudless, moonless night, you are looking 150,000 to 190,000 years into the past, which is roughly when Homo sapiens emerged from Africa. So an observer on a world in the Magellanic Clouds, looking at the Milky Way galaxy, would see us 150,000 to 190,000 years in the past. In other words, no two observers in the Universe could possibly see the same thing at the same time if they are far enough apart.
However, Smith is right in the sense that the age of the Universe implies that there is a universal ‘now’, which is the edge of the Big Bang (because it’s still in progress). The Cosmic Microwave Background Radiation is the earliest light we can see (from 380,000 years after the Big Bang), yet our observation of it is part of our ‘now’.
This has implications for entanglement if it’s non-local. If Freeman Dyson is correct that QM describes the future and classical physics describes the past, then the collapse or decoherence of the wave function represents ‘now’. So ‘now’ for an observer is when a photon hits your retina and you immediately see into the past, whether the photon is part of a reflection in a mirror or it comes from the Cosmic Background Radiation. It’s also the point when an entangled quantum particle (which could be a photon or something else) ‘fixes’ the outcome of its entangled partner wherever in the Universe it may be.
If entangled particles are in the future until one of them is observed, then they imply a universal now. Or does it mean that the act of observation creates a link back in time across the Universe?
John Wheeler believed that there was a possibility of a connection between an observer and the distant past across the Universe, but he wasn’t thinking of entanglement. He proposed a thought experiment involving the famous double-slit experiment, whereby one makes an observation after the particle (electron or photon) has passed through the slit but before it hits the target (where we observe the outcome). He predicted that this would change the pattern from a wave going through both slits to a particle going through one. He was later vindicated (after his death). Wheeler argued that this would imply that there is a ‘backwards-in-time’ signal or acausal connection to the source. He argued that this could equally apply to photons from a distant quasar, gravitationally lensed by an intervening galaxy.
Wheeler’s thought experiment makes sense if the wave function of the particle exists in the future until it is detected, meaning before it interacts with a classical physics object. Entanglement also becomes ‘known’ after one of the entangled particles interacts with a classical physics object. Signals into the so-called past are not so mysterious if everything is happening in the future of the ‘observer’. Even microwaves from the Cosmic Background Radiation exist in our future until we ‘detect’ them.
Einstein’s special theory of relativity tells us that simultaneity can’t be objectively determined, which seems to contradict the non-locality of entanglement according to Bell’s theorem. According to Einstein, ‘now’ is subjective, dependent on the observer’s frame of reference. This implies that someone’s future could be another person’s past, which has implications for causality. No matter where an observer is in the Universe, everywhere they look is in their past. Now, as I explained earlier, their past may be different to your past but, because all observations are dependent on electromagnetic radiation, everything they ‘see’ has already happened.
The exception is the event horizon of a black hole. According to Viktor T Toth (a regular contributor to Quora), the event horizon is always in your future. This creates a paradox, because it is believed you could cross an event horizon without knowing it. On the other hand, an external observer would see you frozen in time. Kip Thorne argues there is no matter in a black hole, only warped spacetime. Most significantly, once you pass the event horizon, space effectively becomes uni-directional like time – you can’t go backwards the way you came.
As Toth has pointed out a number of times, Einstein’s theory of gravity (the general theory of relativity) is mathematically a geometrical theory. Toth also points out that ‘We can do quantum field theory just fine on the curved spacetime background of general relativity.’ Another contributor, Terry Bollinger, explains why general relativity is not quantum:
GR is a purely geometric theory, which in turn means that the gravity force that it describes is also specified purely in terms of geometry. There are no particles in gravity itself, and in fact nothing even slightly quantum.
In effect, Bollinger argues that quantum phenomena ‘sit’ on top of general relativity. I contend that gravity ultimately determines the rate of time, and QM uses a ‘clock’ that exists outside of Hilbert space where QM ‘sits’ (according to Roger Penrose, as well as Anil Ananthaswamy, who writes for New Scientist).
So what happens inside a black hole, which requires a theory of quantum gravity? As Freeman Dyson observed, no one can get inside a black hole to report or perform an experiment. But, if it’s always in one’s future, then maybe quantum gravity has no time. John Wheeler and Bryce DeWitt famously attempted to formulate Einstein’s theory of general relativity (gravity) in the same form as electromagnetism, and time (denoted as t) simply disappeared. And as Paul Davies pointed out in The Goldilocks Enigma, in quantum cosmology (as per the Wheeler-DeWitt equation), time vanishes. But, if quantum cosmology is attempting to describe the future, then maybe one should expect time to disappear.
Another thought experiment: if you take one of a pair of entangled particles to the other side of the visible universe (which would take something like the age of the Universe), the pair still ‘link’ or ‘connect’ instantly and non-locally, yet separating them can only happen at less than lightspeed. So you won’t achieve instantaneous transmission, even in principle, because you have to wait until the entangled ‘partner’ arrives at its destination. Or, as explained in the video below, the ‘correlation’ can only be checked via classical physics.
Addendum: This is the best explanation of QM entanglement and Bell's Theorem (for laypeople) that I've seen:
I just read a very interesting true story about an android built in the early 2000s based on the renowned sci-fi author, Philip K Dick, both in personality and physical appearance. It was displayed in public at a few prominent events where it interacted with the public in 2005, then was lost on a flight between Dallas and Las Vegas in 2006, and has never been seen since. The book is called Lost In Transit; The Strange Story of the Philip K Dick Android by David F Duffy.
You have to read the back cover to know it’s non-fiction, published by Melbourne University Press in 2011, so, surprisingly, a local publication. I bought it from my local bookstore at a 30% discount as they were closing down for good. They were planning to close by Good Friday, but the COVID-19 pandemic forced them to close a good 2 weeks earlier, and I acquired it at the 11th hour, looking for anything I might find interesting.
To quote the back cover:
David F Duffy was a postdoctoral fellow at the University of Memphis at the time the android was being developed... David completed a psychology degree with honours at the University of Newcastle [Australia] and a PhD in psychology at Macquarie University, before his fellowship at the University of Memphis, Tennessee. He returned to Australia in 2007 and lives in Canberra with his wife and son.
The book is written chronologically and is based on extensive interviews with the team of scientists involved, as well as Duffy’s own personal interaction with the android. He had an insider’s perspective as a cognitive psychologist who had access to members of the team while the project was active. Like everyone else involved, he is a bit of a sci-fi nerd with a particular affinity and knowledge of the works of Philip K Dick.
My specific interest is in the technical development of the android and how its creators attempted to simulate human intelligence. As a cognitive psychologist, with professionally respected access to the team, Duffy is well placed to provide some esoteric knowledge to an interested bystander like myself.
There were effectively 2 people responsible (or 2 team leaders), David Hanson and Andrew Olney, who were brought together by Professor Art Graesser, head of the Institute of Intelligent Systems, a research lab in the psychology building at the University of Memphis (hence the connection with the author).
Hanson is actually an artist, and his specialty was building ‘heads’ with humanlike features and humanlike abilities to express facial emotions. His heads included mini-motors that pulled on a ‘skin’, which could mimic a range of facial movements, including talking.
Olney developed the ‘brains’ of the android that actually resided on a laptop and was connected by wires going into the back of the android’s head. Hanson’s objective was to make an android head that was so humanlike that people would interact with it on an emotional and intellectual level. For him, the goal was to achieve ‘empathy’. He had made at least 2 heads before the Philip K Dick project.
Even though the project got the ‘blessing’ of Dick’s daughters, Laura and Isa, and access to an inordinate amount of material, including transcripts of extensive interviews, they had mixed feelings about the end result, and, tellingly, they were ‘relieved’ when the head disappeared. It suggests that it’s not the way they wanted him to be remembered.
In a chapter called Life Inside a Laptop, Duffy gives a potted history of AI, specifically in relation to the Turing test, which challenges someone to distinguish an AI from a human. He also explains the 3 levels of processing that were used to create the android’s ‘brain’. The first level was what Olney called ‘canned’ answers, which were pre-recorded answers to obvious questions and interactions, like ‘Hi’, ‘What’s your name?’, ‘What are you?’ and so on. Another level was ‘Latent Semantic Analysis’ (LSA), which was originally developed in a lab in Colorado with close ties to Graesser’s lab in Memphis, and was the basis of Graesser’s pet project, ‘AutoTutor’, with Olney as its ‘chief programmer’. AutoTutor was an AI designed to answer technical questions as a ‘tutor’ for students in subjects like physics.
To create the Philip K Dick database, Olney downloaded the whole of Dick’s opus, plus a vast collection of transcribed interviews from later in his life. The author conjectures that ‘There is probably more dialogue in print of interviews with Philip K Dick than any other person, alive or dead.’
The third layer ‘broke the input (the interlocutor’s side of the dialogue) into sections and looked for fragments in the dialogue database that seemed relevant’ (to paraphrase Duffy). Duffy gives a cursory explanation of how LSA works – a mathematical matrix using vector algebra – that’s probably a little too esoteric for the content of this post.
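To give a flavour of how such a retrieval scheme might work, here is a toy sketch of my own (emphatically not Olney’s actual code, and the ‘fragments’ are hypothetical stand-ins): dialogue fragments become vectors in a term-document matrix, the matrix is reduced with SVD, and the fragment closest to the incoming question (by cosine similarity) is returned.

```python
# Toy LSA-style retrieval: my illustration, not the actual android software.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for the database of transcribed interviews.
fragments = [
    "Reality is that which, when you stop believing in it, does not go away.",
    "I am interested in what a human being is, and what an android is.",
    "The empire never ended; time is not what we think it is.",
    "Androids dream, perhaps, of being human.",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(fragments)          # term-document matrix
svd = TruncatedSVD(n_components=2, random_state=0)
X_lsa = svd.fit_transform(X)                     # reduced 'semantic' space

def respond(question: str) -> str:
    """Return the stored fragment most semantically similar to the question."""
    q = svd.transform(vectorizer.transform([question]))
    scores = cosine_similarity(q, X_lsa)[0]
    return fragments[int(np.argmax(scores))]

print(respond("What do you think an android really is?"))
```

The real system was obviously far more sophisticated, but the principle is the same: the android isn’t composing an answer, it’s retrieving and stitching together fragments that sit near the question in a vector space.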
In practice, this search-and-synthesise approach could create a self-referencing loop, where the android would endlessly riff on a subject, going off on tangents that sounded cogent but never stopped. To overcome this, Olney developed a ‘kill switch’ that removed the ‘buffer’ he could see building up on his laptop. At one display at ComicCon (July 2005), as part of the promotion for A Scanner Darkly (a rotoscope movie by Richard Linklater, starring Keanu Reeves), Hanson had to present the android without Olney, and he couldn’t get the kill switch to work, so Hanson stopped the audio, with the mouth still working, and asked for the next question. The android simply continued with its monolithic monologue, which had no relevance to any question at all. I think it was its last public appearance before it was lost. Dick’s daughters, Laura and Isa, were in the audience and they were not impressed.
It’s a very informative and insightful book, presented like a documentary without video, capturing a very quirky, unique and intellectually curious project. There is a lot of discussion about whether we can produce an AI that can truly mimic human intelligence. For me, the pertinent word in that phrase is ‘mimic’, because I believe that’s the best we can do, as opposed to having an AI that actually ‘thinks’ like a human.
In many parts of the book, Duffy compares what Graesser’s team is trying to do with LSA with how we learn language as children, where we create a memory store of words, phrases and stock responses, based on our interaction with others and the world at large. It’s a personal prejudice of mine, but I think that words and phrases have a ‘meaning’ to us that an AI can never capture.
I’ve contended before that language for humans is like ‘software’, in that it is ‘downloaded’ from generation to generation. I believe that this is unique to the human species, and it goes further than communication, which is its obvious genesis. It’s what we literally think in. The human brain can connect and manipulate concepts in all sorts of contexts that go far beyond the simple need to tell someone what you want them to do in a given situation, or ask what they did with their time the day before or last year or whenever. We can relate concepts that have a spiritual connection, or are mathematical, or are stories. In other words, we can converse on topics that relate not just to physical objects, but are products of pure imagination.
Any android follows a set of algorithms that are designed to respond to human-generated dialogue, but, despite appearances, the android has no idea what it’s talking about. Some of the sample dialogue that Duffy presented in his book drifted into gibberish as far as I could tell, and that didn’t surprise me.
I’ve explored the idea of a very advanced AI in my own fiction, where ‘he’ became a prominent character in the narrative. But he (yes, I gave him a gender) was often restrained by rules. He can converse on virtually any topic because he has a Google-like database and he makes logical sense of someone’s vocalisations. If they are not logical, he’s quick to point it out. I play cognitive games with him and his main interlocutor because they have a symbiotic relationship. They spend so much time together that they develop a psychological interdependence that’s central to the narrative. It’s fiction, but even in my fiction I see a subtle difference: he thinks and talks so well, he almost passes for human, but he is a piece of software that can make logical deductions based on inputs and past experiences. Of course, we do that as well, and we do it so well it separates us from other species. But we also have empathy, not only with other humans, but other species. Even in my fiction, the AI doesn’t display empathy, though he’s been programmed to be ‘loyal’.
Duffy also talks about the ‘uncanny valley’, which I’ve discussed before. Apparently, Hanson believed it was a ‘myth’ and that there was no scientific data to support it. Duffy appears to agree. But according to a New Scientist article I read in Jan 2013 (by Joe Kloc, a New York correspondent), MRI studies tell another story. Neuroscientists believe the phenomenon is real and is caused by a cognitive dissonance between 3 types of empathy: cognitive, motor and emotional. Apparently, it’s emotional empathy that breaks the spell of suspended disbelief.
Hanson claims that he never saw evidence of the ‘uncanny valley’ with any of his androids. On YouTube you can watch a celebrity android called Sophia, and I didn’t see any evidence of the phenomenon with her either. But I think the reason is that none of these androids appear human enough to evoke the response. The uncanny valley is a sense of unease and disassociation we would feel because it’s unnatural; similar to seeing a ghost: a human in all respects except actually being flesh and blood.
I expect that, as androids like the Philip K Dick simulation and Sophia become more commonplace, the sense of ‘unnaturalness’ will dissipate: a natural consequence of habituation. Androids in movies don’t have this effect, but then a story is already a medium of suspended disbelief.
I’ve talked before about the apparent divide between arts and humanities, and science and technology. Someone once called me a polymath, but I don’t think I’m expert enough in any field to qualify. However, I will admit that, for most of my life, I’ve had a foot in both camps, to use a well-worn metaphor. At the risk of being self-indulgent, I’m going to discuss this dichotomy in reference to my own experiences.
I’ve worked in the engineering/construction industry most of my adult life, yet I have no technical expertise there either. Mostly, I worked as a planning and cost control engineer, which is a niche activity that I found I was good at. It also meant I got to work with accountants and lawyers as well as engineers of all disciplines, along with architects.
The reason I bring this up is because planning is all about logic – in fact, that’s really all it is. At its most basic, it’s a series of steps, some of which are sequential and some in parallel. I started doing this before computers did a lot of the work for you. But even with computers, you have to provide the logic; so if you can’t do that, you can’t do professional planning. I make that distinction because it was literally my profession.
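To illustrate what I mean by ‘providing the logic’, here is a minimal sketch (with made-up tasks and durations) of the forward pass a planner effectively performs: each task’s earliest finish is its duration plus the latest finish of its predecessors, whether those run in sequence or in parallel.

```python
# A minimal sketch of planning logic (hypothetical tasks and durations).
tasks = {
    # name: (duration_in_days, [predecessor tasks])
    "design":     (10, []),
    "procure":    (15, ["design"]),
    "site_prep":  (5,  ["design"]),          # runs in parallel with procurement
    "construct":  (20, ["procure", "site_prep"]),
    "commission": (4,  ["construct"]),
}

earliest_finish = {}

def finish(name):
    """Earliest finish = duration + latest finish of all predecessors."""
    if name not in earliest_finish:
        duration, preds = tasks[name]
        earliest_finish[name] = duration + max((finish(p) for p in preds), default=0)
    return earliest_finish[name]

project_duration = max(finish(t) for t in tasks)
print(f"Project duration: {project_duration} days")   # 10 + 15 + 20 + 4 = 49
```

The software does the arithmetic, but the dependencies themselves, the logic, still have to come from the planner.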
In my leisure time, I write stories and that also requires a certain amount of planning, and I’ve found there are similarities, especially when you have multiple plot lines that interweave throughout the story. For me, plotting is the hardest part of storytelling; it’s a sequential process of solving puzzles. And science is also about solving puzzles, all of which are beyond my abilities, yet I love to try and understand them, especially the ones that defy our intuitive sense of logic. But science is on a different level to both my professional activities and my storytelling. I dabble at the fringes, taking ideas from people much cleverer than me and creating a philosophical pastiche.
Someone on Quora (a student) commented once that studying physics exercised his analytical skills, which he then adapted to other areas of his life. It occurred to me that I have an analytical mind and that is why I took an interest in physics rather than the other way round. Certainly, my work required an analytical approach and I believe I also take an analytical approach to philosophy. In fact, I’ve argued previously that analysis is what separates philosophy from dogma. Anyway, I don’t think it’s unusual for us, as individuals, to take a skill set from one activity and apply it to another apparently unrelated one.
I wrote a post once about the 3 essential writing skills: character development, evoking emotion and creating narrative tension. The key to all of these is character, and, if one were to distil out the most essential skill of all, it would be writing believable dialogue, as if it were spontaneous, meaning unpremeditated, yet not boring or irrelevant to the story. I’m not at all sure it can be taught. Someone (Don Burrows) once said that jazz can’t be taught, because it’s improvisation by its very nature, and I’d argue the same applies to writing dialogue. I’ve always felt that writing fiction has more in common with musical composition than with writing non-fiction. In both cases they can come unbidden into one’s mind, sometimes when one is asleep, and they’re both essentially emotive mediums.
But science too has its moments of creativity, indeed sheer genius; being a combination of sometimes arduous analysis and inspired intuition.
A writer can get attached to characters, and (speaking for myself) it tends to sneak up on you, meaning they are not necessarily the characters you expect to affect you.
All writers who get past the ego phase will tell you the characters feel like they exist separately from them. By the ego phase, I mean you’ve learned how to keep yourself out of the story, though you may suffer lapses; the best fiction is definitely not about you.
People will tell you to base characters and events on your own experience, or otherwise to base characters on people you know. I expect some writers might do that, and I’ve even seen advice, if writing a screenplay, to imagine an actor you’ve seen playing the role. If I find myself doing that, then I know I’ve lost the plot, literally rather than figuratively.
I borrow names from people I’ve known, but the characters don’t resemble them at all, except in ethnicity. For example, if I have an Indian character, I will use the Indian name of someone I knew. A name is not unique: we all know more than one John, for example, and we also know they may have nothing in common.
I worked with someone once, who had a very unusual name, Essayas Alfa, and I used both his names in the same story. Neither character was anything like the guy I knew, except the character called Essayas was African and so was my co-worker, but one was a sociopath and the other was a really nice bloke. A lot of names I make up, including all the Kiri names, and even Elvene. I was surprised to learn it was a real name; at least, I got the gender right.
The first female character I ever created, when I was learning my craft, was based on someone I knew, though they had little in common, except their age. It was like I was using her as an actor for the role. I’ve never done that since. A lot of my main characters are female, which is a bit of a gamble, I admit. Creating Elvene was liberating and I’ve never looked back.
If you have dreams occupied by strangers, then characters in fiction are no different. You can’t explain it if you haven’t experienced it. So how can you get attached to a character who is a figment of your mind? Well, not necessarily in the way you think: it’s not an infatuation. I can’t imagine falling in love with a character I created, though I can imagine falling in love with an actor playing that character, because she’s no longer mine (assuming the character is female).
And I’ve got attached to male characters as well. These are the characters who have surprised me. They’ve risen above themselves, achieved something that I never expected them to. They weren’t meant to be the hero of the story, yet they excel themselves, often by making a sacrifice. They go outside their comfort zone, as we like to say, and become courageous, not by overcoming an adversary but by overcoming a fear. And then I feel like I owe them, as illogical as that sounds, because of what I put them through. They are my secret hero of the story.
I’ve said this before, but it’s worth repeating: no one totally agrees with everything said by someone else. In fact, each of us changes our views as we learn and progress and become exposed to new ideas. It’s okay to cherry-pick. In fact, it’s normal. All the giants in science and mathematics and literature and philosophy borrowed from, and built on, the giants who went before them.
I’ve been reading about Plato in A.C. Grayling’s extensive chapter on him and his monumental status in Western philosophy (The History of Philosophy). According to Grayling, Plato was critical of his own ideas: his later writings challenged some of the tenets of his earlier writings. Plato is a seminal figure in Western culture; his famous Academy ran for almost 800 years, before the Christian Roman Emperor, Justinian, closed it down in 529 CE, because he considered it pagan. One should remember that it was in Roman-ruled Alexandria, in 415, that Hypatia was killed by a Christian mob, which many believe foreshadowed the so-called ‘Dark Ages’.
Hypatia had good relations with the Roman Prefect of her time, and even had correspondence with a Bishop (Synesius of Cyrene), who clearly respected, even adored her, as her former student. I’ve read the transcript of some of his letters, care of Michael Deakin’s scholarly biography. Deakin is Honorary Research Fellow at the School of Mathematical Sciences of Monash University (Melbourne, Australia). Hypatia also taught a Neo-Platonist philosophy, including the works of Euclid, a former Librarian of Alexandria. On the other hand, the Bishop who is historically held responsible for her death (Cyril) was canonised. It’s generally believed that her death was a ‘surrogate’ attack on the Prefect.
Returning to my theme, the Academy of course changed and evolved under various leaders, which led to what’s called Neoplatonism. It’s worth noting that both Augustine and, later, Aquinas were influenced by Neoplatonism, because Plato’s perfect world of ‘forms’ and his belief in an immaterial soul lend themselves to Christian concepts of Heaven and life after death.
But I would argue that the unique Western tradition that combines science, mathematics and epistemology into a unifying discipline called physics has its origins in Plato’s Academy. Plato specified a knowledge of mathematics as a prerequisite for entering the Academy. The one remnant of Plato’s philosophy which stubbornly resists being relegated to history as an anachronism is mathematical Platonism, though it probably means something different from Plato’s original concept of ‘forms’.
In modern parlance, mathematical Platonism means that mathematics has an existence independent of the human mind, and even of the Universe. To quote Richard Feynman (who wasn’t a Platonist) from his book The Character of Physical Law, in the chapter titled The Relation of Mathematics to Physics:
...what turns out to be true is that the more we investigate, the more laws we find, and the deeper we penetrate nature, the more this disease persists. Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics... Why? I have not the slightest idea. It is only my purpose to tell you about this fact.
The ’disease’ he’s referring to and the ‘fact’ he can’t explain is best expressed in his own words:
The strange thing about physics is that for the fundamental laws we still need mathematics.
To put this into context, he argues that when you take a physical phenomenon that you describe mathematically, like the collision between billiard balls, the fundaments are not numbers or formulae but the actual billiard balls themselves (my mundane example, not his). But when it comes to fundaments of fundamental laws, like the wave function in Schrodinger’s equation (again, my example), the fundaments remain mathematical and not physical objects per se.
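To make the point concrete (my example again, not Feynman’s), the time-dependent Schrodinger equation is

$$ i\hbar\,\frac{\partial \Psi}{\partial t} \;=\; -\frac{\hbar^2}{2m}\nabla^2\Psi \;+\; V\Psi $$

and its ‘fundament’, the wave function Ψ, is a purely mathematical object; unlike billiard balls, there is nothing physical you can point to behind the symbols.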
In his conclusion, towards the end of a lengthy chapter, he says:
Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.
I’m not aware of any physicist who would disagree with that last statement, but there is strong disagreement over whether mathematical language is simply the only language with which to describe nature, or whether it is somehow intrinsic to nature. Mathematical Platonism unequivocally holds the latter.
Grayling’s account of Plato says almost nothing about the mathematical and science aspect of his legacy. On the other hand, he contends that Plato formulated and attempted to address three pertinent questions:
What is the right kind of life, and the best kind of society? What is knowledge and how do we get it? What is the fundamental nature of reality?
In the next paragraph he puts these questions into perspective for Western culture.
Almost the whole of philosophy consists in approaches to the related set of questions addressed by Plato.
Grayling argues that the questions need to be addressed in reverse order. To some extent, I’ve already addressed the last two. Knowledge of the natural world has become increasingly dependent on a knowledge of mathematics. Grayling doesn’t mention that Plato based his Academy on Pythagoras’s quadrivium (arithmetic, geometry, astronomy and music), after Plato deliberately sought out the Pythagorean Archytas of Tarentum. Pythagoras is remembered for contending that ‘all is number’, though his ideas were more religiously motivated than scientific.
But the first question is the one that was taken up by subsequent philosophers, including his most famous student, Aristotle, who arguably had a greater and longer lasting influence on Western thought than his teacher. But Aristotle is a whole other chapter in Grayling’s book, as you’d expect, so I’ll stick to Plato.
Plato argued for an ‘aristocracy’: a government run by a ‘philosopher-king’, based on merit rather than hereditary rulership. In fact, if one goes into the details, he effectively argued for leadership on a eugenics basis, where prospective leaders were selected from early childhood and educated to rule.
Plato was famously critical of democracy (in his time) because it was instrumental in the execution of his friend and mentor, Socrates. Plato predicted that democracy led to either anarchy or the rule of the wealthy over the poor. In the case of anarchy, a strongman would logically take over and you'd have 'tyranny', which is the worst form of government (according to Plato). The former (anarchy) is what we’ve recently witnessed in so-called 'Arab spring' uprisings.
The latter (rule by the wealthy) is what has arguably occurred in America, where lobbying by corporate interests increasingly shapes policies. This is happening in other ‘democracies’, including Australia. To give an example, our so-called ‘water policy’ is driven by prioritising the sale of ‘water rights’ to overseas investors over ecological and community needs; despite Australia being the driest continent in the world (after Antarctica). Keeping people employed is the mantra of all parties. In other words, as long as the populace is gainfully employed, earning money and servicing the economy, policy deliberations don’t need to take them into account.
As Clive James once pointed out, democracy is the exception, not the norm. Democracies in the modern world have evolved from a feudalistic model, predominant in Europe up to the industrial revolution, when social engineering ideologies like fascism and communism took over from monarchism. It arguably took 2 world wars before we gave up traditional colonial exploitation, and now we have exploitation of a different kind, which is run by corporations rather than nations.
I acknowledge that democracy is the best model for government that we have, but those of us lucky enough to live in one tend to take it for granted. In Athens, the original democracy (in Plato’s time), which was only open to males and excluded slaves, there was a broad separation between the aristocracy and the people who provided all the goods and services, including the army. One can see parallels to today’s world, where the aristocracy have been replaced by corporate leaders, and the interdependence and political friction between these broad categories remain. In the Athenian Assembly (according to the historian Philip Matyszak), if you weren’t an expert in the field you pontificated on, like shipbuilding (his example), you were generally given short shrift.
I sometimes think that this is the missing link in today’s governance, which has been further eroded by social media. There are experts in today’s world on topics like climate change and species extinction and water conservation (to provide a parochial example) but they are often ignored or sidelined or censored. As recently as a couple of decades ago, scientists at CSIRO (Australia’s internationally renowned, scientific research organisation) were censored from talking about climate change, because they were bound by their conditions of employment not to publicly comment on political issues. And climate change was deemed a political issue, not a scientific one, by the then Cabinet, who were predominantly climate change deniers (including the incumbent PM).
In contrast, the recent bush fire crisis and the current COVID-19 crisis have seen government bodies, at both the Federal and State level, defer to expertise in their relevant fields. To return to my opening paragraph, I think we can cherry-pick some of Plato’s ideas in the context of a modern democracy. I would like to see governments focus more on expertise and long-term planning beyond a government’s term in office. We can’t have ‘philosopher kings’, but we do have ‘elite’ research institutions that can work with private industries in creating more eco-friendly policies that aren’t necessarily governed by the sole criterion of increasing GDP in the short term. I would like to see more bipartisanship rather than a reflex opposition to every idea that is proposed, irrespective of its merits.