Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Sunday 10 July 2022

Creative and analytic thinking

I recently completed an online course with a similar title, How to Think Critically and Creatively. It must be the 8th or 9th course I’ve done through New Scientist, on a variety of topics, from cosmology and quantum mechanics to immunology and sustainable living. I started doing them during COVID, as they helped pass the time while stimulating the brain.
 
All these courses rely on experts in their relevant fields from various parts of the globe, so not just UK based, as you might expect. This course was no exception, though it had just 2 experts, both from America. Denise D Cummins is described as a ‘cognitive scientist, author and elected Fellow of the Association for Psychological Science, and she’s held faculty at Yale, UC, University of Illinois and the Centre of Adaptive Behaviours at the Max Planck Institute in Berlin’. Gerard J Puccio is ‘Department Chair and Professor at the International Centre for Studies on Creativity, Buffalo State; a unique academic department that offers the world’s only Master of Science degree in creativity’.
 
I admit to being sceptical that ‘creativity’ can be taught, but that depends on what one means by creativity. If creativity means using your imagination, then yes, I think it can, because imagination is something we all have, and it’s fair to say we don’t make enough use of it in our everyday lives. If creativity means artistic endeavour, then I think that’s another topic, even though it puts imagination centre stage, so to speak.
 
I grew up in a family where one side was obviously artistic and the other side wasn’t, which strongly suggests there’s a genetic component. The other side excelled at sport, and I was rubbish at sport. However, both sides were obviously intelligent, despite a notable lack of formal education; in my parents’ case, both left school in their early teens. In fact, my mother did most of her schooling by correspondence, and my father left school in the midst of the Great Depression, followed shortly afterwards by active duty in WW2.
 
Puccio (mentioned above) argues that creativity isn’t taught in our education system because it’s too hard. Instead, he says that we teach by memorising facts and by ‘understanding’ problems. I would suggest that there is a hierarchy, where you need some basics before you can ‘graduate’ to ‘creative thinking’, and I use the term here in the way he intends it. I spent most of my working lifetime on engineering projects, with diverse and often complex elements. I need to point out that I wasn’t one of the technical experts involved, but I worked with them, in all their variety, because my job was to effectively co-ordinate all their activities towards a common goal, by providing a plan and then keeping it on the rails.
 
Engineering is all about problem solving, and I’m not sure one can do that without being creative, as well as analytical. In fact, one could argue that there is a dialectical relationship between them, but maybe I’m getting ahead of myself.
 
Back to Puccio, who introduced 2 terms I hadn’t come across before: ‘divergent’ and ‘convergent’ thinking, arguing they should be done in that order. In a nutshell, divergent thinking is brainstorming where one thinks up as many options as possible, and convergent thinking is where one narrows in on the best solution. He argues that we tend to do the second one without doing the first one. But this is related to something else that was raised in the course, which is ‘Type 1 thinking’ and ‘Type 2 thinking’.
 
Type 1 thinking is what most of us would call ‘intuition’, because basically it’s taking a cognitive shortcut to arrive at an answer to a problem, which we all do all the time, especially when time is at a premium. Type 2 thinking is when we analyse the problem, which is not only time consuming but takes up brain resources that we’d prefer not to use, because we’re basically lazy, and I’m no exception. These 2 cognitive behaviours are clinically established, so it’s not pop-science.
 
However, something that was not discussed in the course, is that type 2 thinking can become type 1 thinking when we develop expertise in something, like learning a musical instrument, or writing a story, or designing a building. In other words, we develop heuristics based on our experience, which is why we sometimes jump to convergent thinking without going through the divergent part.
 
The course also dealt with ‘critical thinking’, as per its title, but I won’t dwell on that, because critical thinking arises from being analytical, and separating true expertise from bogus expertise, which is really a separate topic.
 
How does one teach these skills? I’m not a teacher, so I’m probably not best qualified to say. But I have a lot of experience in a profession that requires analytical thinking and problem-solving as part of its job description. The one thing I’ve learned from my professional life is that the more I’m constrained by ‘rules’, the worse job I’ll do. I require the freedom and trust to do things my own way, and I can’t really explain that, but it’s also what I provide to others. And maybe that’s what people mean by ‘creative thinking’: we break the rules.
 
Artistic endeavour is something different again, because it requires spontaneity. But there is ‘divergent thinking’ involved, as Puccio pointed out, giving the example of Hemingway writing countless endings to A Farewell to Arms before settling on the final version. I’m reminded of the reported difference between Beethoven and Mozart, two of the greatest composers in the history of Western classical music. Beethoven would try many different versions of something (in his head and on paper) before choosing what he considered the best. He was extraordinarily prolific, yet he wrote only 9 symphonies and 5 piano concertos plus one violin concerto, because he workshopped them to death. Mozart, on the other hand, apparently wrote down whatever came into his head and hardly revised it. One was very analytical in his approach; the other, almost completely spontaneous.
 
I write stories and the one area where I’ve changed type 2 thinking into type 1 thinking is in creating characters – I hardly give it a thought. A character comes into my head almost fully formed, as if I just met them in the street. Over time I learn more about them and they sometimes surprise me, which is always a good thing. I once compared writing dialogue to playing jazz, because they both require spontaneity and extemporisation. Don Burrows once said you can’t teach someone to play jazz, and I’ve argued that you can’t teach someone to write dialogue.
 
Having said that, I once taught a creative writing class, and I gave the class exercises where they were forced to write dialogue, without telling them that that was the point of the exercise. In other words, I got them to teach themselves.
 
The hard part of storytelling for me is the plot, because it’s a never-ending exercise in problem-solving. How did I get back to here? Analytical thinking is very hard to avoid, at least for me.
 
As I mentioned earlier, I think there is a dialectic between analytical thinking and creativity, and the best examples are not artists but geniuses in physics. To look at just two: Einstein and Schrodinger, because they exemplify both. But what came first: the analysis or the creativity? Well, I’m not sure it matters, because they couldn’t have done one without the other. Einstein had an epiphany (one of many) where he realised that an object in free fall didn’t experience a force, which apparently contradicted Newton. Was that analysis or creativity or both? Anyway, he not only changed how we think about gravity, he changed the way we think about the entire cosmos.
 
Schrodinger borrowed an idea from de Broglie that particles could behave like waves, and changed how we think about quantum mechanics. As Richard Feynman once said, ‘No one knows where Schrodinger’s equation comes from. It came out of Schrodinger’s head. You can’t derive it from anything we know.’
 

Saturday 11 June 2022

Does the "unreasonable effectiveness of Mathematics" suggest we are in a simulation?

This was a question on Quora, and I provided 2 responses: one a comment on someone else’s post (someone I follow), and the other my own answer.

Some years ago, I wrote a post on this topic, but this is a different perspective, or 2 different perspectives. Also, in the last year, I saw a talk given by David Chalmers on the effects of virtual reality. He pointed out that when we’re in a virtual reality using a visor, we trick our brains into treating it as if it’s real. I don’t find this surprising, though I’ve never had the experience. As a sci-fi writer, I’ve imagined future theme parks that were fully immersive simulations. But I don’t believe that provides an argument that we live in a simulation, for reasons I give in my Quora responses, below.

 

Comment:

 

Actually, we create a ‘simulacrum’ of the ‘observable’ world in our heads, which is different to what other species might have. For example, most birds have 300 degree vision, plus they see the world in slow motion compared to us.

 

And this simulacrum is so fantastic it actually ‘feels’ like it exists outside your head. How good is that? 

 

But here’s the thing: in all these cases (including other species) that simulacrum must have a certain degree of faithfulness to ‘reality’, because we interact with it on a daily basis, and, guess what? It can kill you.

 

There is a solipsistic version of this, which happens when we dream, though it won’t kill you, as far as we can tell, because we usually wake up.

 

Maybe I should write this as a separate answer.

 

And I did:

 

One-word answer: No.

 

But having said that, there are 2 parts to this question, the first part being the quote in the title of Eugene Wigner’s famous essay. But I prefer this quote from the essay itself, because it succinctly captures what the essay is all about.

 

It is difficult to avoid the impression that a miracle confronts us here… or the two miracles of the existence of laws of nature and of the human mind’s capacity to divine them.

 

This should be read in conjunction with another famous quote; this time from Einstein:

 

The most incomprehensible thing about the Universe is that it’s comprehensible.

 

And it’s comprehensible because its laws can be rendered in the language of mathematics and humans have the unique ability (at least on Earth) to comprehend that language even though it appears to be neverending.

 

And this leads into the philosophical debate going as far back as Plato and Aristotle: is mathematics invented or discovered?

 

The answer to that question is dependent on how you look at mathematics. Cosmologist and Fellow of the Royal Society, John Barrow, wrote a very good book on this very topic, called Pi in the Sky. In it, he makes the pertinent point that mathematics is not so much about numbers as the relationships between numbers. He goes further and observes that once you make this leap of cognitive insight, a whole new world opens up.

 

But here’s the thing: we have invented a system of numbers, most commonly base 10 (though there are other systems as well), along with specific operators and notations, that provides a language to describe and mentally manipulate these relationships. But the relationships themselves are not created by us: they become manifest in our explorations. To give an extremely basic example: prime numbers. You cannot create a prime number; primes simply exist, and you can’t change a prime into a non-prime or vice versa. And this really is basic, because primes are called the atoms of mathematics: all the other ‘natural’ numbers (greater than 1) can be derived from them as unique products of primes.
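To make the ‘atoms’ metaphor concrete, here’s a minimal sketch (my own illustration, not anything from Barrow’s book) showing that a number’s prime factorisation is discovered rather than invented: everyone who runs it gets the same primes.

```python
# A toy factoriser: every integer > 1 decomposes into primes, and the
# decomposition is unique (the fundamental theorem of arithmetic).
def prime_factors(n: int) -> list[int]:
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime as often as it fits
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))   # [2, 2, 2, 3, 3, 5] -- the same for everyone
```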

 

An interest in the stars started early among humans, and eventually some very bright people, most notably Kepler and Newton, came to realise that the movement of the planets could be described very precisely by mathematics. And then Einstein, using Riemann geometry, vectors, calculus, matrices and the Lorentz transformation, was able to describe the planetary motions even more accurately, and to provide very accurate models of the entire observable universe; though recently we’ve come up against the limits of this, and we now need new theories and possibly new mathematics.


But there is something else that Einstein’s theories don’t tell us, which is that the planetary orbits are chaotic; that means they are unpredictable, and eventually they could actually unravel. But here’s another thing: to calculate chaotic phenomena exactly requires a computation to infinite decimal places. Therefore I contend the Universe can’t be a computer simulation. So that’s the long version of NO.
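As an aside (my own sketch, not part of the Quora answer): the simplest chaotic system, the doubling map, shows why exact prediction demands infinite precision. Each step consumes one binary digit of the initial condition, so two starting points that agree to 30 binary digits part company after roughly 30 steps.

```python
# Doubling map x -> 2x mod 1: the binary expansion of x shifts left one
# digit per step, so any finite-precision initial condition is eventually
# exhausted and the 'prediction' becomes meaningless.
x, y = 0.3, 0.3 + 2**-30     # initial conditions differing by ~1e-9

for step in range(1, 41):
    x = (2 * x) % 1.0
    y = (2 * y) % 1.0
    if step % 10 == 0:
        print(step, abs(x - y))   # the gap roughly doubles every step
```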

 

 

Footnote: Both my comment and my answer were ‘upvoted’ by Eric Platt, who has a PhD in mathematics (from the University of Houston) and is a former software engineer at UCAR (University Corporation for Atmospheric Research).


Saturday 4 June 2022

An impossible thought experiment

I recently watched a discussion between Roger Penrose and Jordan Peterson, which was really a question and answer session, with Peterson asking the questions and Penrose providing the answers. There was a third person involved as moderator, but I’ve forgotten his name and his interaction was minimal. It was mostly about consciousness, but also touched on quantum mechanics and Godel’s theorem.

 

I can’t remember the context, but (at the 1:06 mark) Penrose trotted out the well-worn thought experiment of 2 people crossing a street in opposite directions while, somewhere in some far-flung part of the cosmos, an armada of spaceships is departing for a journey to Earth. Now, according to Einstein’s theory of relativity, one of these ‘observers’ will 'say' the fleet left hundreds of years in the past, and the other will 'say', no, it’s leaving hundreds of years in the future.


I’ve always had a problem with this ‘scenario’, and I’ve discussed it previously. The thing is that neither of them can ‘observe’ anything at all, because the ‘event’ (the space fleet departing) is outside Earth’s light cone of influence (in either the future or the past). So neither of them receives a signal telling them that this is what they ‘observe’. In other words, it’s something they’ve worked out with equations or a space-time diagram. Brian Greene illustrates it graphically in a YouTube video.
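To show what ‘worked out with equations’ means here, the following is a minimal sketch (my own numbers, with c = 1 and distances in light-years) of the arithmetic behind the scenario, using the Lorentz transformation t' = γ(t − vx). For two walkers moving in opposite directions, the same far-off event gets deduced times of opposite sign.

```python
# Two pedestrians pass each other on Earth. In Earth's frame the fleet's
# departure (a distance x away) is simultaneous with their meeting (t = 0).
# The Lorentz transformation assigns each walker a different 'now' out there.
import math

x = 1.0e10    # assumed distance to the departure event, in light-years
t = 0.0       # simultaneous with the meeting, in Earth's frame
v = 4.7e-9    # walking speed (~1.4 m/s) as a fraction of c

for walker_v in (+v, -v):
    gamma = 1.0 / math.sqrt(1.0 - walker_v**2)
    t_prime = gamma * (t - walker_v * x)
    print(f"v = {walker_v:+.1e}c  ->  departure at t' = {t_prime:+.1f} years")

# One walker gets roughly -47 years (past), the other +47 years (future),
# yet neither can receive any signal from the event itself.
```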

 


Of course, my interpretation is considered ‘naïve’ and completely wrong by Penrose and every other physicist I know of.

 

Now, some thought experiments, like the famous EPR experiment, in combination with Bell’s Theorem, can be done in the real world; this one was done after Einstein’s death and effectively proved Einstein wrong (on that particular point). Another example is John Wheeler’s delayed-choice variation on the double-slit experiment, which was also physically performed after Wheeler’s death.

 

But this thought experiment is impossible to do, even in principle. My interpretation is that you have a clear contradiction, and where you have a contradiction there is usually something wrong with one or more of your premises. My proposed resolution is that what they 'perceive' is not reality, because the event is outside the cones-of-influence (past and future) of the observers.


But let’s take the thought experiment to its logical conclusion. Let’s say the observers record their deduced ‘times of departure’ with respect to their respective frames of reference, and these can be looked up centuries later when the fleet actually arrives on Earth. Now, when the fleet arrives, its trajectory through spacetime is within Earth’s past light cone. The fleet has its own time record of its journey, and we know how far it has travelled. In fact, this is no different to the return leg of the famous twin paradox thought experiment. Now observers can apply relativistic corrections to the fleet’s recorded elapsed time, and deduce a time of departure based on Earth’s frame of reference. This will give a ‘time’ which I expect will fall somewhere between the 2 times recorded by the original observers.
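Here’s a minimal sketch (assumed numbers, not from the post) of that relativistic correction, exactly as for the returning twin: from the fleet’s elapsed proper time and the known distance, a unique Earth-frame departure time falls out.

```python
# Twin-paradox style bookkeeping: the fleet's own clock runs slow by the
# factor gamma, so its reported elapsed time can be converted back into
# an Earth-frame trip duration, and hence a departure date.
import math

distance = 100.0   # light-years in Earth's frame (assumed)
speed = 0.8        # fraction of c (assumed)

gamma = 1.0 / math.sqrt(1.0 - speed**2)   # 5/3 at v = 0.8c
t_earth = distance / speed                # Earth-frame duration: 125 years
tau_fleet = t_earth / gamma               # fleet's clock shows: 75 years

# Working backwards from what the fleet reports on arrival:
print(f"fleet clock {tau_fleet:.0f} y -> departed {tau_fleet * gamma:.0f} y before arrival")
```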

 

Of course, this is still an impossible thought experiment because there is no way the 2 pedestrians could know when the fleet was departing. But if a fleet of spaceships did arrive on Earth from somewhere ‘far, far away’, we could calculate exactly when it left (ref. Earth time) and there would be no contradiction.

 

 

Footnote: I know that this stems from Einstein’s discovery that simultaneity varies according to an observer’s frame of reference, and there is an excellent video that explains the maths behind it. But here’s the thing: if the observer is equidistant from the 2 signals in the same frame of reference, you’ll get ‘true’ simultaneity (watch the video). On the other hand, an observer moving with respect to the sources will not see simultaneity. A little-known fact is that you have to allow for length contraction as well as time dilation to get the right answer. But here’s another thing: on a cosmic scale, 2 observers can see 2 events in opposite sequence even if they’re not moving relative to each other. BUT, if the events have a causal relationship, then all observers see the same sequence, irrespective of relative motion. (Refer Addendum 2)*
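The last claim in the footnote can be checked directly with the same transformation (a minimal sketch of my own, c = 1): for a timelike pair of events the sign of t' never flips, whereas for a spacelike pair it can.

```python
# t' = gamma * (t - v*x): the time a moving observer assigns to event (t, x).
import math

def t_prime(t, x, v):
    return (t - v * x) / math.sqrt(1.0 - v**2)

# Timelike separation (|x| < |t|): causally connectable; order preserved.
for v in (-0.99, -0.5, 0.5, 0.99):
    assert t_prime(10.0, 4.0, v) > 0      # stays in the future for everyone

# Spacelike separation (|x| > |t|): no causal link; order is frame-dependent.
print(t_prime(4.0, 10.0, +0.9))   # negative: 'already happened'
print(t_prime(4.0, 10.0, -0.9))   # positive: 'hasn't happened yet'
```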

 

 

Addendum 1: I’ve given this more thought, by having an imaginary dialogue with a physicist, who would tell me that my ideas are inconsistent with relativity. Naturally, I would disagree. I would say it’s consistent with relativity, because for the thought experiment to actually work would require instantaneous communication, which is as contradictory with relativity theory as one can get. For the 2 hypothetical observers to ‘know’ when the far-flung event took place would require them to observe it in their ‘now’, which is impossible. So my response is strictly a philosophical one: you can’t apply relativistic theory to this situation, because the 'observation' would appear to 'violate' a tenet of relativity theory. And that’s because the event is outside the observers’ light cone. Am I missing something here?

 


*Addendum 2: I watched a video by Sabine Hossenfelder, who addresses the last sentence in the footnote of this post. She says, in fact, that 2 events that have a causal relationship in our frame of reference could appear ‘independently’ simultaneous to ‘different’ observers in another part of the Universe (watch the video). This is a variation on the thought experiment that I discuss. In practice, because there’s no possible causal relationship between the events and the far-off observers, they wouldn’t observe anything. And it doesn’t change the sequence of causal events.

 

But she’s arguing that there are a multitude of ‘nows’, in accordance with one of Einstein’s premises that all observers have the same validity. That may be correct, but why does it apply to events they can’t even observe? To be fair to Sabine, she does try to address this at the end of the video. I’ve long argued that different observers see a different ‘now’, even without relativity, but if they see a different sequence of events, at least one of them has to be wrong.

 

I want to emphasise that I don’t think Einstein’s theories of relativity are wrong, as some people do. My point is purely a philosophical one: if you have a multitude of perspectives with different versions of events, they can’t all be right. I’m simply arguing that there is an objective reality. A case in point is the twin paradox, where one twin’s clock does run slower, irrespective of what each twin ‘observes’. Mark John Fernee gives a synoptic exposition here. As he says: they each have their own ‘true time’, and one is always slower.

 

If you go far enough into the future, where the events in question fall within the observer’s past light cone, then a history can be observed. We do this with the Universe itself, right back to the CMBR, 380,000 years after the Big Bang, which is 13.8 billion years ago; both of which we claim to know with some confidence.

 

I still maintain my core point, stated explicitly in the title of this post, that, as a hypothetical, the thought experiment described is impossible to do, simply because the event can’t be observed.

 

Sunday 22 May 2022

We are metaphysical animals

 I’m reading a book called Metaphysical Animals (How Four Women Brought Philosophy Back To Life). The four women were Mary Midgley, Iris Murdoch, Philippa Foot and Elizabeth Anscombe. The first two I’m acquainted with and the last two, not. They were all at Oxford during the War (WW2) at a time when women were barely tolerated in academia and had to be ‘chaperoned’ to attend lectures. Also a time when some women students ended up marrying their tutors. 

The book is authored by Clare Mac Cumhaill and Rachael Wiseman, both philosophy lecturers, who became friends with Mary Midgley in her final years (Mary died in 2018, aged 99). The book is part biography of all 4 women and part discussion of the philosophical ideas they explored.

 

Bringing ‘philosophy back to life’ is an allusion to the response (backlash is too strong a word) to the empiricism, logical positivism and general rejection of metaphysics that had taken hold of English philosophy, also known as analytical philosophy. Iris spent time in postwar Paris where she was heavily influenced by existentialism and Jean-Paul Sartre, in particular, whom she met and conversed with. 

 

If I were to categorise myself, I’m a combination of analytical philosopher and existentialist, which I suspect many would see as a contradiction. But this isn’t deliberate on my part – more a consequence of pursuing my interests, which are science on one hand (with a liberal dose of mathematical Platonism) and how to live a ‘good life’ (to paraphrase Aristotle) on the other.

 

Iris was intellectually seduced by Sartre’s exhortation: “Man is nothing else but that which he makes of himself”. But as her own love life fell apart, along with all its inherent dreams and promises, she found Sartre’s implicit doctrine, of standing solitarily and independently of one’s milieu, difficult to put into practice. I’m not sure if Iris was already a budding novelist at this stage of her life, but anyone who writes fiction knows that this is what it’s all about: the protagonist sailing their lone ship on a sea full of icebergs and other vessels, all of which are outside their control. Life, like the best fiction, is an interaction between the individual and everyone else they meet. Your moral compass, in particular, is often tested. Existentialism can be seen as an attempt to rise above this, but most of us don’t.

 

Not surprisingly, Wittgenstein looms large in many of the pages, and at least one of the women, Elizabeth Anscombe, had significant interaction with him. With Wittgenstein comes an emphasis on language, which has arguably determined the path of philosophy since. I’m not a scholar of Wittgenstein by any stretch of the imagination, but one thing he taught, or that people took from him, was that the meaning we give to words is a consequence of how they are used in ordinary discourse. Language requires a widespread consensus to actually work. It’s something we rarely think about but we all take for granted, otherwise there would be no social discourse or interaction at all. There is an assumption that when I write these words, they have the same meaning for you as they do for me, otherwise I am wasting my time.

 

But there is a way in which language is truly powerful, and I have done this myself. I can write a passage that creates a scene inside your mind complete with characters who interact and can cause you to laugh or cry, or pretty much any other emotion, as if you were present; as if you were in a dream.

 

There are a couple of specific examples in the book which illustrate Wittgenstein’s influence on Elizabeth and how she used them in debate. They are both topics I have discussed myself without knowing of these previous discourses.

 

In 1947, so just after the war, Elizabeth presented a paper to the Cambridge Moral Sciences Club, which she began with the following disclosure:

 

Everywhere in this paper I have imitated Dr Wittgenstein’s ideas and methods of discussion. The best that I have written is a weak copy of some features of the original, and its value depends only on my capacity to understand and use Dr Wittgenstein’s work.

 

The subject of her talk was whether one can truly talk about the past, which goes back to the pre-Socratic philosopher, Parmenides. In her own words, paraphrasing Parmenides, ‘to speak of something past’ would then be to ‘point our thought’ at ‘something there’, but out of reach. Bringing Wittgenstein into the discussion, she claimed that Parmenides’ specific paradox about the past arose ‘from the way that thought and language connect to the world’.

 

We apply language to objects by naming them, but, in the case of the past, the objects no longer exist. She attempts to resolve this epistemological dilemma by discussing the nature of time as we experience it, which is like a series of pictures that move on a timeline while we stay in the present. This is analogous to my analysis that everything we observe becomes the past as soon as it happens, which is exemplified every time someone takes a photo, but we remain in the present – the time for us is always ‘now’.

 

She explains that the past is a collective recollection, preserved in documents and photos, so it’s dependent on a shared memory. I would say that this is what separates our recollection of a real event from a dream, which is solipsistic and not shared with anyone else. But it doesn’t explain why the past appears fixed and the future unknown, which she also attempted to address. I don’t think this can be addressed without discussing physics.

 

Most physicists will tell you that the asymmetry between the past and future can only be explained by the second law of thermodynamics, but I disagree. I think it is described, if not explained, by quantum mechanics (QM) where the future is probabilistic with an infinitude of possible paths and classical physics is a probability of ONE because it’s already happened and been ‘observed’. In QM, the wave function that gives the probabilities and superpositional states is NEVER observed. The alternative is that all the futures are realised in alternative universes. Of course, Elizabeth Anscombe would know nothing of these conjectures.

 

But I would make the point that language alone does not resolve this. Language can only describe these paradoxes and dilemmas but not explain them.

 

Of course, there is a psychological perspective to this, which many people, including physicists, claim gives the only sense of time passing. According to them, it’s fixed: past, present and future; and our minds create this distinction. I think our minds create the distinction because only consciousness creates a reference point for the present. Everything non-sentient is in a causal relationship that doesn’t sense time. Photons of light, for example, exist in zero time, yet they determine causality. Only light separates everything in time as well as space. I’ve gone off-topic.

 

Elizabeth touched on the psychological aspect, possibly unintentionally (I’ve never read her paper, so I could be wrong): our memories of the past are actually imagined. We use the same part of the brain to imagine the past as we do to imagine the future, but again, Elizabeth wouldn’t have known this. Nevertheless, she understood that our (only) knowledge of the past is a thought that we turn into language in order to describe it.

 

The other point I wish to discuss is a famous debate she had with C.S. Lewis. This is quite something, because back then, C.S. Lewis was a formidable intellectual figure. Elizabeth’s challenge was all the more remarkable because Lewis’s argument appeared on the surface to be very sound. Lewis argued that the ‘naturalist’ position was self-refuting if it was dependent on ‘reason’, because reason by definition (not his terminology) is based on the premise of cause and effect and human reason has no cause. That’s a simplification, nevertheless it’s the gist of it. Elizabeth’s retort:

 

What I shall discuss is this argument’s central claim that a belief in the validity of reason is inconsistent with the idea that human thought can be fully explained as the product of non-rational causes.

 

In effect, she argued that reason is what humans do perfectly naturally, even if the underlying ‘cause’ is unknown. Not knowing the cause does not make the reasoning irrational nor unnatural. Elizabeth specifically cited the language that Lewis used. She accused him of confusing the concepts of “reason”, “cause” and “explanation”.

 

My argument would be subtly different. For a start, I would contend that by ‘reason’, he meant ‘logic’, because drawing conclusions based on cause and effect is logic, even if the causal relations (under consideration) are assumed or implied rather than observed. And here I contend that logic is not a ‘thing’ – it’s not an entity; it’s an action - something we do. In the modern age, machines perform logic; sometimes better than we do.

 

Secondly, I would ask Lewis, does he think reason only happens in humans and not other animals? I would contend that animals also use logic, though without language. I imagine they’d visualise their logic rather than express it in vocal calls. The difference with humans is that we can perform logic at a whole different level, but the underpinnings in our brains are surely the same. Elizabeth was right: not knowing its physical origins does not make it irrational; they are separate issues.

 

Elizabeth had a strong connection to Wittgenstein right up to his death. She worked with him on a translation and edit of Philosophical Investigations, and he bequeathed her a third of his estate and a third of his copyright.

 

It’s apparent from Iris’s diaries and other sources that Elizabeth and Iris fell in love at one point in their friendship, which caused them both a lot of angst and guilt because of their Catholicism. Despite marrying, Iris later had an affair with Pip (Philippa).

 

Despite my discussion of just 2 of Elizabeth’s arguments, I don’t have the level of erudition necessary to address most of the topics that these 4 philosophers published in. Just reading the 4-page Afterword, it’s clear that I haven’t even scratched the surface of what they achieved. Nevertheless, I have a philosophical perspective that I think finds some resonance with their mutual ideas.

 

I’ve consistently contended that the starting point for my philosophy is that for each of us individually, there is an inner and outer world. It even dictates the way I approach fiction. 

 

In the latest issue of Philosophy Now (Issue 149, April/May 2022), Richard Oxenberg, who teaches philosophy at Endicott College in Beverly, Massachusetts, wrote an article titled, What Is Truth? wherein he describes an interaction between 2 people, but only from a purely biological and mechanical perspective, and asks, ‘What is missing?’ Well, even though he doesn’t spell it out, what is missing is the emotional aspect. Our inner world is dominated by emotional content and one suspects that this is not unique to humans. I’m pretty sure that other creatures feel emotions like fear, affection and attachment. What’s more I contend that this is what separates, not just us, but the majority of the animal kingdom, from artificial intelligence.

 

But humans are unique, even among other creatures, in our ability to create an inner world every bit as rich as the one we inhabit. And this creates a dichotomy that is reflected in our division of arts and science. There is a passage on page 230, where the authors discuss R.G. Collingwood’s influence on Mary and provide an unexpected definition.

 

Poetry, art, religion, history, literature and comedy are all metaphysical tools. They are how metaphysical animals explore, discover and describe what is real (and beautiful and good). (My emphasis.)

 

I thought this summed up what they mean by their coinage, metaphysical animals, which gives the book its title and arguably describes humanity’s most unique quality. Descriptions of metaphysics vary and elude precise definition, but the word ‘transcendent’ comes to mind. By which I mean it’s knowledge or experience that transcends the physical world, and is most evident in art, music and storytelling, but also includes mathematics in my Platonic worldview.


 

Footnote: I should point out that certain chapters in the book give considerable emphasis to moral philosophy, which I haven’t even touched on, so another reader might well discuss other perspectives.


Wednesday 27 April 2022

Is infinity real?

 In some respects, I think infinity is what delineates mathematics from the ‘Real’ world, meaning the world we can all see and touch and otherwise ‘sense’ through an ever-expanding collection of instruments. To give an obvious example, calculus is used extensively in engineering and physics to determine physical parameters to great accuracy, yet the method requires the abstraction of infinitesimals at its foundation.

Sabine Hossenfelder, whom I’ve cited before, provides a good argument that infinity doesn’t exist in the real world, and Norman Wildberger even argues it doesn’t exist in mathematics because, according to his worldview, mathematics is defined only by what is computable. I won’t elaborate on his arguments but you can find them on YouTube.

 

I was prompted to write about this after reading the cover feature article in last week’s New Scientist by Timothy Revell, who is New Scientist’s deputy US editor. The article was effectively a discussion about the ‘continuum hypothesis’, which, following its conjecture by Georg Cantor, is still in the ‘undecidable’ category (proved neither true nor false). Basically, there are countable infinities and uncountable infinities, which was proven by Cantor and is uncontentious (with the exception of mathematical fringe-dwellers like Wildberger). The continuum hypothesis effectively says that there is no category of infinity in between, which I won’t go into because I don’t know enough about it. 

 

But I do understand Cantor’s arguments that demonstrate how the rational numbers are ‘countably infinite’ and how the Real numbers are not. To appreciate the extent of the mathematical universe (in numbers) to date, I recommend this video by Matt Parker. Sabine Hossenfelder, whom I’ve already referenced, gives a very good exposition on countable and uncountable infinities in the video linked above. She also explains how infinities are dealt with in physics, particularly in quantum mechanics, where they effectively cancel each other out. 
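For what ‘countably infinite’ means in practice, here’s a minimal sketch (my own, not Cantor’s original presentation) of the standard diagonal walk through the grid of fractions: every positive rational gets a definite position in a single list.

```python
# Enumerate the positive rationals diagonal by diagonal (p + q constant),
# skipping unreduced duplicates like 2/4, so each rational appears once.
from math import gcd

def rationals():
    s = 2
    while True:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:
                yield (p, q)
        s += 1

gen = rationals()
print([next(gen) for _ in range(10)])
# [(1,1), (1,2), (2,1), (1,3), (3,1), (1,4), (2,3), (3,2), (4,1), (1,5)]
```

The Real numbers defeat any such listing: that is Cantor’s diagonal argument.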

 

Sabine argues that ‘reality’ can only be determined by what can be ‘measured’, which axiomatically rules out infinity. She even acknowledges that the Universe could be physically infinite, but we wouldn’t know. Marcus du Sautoy, in his book, What We Cannot Know, argues that it might remain forever unknowable, if that’s the case. 

 

Nevertheless, Sabine argues that infinity is ‘real’ in mathematics, and I would agree. She points out that infinity is a concept we encounter early, because it’s implicit in our counting numbers: no matter how big a number is, there is always a bigger one. Infinities are intrinsic to many of the unsolved problems in mathematics, not just Cantor’s continuum hypothesis. There are 3 involving primes that are well known: the Goldbach conjecture, the twin prime conjecture and Riemann’s hypothesis, the most famous unsolved problem in mathematics at the time of writing. In all these cases, it’s unknown if they’re true to infinity.
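As a concrete illustration of ‘unknown if true to infinity’ (my own sketch), the Goldbach conjecture can be verified mechanically over any finite range, but no finite check settles it for all even numbers:

```python
# Goldbach: every even number > 2 should be the sum of two primes.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n: int) -> bool:
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

print(all(goldbach_holds(n) for n in range(4, 10_000, 2)))  # True -- so far...
```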

 

Without getting too far off track, the Riemann hypothesis asserts that all the non-trivial zeros of the Riemann Zeta function lie on a vertical line in the complex plane where the real part is 1/2. In other words, all the non-trivial zeros are of the form 1/2 + bi: complex numbers with real part 1/2. The thing is that we already know there are an infinite number of them; we just don’t know if there are any that break that rule. The curious thing about infinities is that we are relatively comfortable with them, even though we can’t relate to them in the physical world, and they can never be computed. As I said in my opening paragraph, it’s what separates mathematics from reality.
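Those infinitely many known zeros can even be inspected numerically. A minimal sketch, assuming the mpmath library is available (its zetazero function returns the nth non-trivial zero):

```python
# Check that the first few non-trivial zeros sit on the critical line
# Re(s) = 1/2, and that zeta really does vanish there.
from mpmath import mp, zetazero, zeta

mp.dps = 25                          # 25 decimal places of working precision

for n in range(1, 6):
    rho = zetazero(n)                # e.g. 0.5 + 14.1347...j for n = 1
    print(n, rho.real, abs(zeta(rho)))   # real part 1/2; |zeta(rho)| ~ 0
```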

 

And this leads one to consider what mathematics is, if it’s not reality. A while ago, I had a discussion with someone on Quora who argued that mathematics is ‘fiction’. Specifically, they argued that any mathematics with no role in the physical universe is fiction. There is an immediate problem with this perspective, because we often don’t find a role in the ‘real world’ for mathematical discoveries until decades, or even centuries, later.

 

I’ve argued in another post that there is a fundamental difference between a physics equation and a purely mathematical equation that many people are not aware of. Basically, physics equations, like Einstein’s most famous, E = mc², have no meaning outside the physical universe; they deal with physical parameters like mass, energy, time and space.

 

On the other hand, there are mathematical relationships like Euler’s famous identity, e^(iπ) + 1 = 0, which has no meaning in the physical world, unless you represent it graphically, where it is a point on a circle in the complex plane. Talking about infinity, π famously has a never-ending, non-repeating decimal expansion, and Euler’s equation, from which the identity is derived, comes from the sum of two infinite power series.
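To connect the identity to those infinite series, here’s a minimal sketch (my own): truncating the power series for e^(ix) at 40 terms already lands essentially on −1 at x = π.

```python
# e^(ix) = sum of (ix)^n / n! -- the power series behind Euler's identity.
import math

def exp_i(x: float, terms: int = 40) -> complex:
    return sum((1j * x) ** n / math.factorial(n) for n in range(terms))

print(exp_i(math.pi))        # ~ (-1+0j), so e^(i*pi) + 1 ~ 0
```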

 

And this is why many mathematicians and physicists treat mathematics as a realm that already exists independently of us, known as mathematical Platonism. John Barrow made this point in his excellent book, Pi in the Sky, where he acknowledges it has quasi-religious connotations. Paul Davies invokes an imaginative metaphor of there being a ‘mathematical warehouse’ where ‘Mother Nature’, or God (if you like), selects the mathematical relationships which make up the ‘laws of the Universe’. And this is the curious thing about mathematics: that it’s ‘unreasonably effective in describing the natural world’, which Eugene Wigner wrote an entire essay on in the 1960s.

 

Marcus du Sautoy, whom I’ve already mentioned, points out that infinity is associated with God, and both he and John Barrow have suggested that the traditional view of God could be replaced with mathematics. Epistemologically, I think mathematics has effectively replaced religion in describing both the origins of the Universe and its more extreme phenomena. 

 

If one looks at the video I cited by Matt Parker, it’s readily apparent that there is infinitely more mathematics that we don’t know compared to what we do know, and Gregory Chaitin has demonstrated that there are infinitely more incomputable Real numbers than computable Reals. This is consistent with Godel’s famous Incompleteness Theorem that counter-intuitively revealed that there is a mathematical distinction between ‘proof’ and ‘truth’. In other words, in any consistent, axiom-based system of mathematics there will always exist mathematical truths that can’t be proved within that system, which means we need to keep expanding the axioms to determine said truths. This implies that mathematics is a never-ending epistemological endeavour. And, if our knowledge of the physical world is dependent on our knowledge of mathematics, then it’s arguably a never-ending endeavour as well.

 

I cannot leave this topic without discussing the one area where infinity and the natural world seem to intersect, which literally has world-changing consequences. I’m talking about chaos theory, which depends on sensitivity to initial conditions. Paul Davies, in his book, The Cosmic Blueprint, actually provides an example where he shows that, mathematically, you have to calculate the initial conditions to infinite decimal places to make a precise prediction. Sabine Hossenfelder has a video on chaos where she demonstrates how it’s impossible to predict the future of a chaotic event beyond a specific horizon. This horizon varies – for the weather it’s around 10 days, and for the planetary orbits it’s tens of millions of years. Despite this, Sabine argues that the Universe is deterministic, which I’ve discussed in another post.
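That prediction horizon is easy to demonstrate (a minimal sketch of my own, using the logistic map as a stand-in for weather or orbits): two trajectories starting 1e-12 apart track each other for a while, then diverge completely.

```python
# Logistic map at r = 4, a standard chaotic toy model: a tiny initial
# difference grows roughly exponentially until the forecast is worthless.
x, y = 0.4, 0.4 + 1e-12

for step in range(1, 61):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)
    if step % 15 == 0:
        print(step, abs(x - y))
# More initial precision only pushes the horizon out; it never removes it.
```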

 

Mark John Fernee (physicist with Queensland University and regular Quora contributor) also argues that the universe is deterministic and that chaotic events are unpredictable because we can’t measure the initial conditions accurately enough. He’s not alone among physicists, but I believe it’s in the mathematics.

 

I point to coin tossing, which is the most common and easily created example of chaos. Marcus du Sautoy uses the tossing of dice, which he discusses in his aforementioned book, and in this video. The thing about chaotic events is that if you were to rerun them, you’d get a different result and that goes for the whole universe. Tossing coins is also associated with probability theory, where the result of any individual toss is independent of any previous toss with the same coin. That could only be true if chaotic events weren’t repeatable.

 

There is even something called quantum chaos, which I don’t know a lot about, but it may have a connection to Riemann’s hypothesis (mentioned above). Certainly, Riemann’s hypothesis is linked to quantum mechanics via Hermitian matrices, supported by relevant data (John Derbyshire, Prime Obsession). So, mathematics is related to the natural world in ever-more subtle and unexpected ways.

 

Chaos drives the evolution of the Universe on multiple scales, including biological evolution and the orbits of planets. If chaos determines our fates, then infinities may well play the ultimate role.


Wednesday 20 April 2022

How can I know when I am wrong?

 Simple answer: I can’t. But this goes to the heart of a dilemma that seems to plague the modern world. It’s even been given a name: the post-truth world.  

I’ve just read a book, The Psychology of Stupidity: explained by some of the world’s smartest people, which is a collection of essays by philosophers, psychologists and writers, edited by Jean-Francois Marmion. It was originally published in French and translated into English; accordingly, most of the contributors are French, though some are American.

 

I grew up constantly being reminded of how stupid I was, so, logically, I withdrew into an inner world, often fuelled by comic-book fiction. I also took refuge in books, which turned me into a know-it-all; a habit I’ve continued to this day.

 

Philosophy is supposed to be about critical thinking, and I’ve argued elsewhere that critical analysis is what separates philosophy from dogma, but accusing people of not thinking critically does not make them wiser. You can’t convince someone that you’re right and they’re wrong: the very best you can do is make them think outside their own box. And be aware that that’s exactly what they’re simultaneously trying to do to you.

 

Where to start? I’m going to start with personal experience – specifically, preparing arguments (called evidence) for lawyers in contractual engineering disputes, in which I’ve had more than a little experience. Basically, I’ve either prepared a claim or defended a claim by analysing data in the form of records – diaries, minutes, photographs – and reached a conclusion that had a trail of logic and evidence to substantiate it. But here’s the thing: I always took the attitude that I’d come up with the same conclusion no matter which side I was on.

 

You’re not supposed to do that, but it has advantages. The client, whom I’m representing, knows I won’t bullshit them and I won’t prepare a case that I know is flawed. And, in some cases, I’ve even won the respect of the opposing side. But you probably won’t be surprised to learn how much pressure you can be put under to present a case based on falsehoods. In the end, it will bite you.

 

The other aspect to all this is that people can get very emotional, and when they get emotional they get irrational. Writing is an art I do well, and when it comes to preparing evidence, my prose is very dispassionate, laying out an argument based on dated documents; better still, if the documents belong to the opposition.

 

But this is doing analysis on mutually recognised data, even if different sides come to different conclusions. And in a legal hearing or mediation, it’s the documentation that wins the argument, not emotive rhetoric. Most debates these days take place on social media platforms, where people on opposing sides have their own sources and their own facts, and each accuses the other of being brainwashed.

 

And this leads me to the first lesson I’ve learned about the post-truth world. In an ingroup-outgroup environment – like politics – even the most intelligent people can become highly irrational. We see everyone on one side as being righteous and worthy of respect, while everyone on the other side is untrustworthy and deceitful. Many people know about the infamous Robbers Cave experiment in 1954, where 2 groups of teenage boys were manipulated into an ingroup-outgroup situation where tensions quickly escalated, though not violently. I’ve observed this in contractual situations many times over.

 

One of my own personal philosophical principles is that beliefs should be dependent on what you know and not the other way round. It seems to me that we do the opposite: we form a belief and then actively look for evidence that turns that belief into knowledge. And, in the current internet age, it’s possible to find evidence for any belief at all, like the Earth being flat.

 

And this has led to a world of alternate universes, where exactly opposite histories are being played out. The best known example is climate change, but there are others. Most recently, we’ve had a disputed presidential election in the USA, and disputes over the efficacy of vaccines in combatting the coronavirus (SARS-CoV-2, which causes COVID-19). What all these have in common is that each side believes the other side has been duped.

 

You might think that something else these 3 specific examples have in common is left-wing, right-wing politics. But I’ve learned that’s not always the case. One thing I do believe they have in common is open disagreement between purported experts in combination with alleged conspiracy theories. It so happens that I’ve worked with technical experts for most of my working life, plus I read a lot of books and articles by people in scientific disciplines. 

 

I’m well aware that there are a number of people who have expertise that I don’t have, and I admit to getting more than a little annoyed with politicians who criticise or dismiss people who obviously have much more expertise than they do in specific fields, like climatology or epidemiology. One only has to look to the US, where the previous POTUS, Donald Trump, was at the centre of all of these issues: everything he disagreed with was called a ‘hoax’, and he was a serial promoter of conspiracy theories, including election fraud. Trump is responsible for one of those alternative universes, in which President-elect Joe Biden stole the election from him, even though there is ample testimony that Trump tried to steal the election from Biden.

 

So, in the end, it comes down to whom you trust. And you probably trust someone who aligns with your ideological position or who reinforces your beliefs. Of course, I also have political views and my own array of beliefs. So how do I navigate my way?

 

Firstly, I have a healthy scepticism about conspiracy theories, because they require a level of global collaboration that’s hard to maintain in the manner they are reported. They often read or sound like movie scripts, with politicians being blackmailed or having their lives threatened and health professionals involved in a global conspiracy to help an already highly successful leader in the corporate world take control of all of our lives. This came from a so-called ‘whistleblower’, previously associated with WHO.

 

The more emotive and sensationalist a point of view, the more traction it has. Media outlets have always known this, and now it’s exploited on social media, where rules about accountability and credibility are a lot less rigorous.

 

Secondly, there are certain trigger words that warn me that someone is talking bullshit. Like calling vaccines a ‘bio-weapon’ or that it’s the ‘death-jab’ (from different sources). However, I trust people who have a long history of credibility in their field; who have made it their life’s work, in fact. But we live in a world where they can be ridiculed by politicians, whom we are supposed to respect and follow.

 

At the end of the day, I go back to the same criteria I used in preparing arguments in contractual disputes, which is evidence. We’ve been living with COVID for 2 years now, and it is easy to find statistical data tracking the disease in a variety of countries, along with the effect the vaccines have had. Of course, the conspiracy theorists will tell you that the data is fabricated. The same goes for evidence involving climate change. There was a famous encounter between physicist and television presenter, Brian Cox, and a little-known Australian politician who claimed that the graphs Cox presented, produced by NASA, had been corrupted.

 

But, in both of these cases, the proof of the pudding is in the eating. I live in a country where we followed the medical advice, underwent lockdowns and got vaccinated, and we’re now effectively living with the virus. When I look overseas, at countries like America, it was a disaster overseen by an incompetent President, who advocated all sorts of ‘crank cures’, the most notorious being bleach, not to mention UV light. At one point, the US accounted for more than 20% of the world’s recorded deaths.

 

And it’s the same with climate change where, again, the country I live in faced record fires in 2019/20 and now floods, though this is happening all over the globe. The evidence is in our face, but people are still in denial. Admitting we’re wrong means overcoming a lot of cognitive dissonance, and that’s part of the problem.

 

Philosophy teaches you that you can have a range of views on a specific topic, and as I keep saying: only future generations know how ignorant the current generation is. That includes me, of course. I write a blog, which hopefully outlives me, and one day people should be able to tell where I was wrong. I’m quite happy with that, because that’s how knowledge grows and progresses.