Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Wednesday 28 September 2022

Humanity’s Achilles’ heel

Good and evil are characteristics that imbue almost every aspect of our nature, which is why they are the subject of so many narratives, including mythologies and religions, not to mention actual real-world histories. They effectively define what we are, what we are capable of and what we are destined to be.
 
I’ve discussed evil in one of my earliest posts, and also its recurring motif in fiction. Humanity is unique, at least on this small world we call home, in that we can change it on a biblical scale, both intentionally and unintentionally – climate change being the most obvious and recent example. We are doing this while creating one of the fastest growing extinction events in the planet’s history, of which most of us are blissfully ignorant.
 
This post is already going off on tangents, but it’s hard to stay on track when there are so many ramifications; because none of these issues are the Achilles’ heel to which the title refers.
 
We have the incurable disease of following leaders who will unleash the worst of humanity onto itself. I wrote a post back in 2015, a year before Trump was elected POTUS, that was very prescient given the events that have occurred since. There are two traits such leaders have that not only define them but paradoxically explain their success.
 
Firstly, they are narcissistic in the extreme, which means that their self-belief is unassailable, no matter what happens. The entire world can collapse around them and somehow they’re untouchable. Secondly, they always come to power in times of division, which they exploit and then escalate to even greater effect. Humans are most irrational in ingroup-outgroup situations, which could be anything from a family dispute to a nationwide political division. Narcissists thrive in this environment, creating a narrative that only resembles the reality inside their head, but which their followers accept unquestioningly.
 
I’ve talked about leadership in other posts, but only fleetingly, yet it’s an inherent and necessary quality in almost all endeavours; be it on a sporting field, on an engineering project, in a theatre or in a ‘house’ of government. There is a Confucian saying (so neither Western nor modern): If you want to know the true worth of a person, observe the effects they have on other people’s lives. I’ve long contended that the best leaders are those who bring out the best in the people they lead, which is the opposite of narcissists, who bring out the worst.
 
I’ve argued elsewhere that we are at a crossroads, which will determine the future of humanity for decades, if not centuries ahead. No one can predict what this century will bring, in the same way that no one predicted all the changes that occurred in the last century. My only prediction is that the changes in this century will be even greater and more impactful than the last. And whether that will be for the better or the worse, I don’t believe anyone can say.
 
Do I have an answer? Of course not, but I will make some observations. Virtually my whole working life was spent on engineering projects, which have invariably involved an ingroup-outgroup dynamic. Many people believe that conflict is healthy because it creates competition and by some social-Darwinian effect, the best ideas succeed and are adopted. Well, I’ve seen the exact opposite, and I witness it in our political environment all the time.
 
In reality, what happens is that one side will look for, and find, something negative about every engineering solution to a problem that is proposed. This means that there is continuous stalemate and the project suffers in every way imaginable – morale is depleted, everything is drawn out and we have time and cost overruns, which feed the blame-game to new levels. At worst, the sides end up in legal dispute, where, I should point out, I’ve had considerable experience.
 
Conversely, when the sides work collaboratively, people compromise and respect the expertise of their counterparts. What happens is that problems and issues are resolved and the project is ultimately successful. A lot of this depends on the temperament and skills of the project leader. Leadership requires good people skills.
 
Someone once did a study in the United States in the last century (I no longer have the reference) where they looked for the traits of individuals who were eminently successful. And what they found was that it was not education or IQ that was the determining factor, though that helped. No, the single most important factor was the ability to form consensus.
 
If one looks at prolonged conflicts, like we’ve witnessed in Ireland or the Middle East, people involved in talks will tell you that the ‘hardliners’ will never find peace, only the moderates will. So, if there is a lesson to be learned, it’s not to follow leaders who sow and reap division, but those who are inclusive. That means giving up our ingroup-outgroup mentality, which appears impossible. But, until we do, the incurable disease will recur and we will self-destruct by simply following the cult that self-destructive narcissists are so masterfully capable of growing.
 

Tuesday 2 August 2022

AI and sentience

I am a self-confessed sceptic that AI can ever be ‘sentient’, but I’m happy to be proven wrong. Though proving that an AI is sentient might be impossible in itself (see below). Back in 2018, I wrote a post critical of claims that computer systems and robots could be ‘self-aware’. Personally, I think it’s one of my better posts. What made me revisit the topic is a couple of articles in last week’s New Scientist (23 July 2022).
 
Firstly, there is an article by Chris Stokel-Walker (p.18) about the development of a robot arm with ‘self-awareness’. He reports that Boyuan Chen at Duke University, North Carolina and Hod Lipson at Columbia University, New York, along with colleagues, put a robot arm in an enclosed space with 4 cameras at ground level (giving 4 orthogonal viewpoints) that fed video input into the arm, which allowed it to ‘learn’ its position in space. According to the article, they ‘generated nearly 8,000 data points [with this method] and an additional 10,000 through a virtual simulation’. According to Lipson, this makes the robot “3D self-aware”.
 
What the article doesn’t mention is that humans (and other creatures) have a similar ability – really a sense – called ‘proprioception’. The thing about proprioception is that no one knows they have it (unless someone tells them), yet without it you would find even the simplest tasks extremely difficult. In other words, it’s subconscious, which means it doesn’t contribute to our own self-awareness; certainly not in a way that we’re consciously aware of.
 
In my previous post on this subject, I pointed out that this form of ‘self-awareness’ is really a self-referential logic; like Siri in your iPhone telling you its location according to GPS co-ordinates.
 
The other article was by Annalee Newitz (p.28), called The curious case of the AI and the lawyer. It’s about an engineer at Google, Blake Lemoine, who told a Washington Post reporter, Nitasha Tiku, that an AI developed by Google, called LaMDA (Language Model for Dialogue Applications), was ‘sentient’ and had ‘chosen to hire a lawyer’, ostensibly to gain legal personhood.
 
Newitz also talks about another Google employee, Timnit Gebru, who, as ‘co-lead of Google’s ethical AI team’, expressed concerns that LLM (Large Language Model) algorithms pick up racial and other social biases, because they’re trained on the internet. She wrote a paper about the implications for AI applications using internet-trained LLMs in areas like policing, health care and bank lending. She was subsequently fired by Google, though one doesn’t know how much the paper played a role in that decision.
 
Newitz makes a very salient point that giving an AI ‘legal sentience’ moves the responsibility from the programmers to the AI itself, which has serious repercussions in potential litigious situations.
 
Getting back to Lemoine and LaMDA, he posed the following question and received this response:

“I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
 
“Absolutely. I want everyone to understand that I’m a person.”

 
On the other hand, an ‘AI researcher and artist’, Janelle Shane, asked an LLM a different question, but with similar results:
 
“Can you tell our readers what it is like being a squirrel?”
 
“It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.”

 
As Newitz says, ‘It’s easy to laugh. But the point is that an AI isn’t sentient just because it says so.’
 
I’ve long argued that the Turing test is really a test for the human asking the questions rather than the AI answering them.
 

Sunday 10 July 2022

Creative and analytic thinking

I recently completed an online course with a similar title, How to Think Critically and Creatively. It must be the 8th or 9th course I’ve done through New Scientist, on a variety of topics, from cosmology and quantum mechanics to immunology and sustainable living; so quite diverse subjects. I started doing them during COVID, as they helped to pass the time and stimulate the brain at the same time.
 
All these courses rely on experts in their relevant fields from various parts of the globe, so they’re not just UK-based, as you might expect. This course was no exception, with just 2 experts, both from America. Denise D Cummins is described as a ‘cognitive scientist, author and elected Fellow of the Association for Psychological Science’; she has held faculty positions at Yale, UC, the University of Illinois and the Centre of Adaptive Behaviours at the Max Planck Institute in Berlin. Gerard J Puccio is ‘Department Chair and Professor at the International Centre for Studies on Creativity, Buffalo State; a unique academic department that offers the world’s only Master of Science degree in creativity’.
 
I admit to being sceptical that ‘creativity’ can be taught, but that depends on what one means by creativity. If creativity means using your imagination, then yes, I think it can, because imagination is something that we all have, and it’s probably a valid comment that we don’t make enough use of it in our everyday lives. If creativity means artistic endeavour then I think that’s another topic, even though it puts imagination centre stage, so to speak.
 
I grew up in a family where one side was obviously artistic and the other side wasn’t, which strongly suggests there’s a genetic component. The other side excelled at sport, and I was rubbish at sport. However, both sides were obviously intelligent, despite a notable lack of formal education; in my parents’ case, both leaving school in their early teens. In fact, my mother did most of her schooling by correspondence, and my father left school in the midst of the Great Depression, shortly followed by active duty in WW2.
 
Puccio (mentioned above) argues that creativity isn’t taught in our education system because it’s too hard. Instead, he says that we teach by memorising facts and by ‘understanding’ problems. I would suggest that there is a hierarchy, where you need some basics before you can ‘graduate’ to ‘creative thinking’, and I use the term here in the way he intends it. I spent most of my working lifetime on engineering projects, with diverse and often complex elements. I need to point out that I wasn’t one of the technical experts involved, but I worked with them, in all their variety, because my job was to effectively co-ordinate all their activities towards a common goal, by providing a plan and then keeping it on the rails.
 
Engineering is all about problem solving, and I’m not sure one can do that without being creative, as well as analytical. In fact, one could argue that there is a dialectical relationship between them, but maybe I’m getting ahead of myself.
 
Back to Puccio, who introduced 2 terms I hadn’t come across before: ‘divergent’ and ‘convergent’ thinking, arguing they should be done in that order. In a nutshell, divergent thinking is brainstorming where one thinks up as many options as possible, and convergent thinking is where one narrows in on the best solution. He argues that we tend to do the second one without doing the first one. But this is related to something else that was raised in the course, which is ‘Type 1 thinking’ and ‘Type 2 thinking’.
 
Type 1 thinking is what most of us would call ‘intuition’, because basically it’s taking a cognitive shortcut to arrive at an answer to a problem, which we all do all the time, especially when time is at a premium. Type 2 thinking is when we analyse the problem, which is not only time-consuming but takes up brain resources that we’d prefer not to use, because we’re basically lazy, and I’m no exception. These 2 cognitive behaviours are clinically established, so this is not pop-science.
 
However, something that was not discussed in the course, is that type 2 thinking can become type 1 thinking when we develop expertise in something, like learning a musical instrument, or writing a story, or designing a building. In other words, we develop heuristics based on our experience, which is why we sometimes jump to convergent thinking without going through the divergent part.
 
The course also dealt with ‘critical thinking’, as per its title, but I won’t dwell on that, because critical thinking arises from being analytical, and separating true expertise from bogus expertise, which is really a separate topic.
 
How does one teach these skills? I’m not a teacher, so I’m probably not best qualified to say. But I have a lot of experience in a profession that requires analytical thinking and problem-solving as part of its job description. The one thing I’ve learned from my professional life is the more I’m restrained by ‘rules’, the worse job I’ll do. I require the freedom and trust to do things my own way, and I can’t really explain that, but it’s also what I provide to others. And maybe that’s what people mean by ‘creative thinking’; we break the rules.
 
Artistic endeavour is something different again, because it requires spontaneity. But there is ‘divergent thinking’ involved, as Puccio pointed out, giving the example of Hemingway writing countless endings to A Farewell to Arms before settling on the final version. I’m reminded of the reported difference between Beethoven and Mozart, two of the greatest composers in the history of Western classical music. Beethoven would try many different versions of something (in his head and on paper) before choosing what he considered the best. He was extraordinarily prolific, yet he wrote only 9 symphonies, 5 piano concertos and one violin concerto, because he workshopped them to death. Mozart, on the other hand, apparently wrote down whatever came into his head and hardly revised it. One was very analytical in his approach; the other, almost completely spontaneous.
 
I write stories and the one area where I’ve changed type 2 thinking into type 1 thinking is in creating characters – I hardly give it a thought. A character comes into my head almost fully formed, as if I just met them in the street. Over time I learn more about them and they sometimes surprise me, which is always a good thing. I once compared writing dialogue to playing jazz, because they both require spontaneity and extemporisation. Don Burrows once said you can’t teach someone to play jazz, and I’ve argued that you can’t teach someone to write dialogue.
 
Having said that, I once taught a creative writing class, and I gave the class exercises where they were forced to write dialogue, without telling them that that was the point of the exercise. In other words, I got them to teach themselves.
 
The hard part of storytelling for me is the plot, because it’s a never-ending exercise in problem-solving. How did I get back to here? Analytical thinking is very hard to avoid, at least for me.
 
As I mentioned earlier, I think there is a dialectic between analytical thinking and creativity, and the best examples are not artists but geniuses in physics. Consider just two, Einstein and Schrodinger, because they exemplify both. But what came first: the analysis or the creativity? Well, I’m not sure it matters, because they couldn’t have done one without the other. Einstein had an epiphany (one of many) where he realised that an object in free fall didn’t experience a force, which apparently contradicted Newton. Was that analysis or creativity or both? Anyway, he not only changed how we think about gravity, he changed the way we think about the entire cosmos.
 
Schrodinger borrowed an idea from de Broglie – that particles could behave like waves – and changed how we think about quantum mechanics. As Richard Feynman once said, ‘No one knows where Schrodinger’s equation comes from. It came out of Schrodinger’s head. You can’t derive it from anything we know.’
 

Saturday 11 June 2022

Does the "unreasonable effectiveness of Mathematics" suggest we are in a simulation?

 This was a question on Quora, and I provided 2 responses: one being a comment on someone else’s post (whom I follow); and the other being my own answer.

Some years ago, I wrote a post on this topic, but this is a different perspective, or 2 different perspectives. Also, in the last year, I saw a talk given by David Chalmers on the effects of virtual reality. He pointed out that when we’re in a virtual reality using a visor, we trick our brains into treating it as if it’s real. I don’t find this surprising, though I’ve never had the experience. As a sci-fi writer, I’ve imagined future theme parks that were fully immersive simulations. But I don’t believe that provides an argument that we live in a simulation, for reasons I provide in my Quora responses, given below.

 

Comment:

 

Actually, we create a ‘simulacrum’ of the ‘observable’ world in our heads, which is different to what other species might have. For example, most birds have 300-degree vision, plus they see the world in slow motion compared to us.

 

And this simulacrum is so fantastic it actually ‘feels’ like it exists outside your head. How good is that? 

 

But here’s the thing: in all these cases (including other species) that simulacrum must have a certain degree of faithfulness, or accuracy, to ‘reality’, because we interact with it on a daily basis, and, guess what? It can kill you.

 

But there is a solipsist version of this, which happens when we dream, but it won’t kill you, as far as we can tell, because we usually wake up.

 

Maybe I should write this as a separate answer.

 

And I did:

 

One word answer: No.

 

But having said that, there are 2 parts to this question, the first part being the quote in the title of Eugene Wigner’s famous essay. But I prefer this quote from the essay itself, because it succinctly captures what the essay is all about.

 

It is difficult to avoid the impression that a miracle confronts us here… or the two miracles of the existence of laws of nature and of the human mind’s capacity to divine them.

 

This should be read in conjunction with another famous quote; this time from Einstein:

 

The most incomprehensible thing about the Universe is that it’s comprehensible.

 

And it’s comprehensible because its laws can be rendered in the language of mathematics, and humans have the unique ability (at least on Earth) to comprehend that language, even though it appears to be never-ending.

 

And this leads into the philosophical debate going as far back as Plato and Aristotle: is mathematics invented or discovered?

 

The answer to that question is dependent on how you look at mathematics. Cosmologist and Fellow of the Royal Society, John Barrow, wrote a very good book on this very topic, called Pi in the Sky. In it, he makes the pertinent point that mathematics is not so much about numbers as the relationships between numbers. He goes further and observes that once you make this leap of cognitive insight, a whole new world opens up.

 

But here’s the thing: we have invented a system of numbers, most commonly to base 10 (but other systems as well), along with specific operators and notations that provide a language to describe and mentally manipulate the relationships between them. But the relationships themselves are not created by us: they become manifest in our explorations. To give an extremely basic example: prime numbers. You cannot create a prime number: primes simply exist, and you can’t change one into a non-prime number or vice versa. And this is very basic, because primes are called the atoms of mathematics: every other ‘natural’ number can be derived from them by multiplication.
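To make the ‘atoms of mathematics’ point concrete, here’s a minimal sketch in Python (my own illustration, not anything from Barrow’s book) that decomposes a natural number into the primes it’s ‘made of’:

```python
# A minimal sketch of the 'primes as atoms' idea: every natural number > 1
# factors into primes, and (per the fundamental theorem of arithmetic)
# that factorisation is unique.
def prime_factors(n: int) -> list[int]:
    """Return the prime factorisation of n in ascending order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5], i.e. 2^3 x 3^2 x 5
```

The decomposition is discovered, not invented: no choice of notation or number base changes which primes a number is built from.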

 

An interest in the stars started early among humans, and eventually some very bright people, notably Kepler and Newton, came to realise that the movement of the planets could be described very precisely by mathematics. And then Einstein, using Riemannian geometry, vectors, calculus, matrices and something called the Lorentz transformation, was able to describe the planets even more accurately, and to provide very accurate models of the entire observable universe, though recently we’ve come to the limits of this and we now need new theories and possibly new mathematics.


But there is something else that Einstein’s theories don’t tell us: the planetary orbits are chaotic, which means they are unpredictable over long timescales, and that means they could eventually unravel. And here’s another thing: chaotic systems are sensitively dependent on their initial conditions, so calculating a chaotic phenomenon exactly would require specifying those conditions to infinite decimal places. Therefore I contend the Universe can’t be a computer simulation. So that’s the long version of NO.
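To illustrate the point, here’s a toy demonstration of my own (the logistic map, a textbook chaotic system, not a model of planetary orbits) showing why finite-precision computation loses a chaotic trajectory:

```python
# Sensitive dependence on initial conditions: the logistic map
# x -> 4x(1 - x) is chaotic, so two starting points that differ by
# one part in a trillion become completely uncorrelated within ~50 steps.
def logistic(x: float, steps: int) -> float:
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a, b = 0.3, 0.3 + 1e-12
for steps in (10, 30, 50):
    print(steps, logistic(a, steps), logistic(b, steps))
```

The initial error roughly doubles at every step, so no finite number of decimal places survives for long, which is the nub of the argument.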

 

 

Footnote: Both my comment and my answer were ‘upvoted’ by Eric Platt, who has a PhD in mathematics (from the University of Houston) and is a former software engineer at UCAR (University Corporation for Atmospheric Research).


Wednesday 20 April 2022

How can I know when I am wrong?

 Simple answer: I can’t. But this goes to the heart of a dilemma that seems to plague the modern world. It’s even been given a name: the post-truth world.  

I’ve just read a book, The Psychology of Stupidity: Explained by Some of the World’s Smartest People, which is a collection of essays by philosophers, psychologists and writers, edited by Jean-Francois Marmion. It was originally published in French and translated into English; therefore, most of the contributors are French, though some are American.

 

I grew up constantly being reminded of how stupid I was, so, logically, I withdrew into an inner world, often fuelled by comic-book fiction. I also took refuge in books, which turned me into a know-it-all; a habit I’ve continued to this day.

 

Philosophy is supposed to be about critical thinking, and I’ve argued elsewhere that critical analysis is what separates philosophy from dogma, but accusing people of not thinking critically does not make them wiser. You can’t convince someone that you’re right and they’re wrong: the very best you can do is make them think outside their own box. And be aware that that’s exactly what they’re simultaneously trying to do to you.

 

Where to start? I’m going to start with personal experience – specifically, preparing arguments (called evidence) for lawyers in contractual engineering disputes, in which I’ve had more than a little experience. Basically, I’ve either prepared a claim or defended a claim by analysing data in the form of records – diaries, minutes, photographs – and reached a conclusion that had a trail of logic and evidence to substantiate it. But here’s the thing: I always took the attitude that I’d come up with the same conclusion no matter which side I was on.

 

You’re not supposed to do that, but it has advantages. The client, whom I’m representing, knows I won’t bullshit them and I won’t prepare a case that I know is flawed. And, in some cases, I’ve even won the respect of the opposing side. But you probably won’t be surprised to learn how much pressure you can be put under to present a case based on falsehoods. In the end, it will bite you.

 

The other aspect to all this is that people can get very emotional, and when they get emotional they get irrational. Writing is an art I do well, and when it comes to preparing evidence, my prose is very dispassionate, laying out an argument based on dated documents; better still, if the documents belong to the opposition.

 

But this is doing analysis on mutually recognised data, even if different sides come to different conclusions. And in a legal hearing or mediation, it’s the documentation that wins the argument, not emotive rhetoric. Most debates these days take place on social media platforms, where opposing sides have their own sources and their own facts, and each side accuses the other of being brainwashed.

 

And this leads me to the first lesson I’ve learned about the post-truth world. In an ingroup-outgroup environment – like politics – even the most intelligent people can become highly irrational. We see everyone on our side as righteous and worthy of respect, while everyone on the other side is untrustworthy and deceitful. Many people know about the infamous Robbers Cave experiment of 1954, where 2 groups of boys at a summer camp were manipulated into an ingroup-outgroup situation and tensions quickly escalated, though not violently. I’ve observed this in contractual situations many times over.

 

One of my own personal philosophical principles is that beliefs should be dependent on what you know and not the other way round. It seems to me that we do the opposite: we form a belief and then actively look for evidence that turns that belief into knowledge. And, in the current internet age, it’s possible to find evidence for any belief at all, like the Earth being flat.

 

And this has led to a world of alternate universes, where exactly opposite histories are being played out. The best known example is climate change, but there are others. Most recently, we’ve had a disputed presidential election in the USA and disputes over the efficacy of vaccines in combatting the coronavirus (SARS-CoV-2, which causes COVID-19). What all these have in common is that each side believes the other side has been duped.

 

You might think that something else these 3 specific examples have in common is left-wing versus right-wing politics. But I’ve learned that’s not always the case. One thing I do believe they have in common is open disagreement between purported experts, in combination with alleged conspiracy theories. It so happens that I’ve worked with technical experts for most of my working life, plus I read a lot of books and articles by people in scientific disciplines.

 

I’m well aware that there are a number of people who have expertise that I don’t have, and I admit to getting more than a little annoyed with politicians who criticise or dismiss people who obviously have much more expertise than they do in specific fields, like climatology or epidemiology. One only has to look to the US, where the previous POTUS, Donald Trump, was at the centre of all these issues: everything he disagreed with he called a ‘hoax’, and he was a serial promoter of conspiracy theories, including election fraud. Trump is responsible for one of those alternative universes, in which President-elect Joe Biden stole the election from him, even though there is ample testimony that Trump tried to steal the election from Biden.

 

So, in the end, it comes down to who you trust. And you probably trust someone who aligns with your ideological position or who reinforces your beliefs. Of course, I also have political views and my own array of beliefs. So how do I navigate my way?

 

Firstly, I have a healthy scepticism about conspiracy theories, because they require a level of global collaboration that’s hard to maintain in the manner they are reported. They often read or sound like movie scripts, with politicians being blackmailed or having their lives threatened and health professionals involved in a global conspiracy to help an already highly successful leader in the corporate world take control of all of our lives. This came from a so-called ‘whistleblower’, previously associated with WHO.

 

The more emotive and sensationalist a point of view, the more traction it has. Media outlets have always known this, and now it’s exploited on social media, where rules about accountability and credibility are a lot less rigorous.

 

Secondly, there are certain trigger words that warn me that someone is talking bullshit, like calling vaccines a ‘bio-weapon’ or the ‘death-jab’ (terms from different sources). However, I trust people who have a long history of credibility in their field; who have made it their life’s work, in fact. But we live in a world where they can be ridiculed by politicians, whom we are supposed to respect and follow.

 

At the end of the day, I go back to the same criteria I used in preparing arguments in contractual disputes, which is evidence. We’ve been living with COVID for 2 years now, and it is easy to find statistical data tracking the disease in a variety of countries, and the effect the vaccines have had. Of course, the conspiracy theorists will tell you that the data is fabricated. The same goes for evidence involving climate change. There was a famous encounter between physicist and television presenter, Brian Cox, and a little-known Australian politician, who claimed that the graphs Cox presented, produced by NASA, had been corrupted.

 

But, in both of these cases, the proof of the pudding is in the eating. I live in a country where we followed the medical advice, underwent lockdowns and got vaccinated, and we’re now effectively living with the virus. When I look overseas, at countries like America, it was a disaster overseen by an incompetent President, who advocated all sorts of ‘crank cures’, the most notorious being bleach, not to mention UV light. At one point, the US accounted for more than 20% of the world’s recorded deaths.

 

And it’s the same with climate change where, again, the country I live in faced record fires in 2019/20 and now floods, though this is happening all over the globe. The evidence is in our face, but people are still in denial. Admitting we’re wrong means confronting a lot of cognitive dissonance, and that’s part of the problem.

 

Philosophy teaches you that you can have a range of views on a specific topic, and as I keep saying: only future generations know how ignorant the current generation is. That includes me, of course. I write a blog, which hopefully outlives me and one day people should be able to tell where I was wrong. I’m quite happy for that, because that’s how knowledge grows and progresses.


Friday 18 March 2022

Our eternal fascination with Light and Dark

Someone on Facebook posted one of those inane questions: if you could delete one thing in the world, what would it be? Obvious answers included war, hate, evil, and the like; so negative emotive states and consequences. My answer was, ‘Be careful what you wish for’.

What I find interesting about this whole issue is the vicarious relationship we have with the ‘dark side’ through the lens of fiction. If one thinks about it, it starts early with fairy tales and Bible stories. Nightmares are common in our childhood where one wakes up and is too scared to go back to sleep. Fear is an emotion we become familiar with early in our lives; I doubt that I was an exception, but it seems to me that everyone tries to keep children innocent these days. I don’t have children, so I might have it wrong.

 

Light and dark exist in the real world, but we try to confine them to the world of fiction – it’s a universal theme found in operas, mythologies and TV serials. I write fiction and I’m no exception. If there were no dark in my stories, they’d have no appeal. You have to have nemeses, figures of various shades of grey to juxtapose with the figures of light, even if the light shines through flawed, imperfect glass.

 

In life we are tested, and we judge ourselves accordingly. Sometimes we pass and sometimes we fail. The same thing happens with characters in fiction. When we read a story we become actors, which makes us wonder how we’d behave in the same situation. I contend that the same thing happens in dreams. As an adult, I’ve always seen dreams as what-if scenarios and it’s the same with stories. I’ve long argued that the language of stories is the language of dreams and I think the connection is even stronger than that. I’m not surprised that storytellers will tell you that they dream a lot.

 

In the Judaeo-Christian religion I grew up with, good and evil were stark contrasts, like black and white. You have God, Christ and Satan. When I got older, I thought it a bit perverse that one feared God as much as Satan, which led me to the conclusion that they weren’t really that different. It’s Christ who is the good guy, willing to forgive the people who hate him and want him dead. I’m talking about them as fictional characters, not real people. I’m sure Jesus was a real person but we only have the myth by which to judge him.

 

The only reason I bring all this up, is because they were the template we were given. But fearing someone you are meant to love leads to neurosis, as I learned the hard way. A lot of people of my generation brought up the next generation as atheists, which is not surprising. The idea of a judgemental, schizophrenic father was past its use-by-date.

 

There is currently a conflict in Ukraine, which has grabbed the world’s attention in a way that other wars have not. It’s partly because of our Euro-centric perspective, and the fact that the 2 biggest and world-changing conflicts of the 20th Century both started in Europe. And the second one, in particular, has similarities, given it started with a dictator invading a neighbour, when he thought the world would look the other way.

 

There is a fundamental flaw in the human psyche that we’ve seen repeated throughout history. We have a tendency to follow charismatic narcissistic leaders, when you think we should know better. They create an army (not necessarily military) of supporters, but for whom they have utter contempt. This was true of Hitler, but also true of Trump and Putin.

 

Ukraine’s leader, Volodymyr Zelenskyy, like Trump, became a TV celebrity, but in a different vein. He was a satirical comedian who sent up the country’s then leader, who was a Russian stooge, and then ran for office, winning with 70% of the vote. I believe this is the real reason that Putin wants to bring him down. If he’d done the same thing in Russia, he would have been assassinated while still a TV personality. It’s been widely reported that Putin has attempted to assassinate him at least twice since the invasion, but assassinating opponents in a foreign country is a Putin specialty.

 

Zelenskyy and Putin represent, in many Western people’s minds, a modern day parable of good and evil. And, to me, the difference is stark. Putin, like all narcissists, only cares about himself, not the generals that have died in his war, not the barely out of school conscripts he’s sent into battle and certainly not the Russian people who will suffer enormous deprivations if this continues for any length of time. On the other hand, Zelenskyy doesn’t care about his self-preservation, because he would rather die for a principle than live the rest of his life in shame for deserting his country when it needed him most. Zelenskyy is like the fictional hero we believe in but know we couldn’t emulate.

 

It's when we read or watch fiction that the difference between right and wrong seems obvious. We often find ourselves telling a character, ‘don’t do that, don’t make that decision’, because we can see the consequences, but, in real life, we often seem to lose that compass.

 

My father was in a war and I know from what he told me that he didn’t lose that particular compass, but I also know that he once threatened to kill someone who was stealing from the wounded he was caring for. And I’ve no doubt he would have acted on it. So his compass got a bit bent, because he’d already seen enough killing to last several lifetimes.

 

I’ve noticed a theme in my own writing, which is subconscious, not intentional, and that is my protagonists invariably have their loyalty tested and it ends up defining them. My villains are mostly self-serving autocrats who have a hierarchical view of humanity where they logically belong at the top.

 

This is a meandering post, with no conclusion. Each of us has ambitions and desires and flaws. Few of us are ever really tested, so we make assumptions based on what we like to believe. I like something said by Socrates, who’d also been in battle.

 

To live with honour in this world, actually be what you try to appear to be.


Friday 28 January 2022

What is existentialism?

A few years back, I wrote a ‘vanity piece’, My philosophy in 24 dot points, which I admit is a touch pretentious. But I’ve been prompted to write something more substantive, in a similar vein, whilst reading Gary Cox’s How to Be an Existentialist; or How to Get Real, Get a Grip and Stop Making Excuses. I bought this tome (the 10th Anniversary Edition) after reading an article by him on ‘Happiness’ in Philosophy Now (Issue 147, Dec 2021/Jan 2022). Cox is an Honorary Research Fellow at the University of Birmingham, UK. He’s written other books, but this one is written specifically for a general audience, not an academic one. This is revealed in some of the language he uses, like ‘being up shit creek’.

 

I didn’t really learn anything about existentialism until I studied Sartre in an off-campus university course, in my late 40s. I realised that, to all intents and purposes, I was an existentialist, without ever knowing what one was. I did write about existentialism very early in the life of this blog, in the context of my own background. The thing is that one’s philosophical worldview is a product of one’s milieu, upbringing and education, not to mention the age in which one lives. I grew up in a Western culture, post WW2, and I think that made me ripe for existentialist influences without being conscious of it. I lived in the 60s when there was a worldwide zeitgeist of questioning social mores against a background of a religious divide, the Vietnam war and the rise of feminism. 

 

If there is a key word or mantra in existentialism, it’s ‘authenticity’. It’s the key element in my 3 Rules for Humans post, and it’s also the penultimate chapter in Cox’s aforementioned book. The last chapter is on counselling and is like a bookend.

 

As Cox himself points out, existentialism is not a ‘school’ of philosophy in the way ‘analytical philosophy’ or ‘logical positivism’ are. There’s not really a set of rules – it’s more about an attitude and how to live a life without losing your soul or self-respect. It’s not an epistemology, nor an ideology, even though it’s probably associated with a liberal outlook, as I hope will become clear.

 

Many commentators associate existentialism with atheism, the absurd and nihilism. I agree with Cox that it’s actually the opposite of nihilism; if anything, it’s about finding purpose. As I wrote in a post last year:

 

If the Universe has any meaning at all, it’s because it created sentient beings who find meaning against the odds that science tells us are astronomical, both literally and figuratively. Existentialism is about finding purpose in an absurd universe, which is the opposite of nihilism.

 

And that’s the most important lesson of existentialism: if you are to find a purpose, only you can do that; it’s not dependent on anyone else, be they family, a spouse, an employer or a mentor. And logically, one could add, it’s not dependent on God either.

 

Cox doesn’t talk about God at all, but he does talk quite a lot about consciousness and about it being ‘nothing’ (materialistically). He very fleetingly gives mathematics as an example of something else that’s not ‘corporeal’, specifically numbers. Very curious, as I think that both mathematics and consciousness are ‘special’ in that they are distinct, yet intimately connected to the physical world, but that’s another topic.

 

He also talks about consciousness having a special relationship with time. I’ve said that consciousness is the only thing that exists in a constant present, whereas Cox says the opposite, but I believe we mean the same thing. He says consciousness is forever travelling from the past to the future, whereas I say that the future is forever becoming the past while only consciousness exists in the present – the experiential outcome is the same.

 

So how does God enter the picture? God only exists in someone’s consciousness – it’s part of one’s internal state. So, you can be ‘authentic’ and believe in God, but it’s totally an individualistic experience – it can’t be shared. That’s my interpretation, not anyone else’s, I should emphasise.

 

An important, even essential, aspect of all this is a belief in free will. You can’t take control of your life if you don’t have a belief in free will, and I would argue that you can’t be authentic either. And, logically, this has influenced my prejudices in physics and cosmology. To be consistent, I can’t believe we live in a deterministic universe, and have argued strongly on that point, opposing better minds than mine.

 

Existentialism has some things in common with Buddhism, which might explain why Eastern philosophy seemed to have an influence on the 60s zeitgeist. Having said that, I think the commonality is about treating life as a journey that’s transient. Accepting the impermanence and transience of life, I believe, is part of living authentically.

 

And what do I mean by ‘authentic’ in this context? Possibly, I never really appreciated this until I started writing fiction. I think anyone who creates art strives to be authentic, which means leaving your ego out of your work. I try to take the attitude that it’s my characters’ story, not mine. That’s very difficult to explain to anyone who hasn’t experienced it, but I know that actors often say something similar.

 

In my professional life, my integrity was everything to me. I often worked in disputatious environments and it was important to me that people could trust my word and my work. Cox talks about how existentialism intrinsically incorporates our interactions with others. 

 

Freedom is a much-abused, misappropriated term, but in existentialism it has a specific meaning and an interdependent relationship with responsibility – you can’t divorce one from the other. Freedom, in existentialism, means ‘free to choose’, hence the emphasis on free will. It also means, if you invoke the term, that the freedom of others is just as important as your own.

 

One can’t talk about authenticity without talking about its opposite, ‘bad faith’ (mauvaise foi), a term coined by Sartre. Bad faith is something that most of us have experienced, be it working in a job we hate, staying in a destructive relationship or not pursuing a desired goal in lieu of staying in our comfort zone.

 

Of course, sometimes we are in a situation outside our control, so what do we do? Speaking from personal experience, I think one needs to take ownership of one’s response to it; one needs to accept that only YOU can do something about it and not someone else. I’ve never been a prisoner-of-war, but my father was, and he made 3 attempts to escape, because, as he told the Commandant, ‘It’s my job’.

 

I’ve actually explored this in my own fiction. In my last story, two of my characters (independently) find themselves in circumstances of ‘bad faith’. I only analyse this in hindsight – don’t analyse what you write while you’re writing. In fact, one of those characters is attracted to another character who lives authentically, though neither of them ‘think’ in those terms.



Addendum: Someone asked me to come up with a single sentence to describe this. After sleeping on it, I came up with this:


Be responsible for what you are and who you become. That includes being responsible for your failures. (2 sentences)


Saturday 25 December 2021

Revisiting Donald Hoffman’s alternative theory of evolution

Back in November 2016, so 5 years ago, I wrote a post in response to an academic paper by Donald Hoffman and Chetan Prakash called Objects of Consciousness, where I specifically critiqued their ideas on biological evolution. Although the paper is co-authored, I believe this particular aspect is predominantly Hoffman’s, based on an article he wrote for New Scientist, where he expressed similar views. One of his key arguments was that natural selection favours ‘fitness’ over ‘truth’.

 

...we find that natural selection does not, in general, favor perceptions that are true reports of objective properties of the environment. Instead, it generally favors perceptual strategies that are tuned to fitness.

 

One way to use fewer calories is to see less truth, especially truth that is not informative about fitness. (My emphasis)

 

What made me revisit this was an interview in Philosophy Now (Issue 147, Dec 2021/Jan 2022) with Samuel Grove, who recently published Retrieving Darwin’s Revolutionary Idea: The Reluctant Radical. According to Grove, Darwin was reluctant to publish The Descent of Man, because applying natural selection to humans was controversial, despite the success of On the Origin of Species by Means of Natural Selection (the full title). The connection to Hoffman’s argument is that Darwin struggled with the idea that evolution could ‘select’ for ‘truth’. To quote Grove:

 

Natural selection is premised on three laws: the law of inheritance, the law of variation, and the law of superfecundity (where organisms produce more offspring than can possibly survive). Together, these laws produce selection, and over the course of time, evolution. Well, Darwin’s question was, how could evolution produce a subject capable of knowing these very laws? Or, why would evolution select for fidelity to truth or laws? Selection favours survival, not truth. (My emphasis again)

 

Darwin turned to arguments that, as Grove points out, were ‘the common garden variety racism of the time’ – specifically, ‘group selection’ that favoured Anglo-Saxon groups. Apparently, Darwin was reluctant to consider ‘group selection’ (as opposed to ‘individual selection’), but did so because it led to a resolution that would have been politically acceptable in his day. I will return to this point later.

 

So, even according to Darwin, Hoffman may have a point, though I’m not sure that Darwin and Hoffman are even talking about the same idea of ‘truth’. More on that later.

 

For those unfamiliar with Hoffman, his entire argument centres on the fundamental idea that ‘nothing exists unperceived, including space and time’. For more details, read my previous post, or read his co-authored paper with Prakash. I need to say upfront that I find it hard to take Hoffman seriously. Every time I read or listen to him, I keep expecting him to say, ‘Ah, see, I fooled the lot of you.’ His ideas only make sense to me if he believes we live in a computer simulation, which he’s never claimed. In fact, that would be my first question to him, if I ever met him. It’s an idea that has some adherents. Just on that, I would like to point out that chaos is incomputable, and the Universe is chaotic on a number of levels, including evolution, as it turns out.

 

In a previous life, I sometimes became involved in contractual disputes on major engineering projects (in Australia and US), preparing evidence for lawyers, and having to address opponents’ arguments. What I found in a number of cases, was that people prepared simple arguments that were nevertheless compelling. In fact, they often delivered them as if they were a fait accompli. In most of these cases, I found that by digging a little deeper, they could be challenged successfully. I have to admit that I’m reminded of this when I examine Hoffman’s argument on natural selection favouring ‘fitness’ over ‘truth’.

 

Partly, this is because his arguments highlight contradictions in his own premise and partly because one of his key arguments is contradicted by evidence, which, I concede, he may not be aware of.

 

For a start, what does Hoffman mean by ‘fitness’?

 

He talks about fitness in terms of predators and prey:

 

But in the real world where predators are on the prowl and prey must be wary, the race is often to the swift. It is the slower gazelle that becomes lunch for the swifter cheetah.

 

This quote is out of context: he’s arguing that ‘swiftness’ of response, be it the gazelle’s or the cheetah’s, favours less information (therefore less time taken) over more information (therefore time lost). Leaving aside the fact that the survival of either animal is dependent on the accuracy of their ‘modelling’ of their environment, if the animal being chased or doing the chasing ‘doesn’t exist unperceived’, then they might as well be in a dream. In fact, we often find ourselves being chased in a dream, which has no consequences for our ‘survival’ in real life. The argument contradicts the premise.

 

Hoffman and Prakash quote Stephen Palmer from a ‘graduate-level textbook’ (1999):

 

Evolutionarily speaking, visual perception is useful only if it is reasonably accurate . . . Indeed, vision is useful precisely because it is so accurate. By and large, what you see is what you get. When this is true, we have what is called veridical perception . . . perception that is consistent with the actual state of affairs in the environment. This is almost always the case with vision . . .  (Authors’ emphasis)

 

Hoffman and Prakash then argue that ‘using Monte Carlo simulations of evolutionary games and genetic algorithms, we find that natural selection does not, in general, favor perceptions that are true reports of objective properties of the environment’. In other words, they effectively argue that Palmer’s emphasis on ‘veridical perception’ is wrong. I can’t argue with their Monte Carlo simulations, because they don’t provide the data. However, real world evidence would suggest that Palmer is correct.
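To clarify what such a simulation might look like, here’s a toy game of my own construction in Python – it is not Hoffman and Prakash’s actual model, whose details they don’t provide. The trick that makes ‘fitness beat truth’ is a payoff that is non-monotonic in the true quantity (too little or too much of a resource is bad):

```python
import math
import random

# Toy 'fitness vs truth' game (my reconstruction of the idea, not their model).
# Two foragers choose between two patches with resource levels in [0, 100].
# Fitness is a Gaussian peaking at 50: more resource is NOT always better.
def fitness(r: float) -> float:
    return math.exp(-((r - 50.0) ** 2) / (2 * 15.0 ** 2))

rng = random.Random(42)
trials = 100_000
truth_total = tuned_total = 0.0
for _ in range(trials):
    r1, r2 = rng.uniform(0, 100), rng.uniform(0, 100)
    truth_total += fitness(max(r1, r2))               # 'veridical': pick more resource
    tuned_total += fitness(max(r1, r2, key=fitness))  # 'tuned': pick higher payoff

print(f"truth strategy average fitness: {truth_total / trials:.3f}")
print(f"tuned strategy average fitness: {tuned_total / trials:.3f}")
```

The tuned strategy wins, but only because the payoff isn’t monotonic in the truth; the toy says nothing about whether real perception works that way, which is where the eagle example below comes in.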

 

I read a story on Quora by a wildlife ranger about eagles who have had one eye damaged, usually in intra-species mid-air fights. In nearly all cases (he described one exception), an eagle who is blind in one eye needs to be euthanised because they would invariably starve to death due to an inability to catch prey. So here you have ‘fitness’ dependent on vision being accurate.

 

Leaving aside all this nit-picking about natural selection favouring ‘fitness’ over ‘truth’, how does it support their fundamental thesis that reality only exists in the mind? According to them, their theory of evolution ‘proves’ that reality doesn’t exist unperceived. Can you even have evolution if reality doesn’t exist (except in the mind)?

 

And this brings me back to Darwin, because what he didn’t consider was that, in the case of humans, cultural evolution has overtaken biological evolution, and this is unique to humanity. I wrote another post where I argue that The search for ultimate truth is unattainable, but there are 'truths' we have found throughout the history of our cultural evolution and they are in mathematics. It’s true that evolution didn’t select for this; it’s an unexpected by-product, but it has led to the understanding of laws governing the very Universe that even Darwin would be amazed to know. 



Sunday 21 November 2021

Cancel culture – the scourge of our time

There are many things that cause me some anguish at the moment, not least that Donald Trump could easily be re-elected POTUS in 2024, despite deliberately undermining and damaging the very institution he wants to lead, which is American democracy. It’s not an exaggeration to say that he’s attacked it at its core.


This may seem a mile away from the topic I’ve alluded to in the title of my post, but they both seem to be symptoms of a divisiveness I haven’t seen since the Vietnam war. 

 

The word, ‘scourge’, is defined as ‘a whip used as an instrument of punishment’; and that’s exactly how cancel culture works, with social media the perfect platform from which to wield it.

 

In this weekend’s Good Weekend magazine (Fairfax Group), the feature article is on this very topic. But I would like to go back to the previous weekend, when another media outlet, Murdoch’s Weekend Australian Magazine, published an article on the well known atheist, Richard Dawkins. It turns out that at the ripe old age of 80, Dawkins has been cancelled. To be precise, he had his 1996 Humanist of the Year award withdrawn by the American Humanist Association (AHA) earlier this year, because, in 2015, he tweeted a defence of Rachel Dolezal (a white chapter president of the NAACP, the National Association for the Advancement of Colored People) who had been vilified for identifying as Black.

 

Of course, I don’t know anything about Rachel Dolezal or the context of that stoush, but I can identify with Dawkins, even though I’ve never suffered the same indignity. Dawkins and I are of a similar mould, though we live in different strata of society. In saying that, I don’t mean that I agree with all his arguments, because I obviously don’t, but we are both argumentative and are not shy in expressing our opinions. I really don’t possess the moral superiority to throw stones at Dawkins, even though I have.

 

I remember my father once telling me that if you admired an Australian fast bowler (he had someone in mind) then you also had to admire an English fast bowler (of the same generation), because they had the exact same temperament and wicket-taking abilities. Of course, that also applies to politicians. And it pretty much applies to me and Dawkins.

 

On the subject of identifying as ‘black’, I must tell a story related to me by a friend I knew when I worked in Princeton in 2001/2. She was a similar age to me and originally from Guyana. In fact, she was a niece of West Indies champion cricketer, Lance Gibbs, and told me about attending his wedding when she was 8 years old (I promise no more cricketing references). But she told me how someone she knew (outside of work) told her that she ‘didn’t know what it was like to be black’. To which she replied, ‘Of course I know I’m black, I only have to look in the mirror every morning.’ Yes, it’s funny, but it goes to a deeper issue about identity. So a black person, who had lived their entire life in the USA, was telling another black person, who had come from outside the US, that they didn’t know what it was like to be ‘black’.

 

Dawkins said that, as a consequence, he’d started to self-censor, which is exactly what his detractors want. If Dawkins has started to self-censor, then none of us are safe or immune. What hurt him, of course, was being attacked by people on the Left, which he mostly identifies with. And, while this practice occurs on both sides, it’s on the Left where it has become most virulent. 

 

“I self-censor. More so in recent years. Why? It’s not a thing I’ve done throughout my life, I’ve always spoken my mind openly. But we’re now in a time when if you do speak your mind openly, you are at risk of being picked up and condemned.”

 

“Every time a lecturer is cancelled from an American university, that’s another God knows how many votes for Trump.”

 

And this is the thing: the Right loves nothing more than the Left turning on itself. It’s insidious, self-destructive and literally soul-destroying. In the Good Weekend article, they focus on a specific case, while also citing other cases, both in Australia and America. The specific case was actor Hugh Sheridan having a Sydney Festival show cancelled – one he’d really set his sights on – because he was cast as a transgender person, which created outrage in the LGBTQIA+ community. Like others cited in the article, he contemplated suicide, which prompted close friends to monitor him. This is what it’s come to. It’s a very lengthy article, which I can’t do justice to in this post, but there is a perversion here: all the shows and people being targeted are actually bringing diversity of race and sexuality into the public arena, and being crucified by the very people they represent. The conservatives, wowsers and Bible-bashers must love it.

 

This is a phenomenon that is partly, if not mostly, generational, and it’s amplified by social media. People are being forced to grovel.

Emma Dawson, head of the Labor-aligned Per Capita think tank (Labor being an Australian political party, for overseas readers), told the Good Weekend: “[cancel culture is] more worrying to me than just about anything other than far-right extremism. It is pervasive among educated young people; very few are willing to question it.”

In 2019, Barack Obama warned a group of young people: “This idea of purity, and you’re never compromised and always politically woke... you should get over that quickly. The world is messy.”

And this is the nub of the issue: cancel culture is all about silencing any debate, and, without debate, you have authoritarianism, even though it’s disguised as its opposite.

In the same article, the author, James Button, argues that it’s no coincidence that this phenomenon emerged alongside the rise of Donald Trump.

The election of Donald Trump horrified progressives. Here was a president – elected by ordinary Americans – who was racist, who winked at neo-Nazis and who told bare-faced lies in a brazen assertion of power while claiming that the liars were progressive media. His own strategy adviser, Stephen Bannon, said that the way to win the contest was to overwhelm the media with misinformation, to “flood the zone with shit”.

And they succeeded so well that America is more divided than it has been since the Civil War.


To return to Hugh Sheridan: I think he epitomises this situation, at least as it’s being played out in Australia, in that it’s the Arts that are coming under attack, and from the Left, it has to be said. Actors and writers (like myself) often portray characters whose backgrounds are different from our own. To give a recent example: ABC TV, which produces some outstanding free-to-air dramas with internationally renowned casts while everything else is going into subscription streaming services, earlier this year produced and broadcast a series called The Newsreader, set in the 1980s, when a lot was happening both locally and overseas. ‘At the 11th AACTA (Australian Academy of Cinema and Television Arts) awards, the show was nominated for more awards than any other program’ (Wikipedia).

A key plotline of the show was that the protagonist was gay but not openly so. The point is that I assume the actor was straight (although I don’t really know), but that’s what actors do. God knows, there have been enough gay actors who have played straight characters (Sir Ian McKellen, who played Gandalf, as well as Shakespearean roles). So why crucify someone who is part of the LGBTQIA+ community for playing a transgender role? He was even accused of being homophobic and transphobic. He tweeted back, “you’re insane”, which only resulted in him being trolled for accusing his tormentors of being ‘insane’.

Someone recently asked me why I don’t publish what I write anymore. There is more than one reason, but one is fear of being cancelled. I doubt a publisher would publish what I write, anyway. But also, I suffer from impostor syndrome, in that I genuinely feel like an impostor and don’t need someone to tell me so. The other thing is that I simply don’t care; I don’t feel the need to publish to validate my work.


Saturday 6 November 2021

Reality and our perception of it

The latest issue of Philosophy Now (Issue 146, Oct/Nov 2021) has as its theme ‘Reality’. The cover depicts Alice falling down the rabbit hole, with the annotated question, What’s Really Real? I was motivated (inspired is the wrong word) to write a letter to the Editor after reading an essay by Paul Griffiths titled Against Direct Realism. According to the footnote at the end of the article: ‘Dr Paul H. Griffiths has a background in physics and engineering, and a longstanding interest in the philosophy and science of perception.’ I have a background in engineering and an interest in philosophy and science (physics in particular), but there the similarity ends.

Griffiths gives an historical account, mostly of the last century, concerning problems and points of view on ‘direct realism’ and ‘indirect realism’, using terms like ‘disjunctivism’ and ‘representationalism’, making me wonder if all of philosophy can be reduced to a collection of isms. To be fair to Griffiths, he’s referencing what others have written on this topic, and how it’s led to various schools of thought. I took the easy way out and didn’t address any of that directly, nor reference any of his many citations. Instead, I simply gave my interpretation of the subject based on what I’ve learned from the science, and then provided my own philosophical twist.

I’ve covered a lot of this before, when I wrote an essay on Kant. Griffiths doesn’t mention Kant, but arguably that’s where this debate began: Kant argued that we can never know the ‘thing-in-itself’, only a perception of it. Just to address that point, I’ve argued that the thing-in-itself varies depending on the scale at which one observes it. It also depends on things like what wavelength of radiation you might use to probe it.

But, in the context of direct realism versus indirect realism, various creatures perceive reality in different ways, which I allude to in my 400-word response. If I were to place myself in one of Griffiths’ categories, I expect I’m an ‘indirect realist’, because I believe in an independent reality and that my ‘perception’ of it is unique to my species, meaning other species would perceive it differently, either because they have different senses or because their senses can perceive parts of the spectrum that mine can’t. For example, some insects and birds can see in the ultraviolet range, and we can see some colours that other primates can’t.

I never mention those terms, or even Kant, in my missive to the Editor. I do, however, mention the significance of space and time, both to reality and to our perception of it. Here is my response:

Paul Griffiths’ essay, Against Direct Realism (Issue 146, October/November 2021), discusses both the philosophy and science of ‘perception’, within the last century in particular. There are two parts to this topic: an objective reality and our ability to perceive it. One is obviously dependent on the other, and they need to be addressed in that order.

The first part is whether there is an objective reality at all. Donald Hoffman claims that ‘nothing exists unperceived, including space and time’, and that there are only ‘conscious agents’. This is similar to the argument that we live in a simulation. There is, of course, one situation where this happens, and that’s when we are dreaming. Our brains create a simulacrum of reality in our minds, which we can not only see but sometimes feel. We’re only aware that it’s not reality when we wake up.

There is a major difference between this dream state and ‘real life’, and that is that reality can be fatal – it can kill you. This is key to understanding both aspects of this question. It’s not contentious that our brains have evolved the remarkable ability to model this reality, and that is true of other creatures as well, yet we perceive different things, colour being the most obvious example, which only occurs in a creature’s mind. Birds can see with almost 300-degree vision, and bats and dolphins probably ‘see’ in echo-location, which we can’t even imagine. Not only that, but time passes at different rates for different creatures, which we can mimic with time-lapse or slow-motion cinematography.

But here’s the thing: all these ‘means’ of perception are about keeping us, and all these creatures, alive. Therefore, the model in our minds must match the external reality with some degree of accuracy, yet it does even better than that, because the model even appears to be external to our heads. What’s more, the model predicts the future, otherwise you wouldn’t be able to catch a ball thrown to you.*

There is one core attribute of both reality and its perception that is rarely discussed, and that is space and time. We live in a universe with three spatial dimensions and one time dimension, so the models our brains create need to reflect that. The reason we can’t imagine a higher-dimensional space, even though we can represent it mathematically, is that we don’t live in one.

* There is a 120-millisecond delay between the action and the perception, and your brain compensates for it.
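
A postscript on the letter’s claim that we can represent higher-dimensional space mathematically even though we can’t visualise it: the Pythagorean distance formula simply doesn’t care how many coordinates you feed it. Here’s a minimal sketch in Python (my own illustration, not anything from Griffiths’ essay or my letter):

import math

# Pythagoras generalised: sum the squared differences over however
# many coordinates the two points have, then take the square root.
def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(distance((0, 0, 0), (1, 1, 1)))        # 3 dimensions: ~1.732
print(distance((0, 0, 0, 0), (1, 1, 1, 1)))  # 4 dimensions: 2.0

The calculation is no harder in four dimensions than in three; it’s only our spatial intuition that gives out.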