Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Wednesday, 24 February 2021

Relativity makes sense if everything is wavelike

When I first encountered relativity theory, I took an unusual approach: I thought about it in terms of waves. The point is that c can always be constant while the wavelength (λ) and frequency (f) change accordingly, because c = λ x f. This is a direct consequence of v = s/t (where v is velocity, s distance and t time). We all know that velocity (or speed) is just distance divided by time. And λ represents distance while f represents 1/t.
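
For anyone who wants to check the arithmetic, here is a minimal sketch in Python (my own illustrative numbers, using green light; nothing hangs on the choice):

# A quick check that c = λ x f is just v = s/t in disguise,
# using green light as an illustrative example (my numbers, nothing special).
c = 299_792_458.0           # speed of light in m/s
wavelength = 500e-9         # 500 nm, roughly green light
frequency = c / wavelength  # about 6 x 10^14 Hz

period = 1 / frequency      # time for one wave to pass
v = wavelength / period     # v = s/t with s = one wavelength, t = one period
print(f"frequency = {frequency:.3e} Hz, recovered speed = {v:.0f} m/s")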

So, here’s the thing: it occurred to me that while wavelength and frequency would change according to the observer’s frame of reference (meaning their velocity relative to the source), the number of waves over a specific distance would be the same for both observers, even though it’s impossible to measure the number of waves directly. And a logical consequence of the change in wavelength and frequency is that the observers would ‘measure’ different distances and different periods of time.

 

One of the first confirmations of relativity theory was to count muons (particles created by cosmic rays high in the Earth’s atmosphere) reaching a detector at ground level. Measurements showed that more particles arrived than predicted by their half-life when stationary. However, allowing for relativistic effects (the particles travel at high fractional lightspeeds), the number of particles detected corresponded to time dilation (a longer half-life, so more particles arrived). This means that, from the perspective of the observers on the ground, if the particles were waves, then the frequency slowed, which equates to time dilation - clocks slowing down. It also means that the wavelength was longer, so the distance they travelled was further.
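
As a rough sketch of the numbers involved (my own illustrative figures: muons at 0.98c, a proper half-life of about 1.56 microseconds, and roughly 15 km of atmosphere), the difference time dilation makes to the surviving fraction is enormous:

import math

# Illustrative numbers only (not from the post): muons at 0.98c,
# proper half-life ~1.56 microseconds, ~15 km of atmosphere.
c = 3.0e8                      # m/s
v = 0.98 * c
half_life = 1.56e-6            # seconds, in the muon's rest frame
distance = 15_000.0            # metres, in the ground frame

gamma = 1 / math.sqrt(1 - (v / c) ** 2)               # Lorentz factor, about 5
flight_time = distance / v                            # ground-frame travel time

naive = 0.5 ** (flight_time / half_life)              # no time dilation
dilated = 0.5 ** (flight_time / (gamma * half_life))  # half-life stretched by gamma

print(f"gamma = {gamma:.2f}")
print(f"surviving fraction without dilation: {naive:.2e}")
print(f"surviving fraction with dilation:    {dilated:.2e}")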

 

If the particles travelled slower (or faster), then wavelength and frequency would change accordingly, but the number of waves would be the same. Of course, no one takes this approach - why would you calculate the Lorentz transformation on wavelength and frequency and multiply by the number of waves, when you could just do the same calculation on the overall distance and time?
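
Still, the bookkeeping is easy to sketch. Assuming the particle carries a notional ‘clock wave’ with some proper frequency (the actual value is irrelevant), the wave count over the journey comes out the same whether you count it in the ground frame (slower clock, longer distance and time) or in the particle’s frame (proper clock rate, contracted distance):

import math

# Sketch of the 'constant number of waves' bookkeeping (my own toy numbers).
c = 3.0e8
v = 0.98 * c
L = 15_000.0       # atmosphere depth in the ground frame (m)
f_proper = 1.0e6   # notional proper frequency of the particle's 'clock wave' (Hz)

gamma = 1 / math.sqrt(1 - (v / c) ** 2)

# Ground frame: the moving clock runs slow, but distance and time are longer.
t_ground = L / v
f_ground = f_proper / gamma
waves_ground = f_ground * t_ground

# Particle frame: the clock runs at its proper rate, but the distance is contracted.
t_particle = (L / gamma) / v
waves_particle = f_proper * t_particle

print(f"waves counted in ground frame:   {waves_ground:.6f}")
print(f"waves counted in particle frame: {waves_particle:.6f}")  # same number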

 

Of course, when it comes to signals of communication, they all travel at c, and changes in frequency and wavelength also occur as a consequence of the Doppler effect. This can create confusion in that some people naively believe that relativity can be explained by the Doppler effect. However, the Doppler effect changes according to the direction something or someone is travelling while relativistic effects are independent of direction. If you come across a decent mathematical analysis of the famous ‘twin paradox’, you’ll find it allows for both the Doppler effect and relativistic effects, so don’t get them confused.
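
To see the distinction numerically (an arbitrary speed of 0.6c, purely for illustration): the relativistic Doppler factor flips depending on whether the source approaches or recedes, whereas the time dilation factor γ is the same either way.

import math

# Doppler shift vs time dilation at an arbitrary 0.6c (illustrative only).
beta = 0.6
gamma = 1 / math.sqrt(1 - beta ** 2)

doppler_receding = math.sqrt((1 - beta) / (1 + beta))    # frequency ratio, moving away
doppler_approaching = math.sqrt((1 + beta) / (1 - beta)) # frequency ratio, moving closer

print(f"gamma (direction-independent):      {gamma:.3f}")
print(f"Doppler factor, source receding:    {doppler_receding:.3f}")    # red-shifted
print(f"Doppler factor, source approaching: {doppler_approaching:.3f}") # blue-shifted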

 

Back to the cosmic particles: from their inertial perspective, they are stationary and the Earth with its atmosphere is travelling at high fractional lightspeed relative to them. So the frequency of their internal clock would be the same as if they were stationary, which is higher than what the observers on the ground would have deduced. Using the wave analogy, higher frequency means shorter wavelength, so the particles would ‘experience’ the distance to the Earth’s surface as shorter, but again, the number of waves would be the same for all observers.

 

I’m not saying we should think of all objects as behaving like waves - despite the allusion in the title - but Einstein always referred to clocks and rulers. If one thinks of these clocks and rulers in terms of frequencies and wavelengths, then the mathematical analogy of a constant number of waves is an extension of that. It’s really just a mathematical trick, which allows one to visualise what’s happening.


Saturday, 6 February 2021

What is scientism?

I’m currently reading a book (almost finished, actually) by Hugh Mackay, The Inner Self: The joy of discovering who we really are. Mackay is a psychologist but he writes very philosophically, and the only other book of his I’ve read is Right & Wrong: How to decide for yourself, which I’d recommend. In particular, I liked his chapter titled, The most damaging lies are the ones we tell ourselves.

You may wonder what this has to do with the topic, but I need to provide context. In The Inner Self, Mackay describes 20 ‘hiding places’ (his term) where we hide from our ‘true selves’. It’s all about living a more ‘authentic’ life, which I’d endorse. To give a flavour, hiding places include work, perfectionism, projection, narcissism, victimhood – you get the picture. Another term one could use is ‘obsession’. I recently watched a panel of elite athletes (all Australian) answering a series of public-sourced questions (for an ABC series called, You Can’t Ask That), and one of the take-home messages was that to excel in any field, internationally, you have to be obsessed to the point of self-sacrifice. But this also applies to other fields, like performing arts and scientific research. I’d even say that writing a novel requires an element of obsession. So, obsession is often a necessity for success.

 

With that caveat, I found Mackay’s book very insightful and thought-provoking – it will make you examine yourself, which is no bad thing. I didn’t find any of it terribly contentious until I reached his third-last ‘hiding place’, which was Religion and Science. The fact that he would put them together, in the same category, immediately evoked my dissent. In fact, his elaboration on the topic bordered on relativism, which has led me to write this post in response.

 

Many years ago (over 2 decades) when I studied philosophy, I took a unit that literally taught relativism, though that term was never used. I’m talking epistemological relativism as opposed to moral relativism. It’s effectively the view that no particular discipline or system of knowledge has a privileged or superior position. Yes, that viewpoint can be found in academia (at least, back then).

 

Mackay’s chapter on the topic has the same flavour, which allows him to include ‘scientism’ as effectively a religion. He starts off by pointing out that science has been used to commit atrocities the same as religion has, which is true. Science, at base, is all about knowledge, and knowledge can be used for good or evil, as we all know. But the ethics involved has more to do with politicians, lawmakers and board appointees. There are, of course, ethical arguments about GM foods, vaccinations and the use of animals in research. Regarding the last one, I couldn’t personally do research involving the harming of animals, not that I’ve ever done any form of research.

 

But this isn’t my main contention. He makes an offhand reference at one point about the ‘incompatibility’ of science and religion, as if it’s a pejorative remark that reflects an unjustified prejudice on the part of someone who’d make that comment. Well, to the extent that many religions are mythologically based, including religious texts (like the Bible), I’d say the prejudice is justified. It’s what the evolution versus creation debate is all about in the wealthiest and most technologically advanced nation in the world.

 

I’ve long argued that science is neutral on whether God exists or not. So let me talk about God before I talk about science. I contend that there are 2 different ideas of God that are commonly conflated. One is God as demiurge, and on that I’m an atheist. By which I mean, I don’t believe there is an anthropomorphic super-being who created a universe just for us. So I’m not even agnostic on that, though I’m agnostic about an after-life, because we simply don’t know.

 

The other idea of God is a personal subjective experience which is individually unique, and most likely a projection of an ‘ideal self’, yet feels external. This is very common, across cultures, and on this, I’m a theist. The best example I can think of is the famous mathematician, Srinivasa Ramanujan, who believed that all his mathematical insights and discoveries came directly from the Hindu Goddess, Namagiri Thayar. Ramanujan (pronounced rama-nu-jan) was both a genius and a mystic. His famous ‘notebooks’ are still providing fertile material 100 years later. He traversed cultures in a way that probably wouldn’t happen today.

 

Speaking of mathematics, I wrote a post called Mathematics as religion, based on John Barrow’s book, Pi in the Sky. According to Marcus du Sautoy, Barrow is Christian, though you wouldn’t know it from his popular science books. Einstein claimed he was religious, ‘but not in the conventional sense’. Schrodinger studied the Hindu Upanishads, which he revealed in his short tome, Mind and Matter (compiled with What is Life?).

 

Many scientists have religious beliefs, but the pursuit of science is atheistic by necessity. Once you bring God into science as an explanation for something, you are effectively saying, we can’t explain this and we’ve come to the end of science. It’s commonly called the God-of-the-gaps, but I call it the God of ignorance, because that’s exactly what it represents.

 

I have 2 equations tattooed on my arms, which I describe in detail elsewhere, but they effectively encapsulate my 3 worlds philosophy: the physical, the mental and the mathematical. Mackay doesn’t talk about mathematics specifically, which is not surprising, but it has a special place in epistemology. He does compare science to religion in that scientific theories incorporate ‘beliefs', and religious beliefs are 'the religious equivalent of theories'. However, you can’t compare scientific beliefs with religious faith, because one is contingent on future discoveries and the other is dogma. All scientists worthy of the name know how ignorant we are, but the same can’t be said for religious fundamentalists.

 

However, he's right that scientific theories are regularly superseded, though not in the way he implies. All scientific theories have epistemological limits, and new theories, like quantum mechanics and relativity (as examples), extend old theories, like Newtonian mechanics, into new fields without proving them wrong in the fields they already described. And that’s a major difference from simply superseding them outright.

 

But mathematics is different. As Freeman Dyson once pointed out, a mathematical theorem is true for all time. New mathematical discoveries don’t prove old mathematical discoveries untrue. Mathematics has a special place in our system of knowledge.

 

So what is scientism? It’s a pejorative term that trivialises and diminishes science as an epistemological success story.


Thursday, 21 January 2021

Is the Universe deterministic?

 I’ve argued previously, and consistently, that the Universe is not deterministic; however, many if not most physicists believe it is. I’ve even been critical of Einstein for arguing that the Universe is deterministic (as per his famous dice-playing-God statement). 

Recently I’ve been watching YouTube videos by theoretical physicist, Sabine Hossenfelder, and I think she’s very good and I highly recommend her. Hossenfelder is quite adamant that the Universe is deterministic, and her video arguing against free will is very compelling and thought-provoking. I say this, because she addresses all the arguments I’ve raised in favour of free will, plus she has supplementary videos to support her arguments.

 

In fact, Hossenfelder states quite unequivocally towards the end of the video that ‘free will is an illusion’ and, in her own words, ‘needs to go into the rubbish bin’. Her principal argument, which she states right at the start, is that it’s ‘incompatible with the laws of nature’. She contends that the Universe is completely deterministic right from the Big Bang. She argues that everything can be described by differential equations, including gravity and quantum mechanics (QM), which she expounds upon in some detail in another video.

 

My immediate reaction to this was: what about Poincaré and chaos theory? Don’t worry, she addresses that as well. In fact, she has a couple of videos on chaos theory (though one is really about weather and climate change), which I’d recommend.

 

The standard definition of chaos is that it’s deterministic but unpredictable, which seems to be an oxymoron. As she points out, chaotic phenomena (which include the weather and the orbits of the planets, among many other things, like evolution) are dependent on the ‘initial conditions’. An infinitesimal change in the initial conditions will result in a different outcome. The word ‘infinitesimal’ is the key here, because you would need to work out the initial conditions to an infinite number of decimal places to get the answer. That’s why it’s not predictable. As to whether it’s deterministic, I think that’s another matter.

 

To overcome this apparent paradox, I prefer to say it’s indeterminable, which is not contentious. Hossenfelder explains, using a subtly different method, that you can mathematically prove, for any chaotic system, that you can only forecast to a finite time in the future, no matter how detailed your calculation (it’s worth watching her video, just to see this).
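
The standard toy illustration (mine, not one of Hossenfelder’s examples) is the logistic map: two starting values differing by one part in a trillion track each other for a while, then part company completely, so any finite precision only buys you a finite forecast horizon.

# Logistic map: a standard toy illustration of sensitivity to initial conditions.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.4, 0.4 + 1e-12   # two initial conditions differing by one part in a trillion

for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, |x - y| = {abs(x - y):.2e}")
# By around step 40-50 the two trajectories bear no resemblance to each other.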

 

Because the above definition for chaos seems to lead to a contradiction or, at best, an oxymoron, I prefer another definition that is more pragmatic and is mostly testable (though not always). Basically, if you rerun a chaotic phenomenon, you’ll get a different outcome. The best known example is tossing a coin. It’s well known in probability theory (in fact it’s an axiom) that the result of the next coin toss is independent of all coin tosses that may have gone before. The reason for this is that coin tosses are chaotic. The same principle applies to throwing dice, and Marcus du Sautoy expounds on the chaos of throwing dice in this video. So, tossing coins and throwing dice are considered ‘random’ events in probability theory, but Hossenfelder contends they are totally deterministic; just unpredictable.
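
Here’s a crude toy model of why the next toss is independent of the last (my own sketch, not du Sautoy’s analysis): the outcome hangs on the parity of the number of half-turns the coin completes, so changes in spin rate and flight time far too small for any thrower to control are enough to flip the result.

import random

# Crude toy model of a coin toss: heads or tails depends on the parity of the
# number of half-turns completed before landing (illustrative only).
def toss(spin_rate, flight_time):
    half_turns = int(2 * spin_rate * flight_time)   # spin_rate in revolutions/second
    return "heads" if half_turns % 2 == 0 else "tails"

base_spin, base_time = 20.0, 0.45   # nominal launch conditions
for _ in range(5):
    # 'Rerun' the toss with differences far too small for any thrower to control.
    spin = base_spin + random.uniform(-0.5, 0.5)
    t = base_time + random.uniform(-0.02, 0.02)
    print(f"spin {spin:.3f} rev/s, time {t:.3f} s -> {toss(spin, t)}")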

 

Basically, she’s arguing that just because we can’t calculate the initial conditions, they still happened and therefore everything that arises from them is deterministic. Du Sautoy (whom I referenced above), in the same video and in his book, What We Cannot Know, cites physicist-turned-theologian John Polkinghorne’s point that chaos provides the perfect opportunity for an interventionist God – a point I’ve made myself (though I’m not arguing for an interventionist God). I’m currently reading Troy by Stephen Fry, an erudite rendition based on Homer’s tale, and it revolves around the premise that one’s destiny is largely predetermined by the Gods. The Hindu epic, Mahabharata, also portrays the notion of destiny that can’t be avoided. Leonard Cohen once remarked upon this in an interview, concerning his song, If It Be Your Will. In fact, I contend that you can’t believe in religious prophecy if you don’t believe in a deterministic universe. My non-belief in a deterministic universe is the basis of my argument against prophecy. And my argument against determinism is based on chaos and QM (which I’ll come to shortly).

 

Of course, one can’t turn back the clock and rerun the Universe, and, as best I can tell, that’s Hossenfelder’s sole argument for a deterministic universe – it can’t be changed and it can’t be predicted. She mentions Laplace’s Demon, who could hypothetically calculate the future of every particle in the Universe. But Laplace’s Demon is no different to the Gods of prophecy – it can do the infinite calculation that we mortals can’t do.

 

I have to concede that Hossenfelder could be right, based on the idea that the initial conditions obviously exist and we can’t rewind the clock to rerun the Universe. However, tossing coins and throwing dice demonstrate unequivocally that chaotic phenomena only become ‘known’ after the event and give different outcomes when rerun. 

 

So, on that basis, I contend that the future is open and unknowable and indeterminable, which leads me to say, it’s also non-deterministic. It’s a philosophical position based on what I know, but so is Hossenfelder’s, even though she claims otherwise: that her position is not philosophical but scientific.

 

Of course, Hossenfelder also brings up QM, and explains that it is truly random but also time reversible, which can be demonstrated with Schrodinger’s equation. She makes the valid point that the inherent randomness in QM doesn’t save free will. In fact, she says, ‘everything is either determined or random, neither of which are affected by free will’. She also claims that all the particles in our brain are quantum mechanically time reversible and therefore deterministic. However, I contend that the wave function that allows this time reversibility only exists in the future, which is why it’s never observed (I acknowledge that’s a personal prejudice). On the other hand, many physicists contend that the wave function is a purely mathematical construct that has no basis in reality.

 

My argument is that it’s only when the wave function ‘collapses’ or ‘decoheres’ that a ‘real’ physical event is observed, which becomes classical physics. Freeman Dyson argued something similar. Like chaotic events, if you were to rerun a quantum phenomenon you’d get a different outcome, which is why one can only deal in probabilities until an ‘observation’ is made. Erwin Schrodinger coined the term ‘statistico-deterministic’ to describe QM, because at a statistical level, quantum phenomena are predictable. He gives the example of radioactive decay, which we can predict holistically very accurately with ‘half-lives’, but you can’t predict the decay of an individual atom at all. I argue that, both in the case of QM and chaos, you have time asymmetry, which means that if you could hypothetically rewind the clock to before the wave function collapse or the initial conditions (whichever the case), you would witness a different outcome.
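
Schrodinger’s point is easy to illustrate with a toy simulation (my own sketch, with an arbitrary half-life): which atom decays at which moment is anyone’s guess, but the fraction remaining after each half-life is highly predictable once the sample is large.

import random

# Toy simulation of radioactive decay (arbitrary numbers, illustrative only).
half_life = 10.0                                    # arbitrary time units
decay_prob_per_step = 1 - 0.5 ** (1 / half_life)    # per atom, per unit time step

atoms = 100_000
survivors = atoms

for t in range(1, 31):                              # run for three half-lives
    decayed = sum(random.random() < decay_prob_per_step for _ in range(survivors))
    survivors -= decayed
    if t % 10 == 0:
        print(f"after {t:2d} units: {survivors / atoms:.3f} of the sample remains "
              f"(prediction: {0.5 ** (t / half_life):.3f})")
# The aggregate curve is predictable; which atom decays at which step is not.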

 

Hossenfelder sums up her entire thesis with the following statement:

 

...how ever you want to define the word [free will], we still cannot select among several possible different futures. This idea makes absolutely no sense if you know anything about physics.

 

Well, I know enough about physics to challenge her inference that there are no ‘possible different futures’. Hossenfelder, herself, knows that alternative futures are built into QM, which is why the many worlds interpretation is so popular. And some adherents of the Copenhagen interpretation claim that you do get to ‘choose’ (though I don’t). If the wave function describes the future, it can have a multitude of future paths, only one of which becomes reality in the past. This derives logically from Dyson’s interpretation of QED.

 

Of course, none of this provides an argument for free will, even if the Universe is not deterministic.

 

Hossenfelder argues that the brain’s software (her term) runs calculations that determine our decisions, while giving the delusion of free will. I thought this was her best argument:

 

Your brain is running a calculation, and while it is going on you do not know the outcome of that calculation. So the impression of free will comes from our ‘awareness’ that we think about what we do, along with our inability to predict the result of what we are thinking.

 

You cannot separate the idea of free will from the experience of consciousness. In another video, Hossenfelder expresses scepticism at all the mathematical attempts to describe or explain consciousness. I’ve argued previously that if we didn’t all experience consciousness, science would tell us that it is an illusion just like free will is. That’s because science can’t explain the experience of consciousness any better than it can explain the intuitive sense of free will that most of us take for granted.

 

Leaving aside the use of the words, ‘calculation’ and ‘software’, which allude to the human brain being a computer, she’s right that much of our thinking occurs subconsciously. All artists are aware of this. As a storyteller, I know that the characters and their interactions I render on the page (or on a computer screen) largely come from my subconscious. But everyone experiences this in dreams. Do you think you have free will in a dream? In a so-called ‘lucid dream’, I’d say, yes.

 

I would like to drop the term, free will, along with all its pseudo-ontological baggage, and adopt another term, ‘agency’. Because it’s agency that we all believe we have, wherever it springs from. We all like to believe we can change our situation or exert some control over it, and I’d call that agency. And it requires a conscious effort – an ability to turn a thought into an action. In fact, I’d say it’s a psychological necessity: without a sense of agency, we might as well be automatons.

 

I will finish with an account of free will in extremis, as told by London bombing survivor Gill Hicks. She was only one person removed from the bomber in one of the buses, and she lost both her legs. As she tells it, she heard a voice, like we do in a dream; it was a female voice and it was ‘Death’, and it beckoned to her and it was very inviting; it was not tinged with fear at all. And then she heard another voice, which was male and was ‘Life’, and it told her that if she chose to live she had a destiny to fulfil. So she had a choice, which is exactly how we define free will, and she consciously chose Life. As it turned out, she lost 70% of her blood and she had a hole in the back of her head from a set of keys. She later learned that, in the ambulance, she was showing no signs of life – no pulse, she had flatlined – yet she was talking. The ambo told the driver, ‘Dead but talking.’ It was only because she was talking that he continued to attempt to save her life.

 

Now, I’m often sceptical about accounts of ‘near-death experiences’, because they often come across as contrived and preachy. But Gill Hicks comes across as very authentic; down-to-Earth, as we say in Oz. So I believe that what she recalled is what she experienced. I tell her story, because it represents exactly what Hossenfelder claims about free will: it defies a scientific explanation.


Thursday, 24 December 2020

Does imagination separate us from AI?

 I think this is a very good question, but it depends on how one defines ‘imagination’. I remember having a conversation (via email) with Peter Watson, who wrote an excellent book, A Terrible Beauty (about the minds and ideas of the 20th Century) which covered the arts and sciences with equal erudition, and very little of the politics and conflicts that we tend to associate with that century. In reference to the topic, he argued that imagination was a word past its use-by date, just like introspection and any other term that referred to an inner world. Effectively, he argued that because our inner world is completely dependent on our outer world, it’s misleading to use terms that suggest otherwise.

It’s an interesting perspective, not without merit, when you consider that we all speak and think in a language that is totally dependent on an external environment from our earliest years. 

 

But memory for us is not at all like memory in a computer, which provides a literal record of whatever it stores, including images, words and sounds. On the contrary, our memories of events are ‘reconstructions’, which tend to become less reliable over time. Curiously, the imagination apparently uses the same part of the brain as memory. I’m talking semantic memory, not muscle memory, which is completely different, physiologically. So the imagination, from the brain’s perspective, is like a memory of the future. In other words, it’s a projection into the future of something we might desire or fear or just expect to happen. I believe that many animals have this same facility, which they demonstrate when they hunt or, alternatively, evade being hunted.

 

Raymond Tallis, who has a background in neuroscience and writes books as well as a regular column in Philosophy Now, had this to say, when talking about free will:

 

Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.

 

I find a correspondence here with Richard Feynman’s ‘sum over histories’ interpretation of quantum mechanics (QM). There are, in fact, an infinite number of possible paths in the future, but only one is ‘actualised’ in the past.

 

But the key here is imagination. It is because we can imagine a future that we attempt to bring it about - that's free will. And what we imagine is affected by our past, our emotions and our intellectual considerations, but that doesn't make it predetermined.

 

Now, recent advances in AI would appear to do something similar in the form of making predictions based on recordings of past events. So what’s the difference? Well, if we’re playing a game of chess, there might not be a lot of difference, and AI has reached the stage where it can do it even better than humans. There are even computer programmes available now that try and predict what I’m going to write next, based on what I’ve already written. How do you know this hasn’t been written by a machine?

 

Computers use data – lots of it – and use it mindlessly, which means the computer really doesn’t know what it means in the same way we do. A computer can win a game of chess, but it requires a human watching the game to appreciate what it actually did. In the same way that a computer can distinguish one colour from another, including different shades of a single colour, but without ever ‘seeing’ a colour the way we do.

 

So, when we ‘imagine’, we fabricate a mindscape that affects us emotionally. The most obvious examples are in art, including music and stories. We now have computers also creating works of art, including music and stories. But here’s the thing: the computer cannot respond to these works of art the way we do.

 

Imagination is one of the fundamental attributes that make us human. An AI can and will (in the future) generate scenarios and select the one that produces the best outcome, given specific criteria. But, even in these situations, it is a tool that a human will use to analyse enormous amounts of data that would be beyond our capabilities. And I wouldn’t call that imagination any more than I would say an AI could see colour.


Saturday, 5 December 2020

Some (personal) Notes on Writing

 This post is more personal, so don’t necessarily do what I’ve done. I struggled to find my way as a writer, and this might help to explain why. Someone recently asked me how to become a writer, and I said, ‘It helps, if you start early.’ I started pre-high school, about age 8-9. I can remember writing my own Tarzan scripts and drawing my own superheroes. 

 

Composition, as it was called then, was one of my favourite activities. At age 12 (first year high school), when asked to write about what we wanted to do as adults, I wrote that I wanted to write fiction. I used to draw a lot as a kid, as well. But, as I progressed through high school, I stopped drawing altogether and my writing deteriorated to the point that, by the time I left school, I couldn’t write an essay to save my life; I had constant writer’s block.

 

I was in my 30s before I started writing again and, when I started, I knew it was awful, so I didn’t show it to anyone. Doing a couple of screenwriting courses (in my late 30s) was the best thing I ever did. With screenwriting, the character is all in what they say and what they do, not in what they look like. However, in my fiction, I describe mannerisms and body language as part of a character’s demeanour, in conjunction with their dialogue. Also, screenwriting taught me to be lean and economical – you don’t write anything that can’t be seen or heard on the screen. The main difference in writing prose is that you do all your writing from inside a character’s head; in effect, you turn the reader into an actor, subconsciously. Also, you write in real time so it unfolds like a movie in the reader’s imagination.

 

I break rules, but only because the rules didn’t work for me, and I learned that the hard way. So I don’t recommend that you do what I do, because, from what I’ve heard and read, most writers don’t. I don’t write every day and I don’t do multiple drafts. It took me a long time to accept this, but it was only after I became happy and confident with what I produced. In fact, I can go weeks, even months, without writing anything at all and then pick it up from where I left off.

 

I don’t do rewrites because I learned the hard way that, for me, they are a waste of time. I do revisions and you can edit something forever without changing the story or its characters in any substantial way. I correct for inconsistencies and possible plot holes, but if you’re going to do a rewrite, you might as well write something completely different – that’s how I feel about it. 

 

I recently saw a YouTube discussion between someone and a writer where they talked about the writer’s method. He said he did a lot of drafts, and there are a lot of highly successful writers who do (I’m not highly successful, yet I don’t think that’s the reason why). However, he said that if you pick something up you wrote some time ago, you can usually tell if it’s any good or not. Well, my writing passes that test for me.

 

I’m happiest when my characters surprise me, and, if they don’t, I know I’m wasting my time. I treat it like it’s their story, not mine; that’s the best advice I can give.

 

How to keep the reader engaged? I once wrote in another post that creating narrative tension is an essential writing skill, and there are a number of ways to do this. Even a slow-moving story can keep a reader engaged, if every scene moves the story forward. I found that keeping scenes short, like in a movie, and using logical sequencing so that one scene sets up the next, keeps readers turning the page. Narrative tension can be subliminally created by revealing information to the reader that the characters don’t know themselves; it’s a subtle form of suspense. Also, narrative tension is often manifest in the relationships between characters. I’ve always liked moral dilemmas, both in what I read (or watch) and what I write.

 

Finally, when I start off a new work, it will often take me into territory I didn’t anticipate; I mean psychological territory, as opposed to contextual territory or physical territory. 

 

A story has all these strands, and when you start out, you don’t necessarily know how they are going to come together – in fact, it’s probably better if you don’t. That way, when they do, it’s very satisfying and there is a sense that the story already existed before you wrote it. It’s like you’re the first to read it, not create it, which I think is a requisite perception.


Monday, 30 November 2020

Social norms determine morality

 The latest issue of Philosophy Now (Issue 140, Oct/Nov 2020) has Hegel as its theme. I confess that really the only thing I knew about Hegel was his ‘dialectic’ and that he influenced Marx, though, from memory, Marx claimed to have turned Hegel’s dialectic ‘on its head’. Hegel’s dialectic has relevance to politics and history, because, basically, he claimed that if someone proposes a ‘thesis’ someone else will propose its ‘antithesis’ and we end up with a ‘synthesis’ of the two. Some people claim that this is how history has progressed, but I’m not so sure. 

 

However, I do agree that if someone promotes an ideology or a social agenda, you will invariably get opposition to it, and the stronger the promotion, the stronger the opposition. We see this in politics a lot, but a good example is religion. Militant atheism only tends to occur in societies where you have militant fundamentalist religion, which is usually Christian, but could be Muslim. In societies where no one really cares about religion, no one cares too much about atheism either. 

 

I know this, because I live in a culture where no one cares and I’ve visited one where people do, which is America (at the dawn of the 21st Century). Mind you, I grew up in 1950s Australia when there was a division between Catholic and Protestant that even affected the small rural town where I lived and was educated. That division pretty much evaporated in the 1960s, with a zeitgeist that swept the Western world. It was largely driven by post-war liberalism, the introduction of the contraceptive pill and a cultural phenomenon called rock and roll. But what I remember of growing up in the 60s, leaving school, going to university and entering the workforce in a major city, was that we males grew our hair long and everyone, including women, questioned everything. We were rebellious: there were marches against our involvement in the Vietnam war, and an Australian academic by the name of Germaine Greer published a book called The Female Eunuch.

 

And all of this is relevant to the theme of my thesis, which is that morality is really about social norms, which is why morality evolves, and whether it evolves for the better or worse is dependent on a lot of factors, not least political forces and individuals’ perceptions of their own worth and sense of security within a social context.

 

But getting back to Hegel, many saw the rebellious attitude of the 1960s as a backlash against conservative forces, especially religiously based ones, that had arisen in the 1950s. And this, in turn, was a reaction to the forces of fascism that had ignited the most widespread and devastating conflict in the whole of human history. There was almost no one in Europe or Asia or North America who had not been affected by it. Even in far-flung Australia and New Zealand, it seemed that every family had a member, or knew someone, who had been directly involved in that war. My family was no exception, as I’ve written about elsewhere.

 

In the same issue of Philosophy Now, there is an article by Terrence Thomson (a PhD candidate at the Centre for Research in Modern European Philosophy at Kingston University, London) titled, Kant, Conflict & Universal History. I’ve written about Kant elsewhere, but not in this context. He’s more famously known for his epistemology, discussed in some detail in his Critique of Pure Reason (1781), which was the subject of my essay. But, according to Thomson, 3 years later (1784), he published an article in a ‘prominent intellectual newspaper’, titled Idea for a Universal History from a Cosmopolitan Perspective. Without going into too much detail, Kant coined a term, ‘unsociable sociability’, which he contended is ‘a feature of human social interaction’, and which he defined as the human “tendency to enter into society, a tendency that continually threatens to break up this society”. Quoting Thomson (interpreting Kant): ‘...it is a natural human inclination to connect with people and to be part of a larger whole; yet it is also part of our natural inclination to destroy these social bonds through isolationism and divisiveness.’ One has to look no further than the just-held US presidential election and its immediate aftermath to see this in action. But one could also see it as an example of Hegel’s ‘dialectic’ at work.

 

As I explained in my introduction, Hegel argued that a ‘synthesis’ arises out of an opposition between a ‘thesis’ and its ‘antithesis’, but then the synthesis becomes a new ‘thesis’, which creates a new ‘antithesis’, and so the dialectic never stops. This could also be seen as similar to, if not the same as, the dynamic that Thomson attributes to Kant: the human inclination to ‘belong’ followed by an opposing inclination to ‘break those social bonds’.

 

I take a much simpler view, which is that humans are inherently tribal. And tribalism is a double-edged sword. It creates the division that Kant alludes to in his ‘unsociable sociability’, and it creates the antithesis of Hegel’s dialectic. We see this in the division created by religion throughout history, and not just the petty example I witnessed in my childhood. And now, in the current age, we have a new tribalism in the form of political parties, exemplified by the deep divisions in the US, even before Donald Trump exploited them to the full in his recently concluded 4-year term.

 

And this brings me back, by a very convoluted route, to the subject of this essay: morality and social norms. Trump did his best to change social norms and, by so doing, change his society’s moral landscape, whether intentionally or not. He made it socially acceptable to be disrespectful to ‘others’, which included women (‘grab them by the pussy’), immigrants, Muslims, the former President, anyone in the Democratic party, anyone in the GOP who didn’t support him, and his own Intelligence community and Defence personnel. He also made white supremacy and fringe conspiracy-theorist groups feel legitimised. But, most significantly, beyond everything else, he propagated a social norm whereby you could dismiss any report by any authority whatsoever that didn’t fit in with your worldview – you could simply create your own ‘facts’.

 

In most societies, especially Western democratic societies, we expect social norms to evolve that make people feel more included and that constructively build collaborative relationships, because we know from experience that that is how we get things done. As a retired naval admiral and self-described conservative pointed out (in a TV interview prior to the election), Trump did the exact opposite, both at home and abroad: he broke off relations and fomented division wherever and whenever he could. I’ve argued in previous posts that leaders bring out the best in the people they lead, which is how they are ultimately judged; contrarily, Trump brought out the worst in people.

 

In one of my better posts, I discussed at length how a young woman, raped and fatally tortured on a bus in India, exposed the generational divide in social norms in that country at that time, and how it directly affected one’s perception of the morality of that specific incident. 

 

From a Western perspective, especially given the recent ‘me-too’ movement, this is perverse. However, in the late 60s and early 70s, when I was entering adulthood, there was a double standard when it came to sexual behaviour. It was okay for men to have sex with as many partners as they could find, but it was not alright for women to indulge in the same activity. This led to men behaving in a more predatory way, and it was considered normal for women to be ‘seduced’, even if it was against their better intentions. The double standard of the day really didn’t encourage much alternative. The introduction of the contraceptive pill, I believe, was the game-changer, because, theoretically, women could have the same sexual freedom as men without the constant fear of becoming pregnant, which still carried a stigma at that time. Now, some of my generation may have a different rear-vision view of this, but I give it as an example of changing social norms occurring concordantly with changing moral perceptions.

 

I write science fiction, as a hobby or pastime, rather than professionally. But what attracted me to sci-fi as a genre, was not so much the futuristic technologies one could conjure up, but the what-if societies that might exist on worlds isolated by astronomical distances. In a recent work, I explored a society that included clones (genetically engineered humans, so not copies per se). In this society, female clones are exploited because they have no family. Instead, they have guardianships that can be sold onto someone else, and this becomes a social norm that’s tacitly accepted. Logically, this leads to sexual exploitation. I admit to being influenced by Blade Runner 2049, though I go in a completely different direction, and my story is more of a psychological thriller than an action thriller. There is sexual exploitation on both sides: I have a man in authority having a sexual relationship with a character by blackmail; and I have a woman sexually exploiting a character so she can manipulate him into committing a crime. Neither of these scenarios were part of my original plot; they evolved in the way that stories do, and became core elements. In fact, it could be argued that the woman is even more evil than the man.

 

Both characters come undone in the end but, more importantly to me, the characters needed to be realistic and not paper cut-outs. I asked someone who’d read it what they thought of the man in power, and they said, ‘Oh, people like him exist, even now.’