Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.
Showing posts with label Storytelling. Show all posts

Tuesday 2 January 2024

Modes of expression in writing fiction

As I point out in the post, this is a clumsy phrase, but I find it hard to come up with a better one. It’s actually something I wrote on Quora in response to a question. I’ve written on this before, but this post has the benefit of being much more succinct while possibly just as edifying.
 
I use the term ‘introspection’ where others use the word, ‘insight’. It’s the reader’s insight but the character’s introspection, which is why I prefer that term in this context.
 
The questioner is Clxudy Pills, obviously a pseudonym. I address her directly in the answer, partly because, unlike most people whose questions I answer, she has always acknowledged my answers.
 

Is "show, not tell" actually a good writing tip?

 
Maybe. No one said that to me when I was starting out, so it had no effect on my development. But I did read a book (more than one, actually) on ‘writing’ that delineated 5 categories of writing ‘style’. Style in this context means the mode of expression rather than an author’s individual style or ‘voice’. That’s clumsily stated but it will make sense when I tell you what they are.
 

  1. Dialogue is the most important because it’s virtually unique to fiction; quotes provided in non-fiction notwithstanding. Dialogue, more than any other style, tells you about the characters and their interactions with others.



  2. Introspection is what the character thinks, effectively. This only happens in novels and short stories, not screenplays or stage plays, soliloquies being the exception and certainly not the rule. But introspection is essential to prose, especially when the character is on their own.



  3. Exposition is the ‘telling’, not showing, part. When you’re starting out and learning your craft, you tend to write a lot of exposition – I know I did – which is why we get the admonition in your question. But the exposition can be helpful to you, if not the reader, as it allows you to explore the setting, the context of the story and its characters. Eventually, you’ll learn not to rely on it. Exposition is ‘smuggled’ into movies through dialogue and into novels through introspection.



  4. Description is more difficult than you think, because it’s the part of a novel that readers will skip over to get on with the story. Description can be more boring than exposition, yet it’s necessary. My approach is to always describe a scene from a character’s POV, and keep it minimalist. Readers automatically fill in the details, because we are visual creatures and we do it without thinking.



  5. Action is description in motion. Two rules: stay in one character’s POV and keep it linear – one thing happens after another. It has the dimension of time, though it’s subliminal.

 
 So there: you get 5 topics for the price of one.
 

Monday 23 October 2023

The mystery of reality

Many will say, ‘What mystery? Surely, reality just is.’ So, where to start? I’ll start with an essay by Raymond Tallis, who has a regular column in Philosophy Now called, Tallis in Wonderland – sometimes contentious, often provocative, always thought-expanding. His latest in Issue 157, Aug/Sep 2023 (new one must be due) is called Reflections on Reality, and it’s all of the above.
 
I’ve written on this topic many times before, so I’m sure to repeat myself. But Tallis’s essay, I felt, deserved both consideration and a response, partly because he starts with the one aspect of reality that we hardly ever ponder, which is doubting its existence.
 
Actually, not so much its existence, but whether our senses fool us, which they sometimes do, like when we dream (a point Tallis makes himself). And this brings me to the first point about reality that no one ever seems to discuss, and that is its dependence on consciousness, because when you’re unconscious, reality ceases to exist, for You. Now, you might argue that you’re unconscious when you dream, but I disagree; it’s just that your consciousness is misled. The point is that we sometimes remember our dreams, and I can’t see how that’s possible unless there is consciousness involved. If you think about it, everything you remember was laid down by a conscious thought or experience.
 
So, just to be clear, I’m not saying that the objective material world ceases to exist without consciousness – a philosophical position called idealism (advocated by Donald Hoffman) – but that the material objective world is ‘unknown’ and, to all intents and purposes, might as well not exist if it’s unperceived by conscious agents (like us). Try to imagine the Universe if no one observed it. It’s impossible, because the word, ‘imagine’, axiomatically requires a conscious agent.
 
Tallis proffers a quote from celebrated sci-fi author, Philip K Dick: 'Reality is that which, when you stop believing in it, doesn’t go away' (from The Shifting Realities of Philip K Dick, 1995). And this allows me to segue into the world of fiction, which Tallis doesn’t really discuss, but it’s another arena where we willingly ‘suspend disbelief' to temporarily and deliberately conflate reality with non-reality. This is something I have in common with Dick, because we have both created imaginary worlds that are more than distorted versions of the reality we experience every day; they’re entirely new worlds that no one has ever experienced in real life. But Dick’s aphorism expresses this succinctly. The so-called reality of these worlds, in these stories, only exists while we believe in them.
 
I’ve discussed elsewhere how the brain (not just human but animal brains, generally) creates a model of reality that is so ‘realistic’, we actually believe it exists outside our head.
 
I recently had a cataract operation, which was most illuminating when I took the bandage off, because my vision in that eye was so distorted, it made me feel seasick. Everything had a lean to it and it really did feel like I was looking through a lens; I thought they had botched the operation. With both eyes open, it looked like objects were peeling apart. So I put a new eye patch on, and distracted myself for an hour by doing a Sudoku problem. When I had finished it, I took the patch off and my vision was restored. The brain had made the necessary adjustments to restore the illusion of reality as I normally interacted with it. And that’s the key point: the brain creates a model so accurately, integrating all our senses, but especially sight, sound and touch, that we think the model is the reality. And all creatures have evolved that facility simply so they can survive; it’s a matter of life and death.
 
But having said all that, there are some aspects of reality that really do only exist in your mind, and not ‘out there’. Colour is the most obvious, but so are sound and smell, all of which may be experienced differently by other species – how are we to know? Actually, we do know that some animals can hear sounds that we can’t and see colours that we don’t, and vice versa. And I contend that these sensory experiences are among the attributes that keep us distinct from AI.
 
Tallis makes a passing reference to Kant, who argued that space and time are also aspects of reality that are produced by the mind. I have always struggled to understand how Kant got that so wrong. Mind you, he lived more than a century before Einstein all but proved that space and time are fundamental parameters of the Universe. Nevertheless, there are more than a few physicists who argue that the ‘flow of time’ is a purely psychological phenomenon. They may be right (but arguably for different reasons). If consciousness exists in a constant present (as expounded by Schrodinger) and everything else becomes the past as soon as it happens, then the flow of time is guaranteed for any entity with consciousness. However, many physicists (like Sabine Hossenfelder), if not most, argue that there is no ‘now’ – it’s an illusion.
 
Speaking of Schrodinger, he pointed out that there are fundamental differences between how we sense sight and sound, even though they are both waves. In the case of colour, we can blend them to get a new colour, and in fact, as we all know, all the colours we can see can be generated by just 3 colours, which is how the screens on all your devices work. However, that’s not the case with sound, otherwise we wouldn’t be able to distinguish all the different instruments in an orchestra. Just think: all the complexity is generated by a vibrating membrane (in the case of a speaker) and somehow our hearing separates it all. Of course, it can be done mathematically with a Fourier transform, but I don’t think that’s how our brains work, though I could be wrong.
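As an aside, the point about the Fourier transform can be demonstrated in a few lines. This is a minimal sketch, not anything from Tallis or Schrodinger: the 440 Hz and 880 Hz tones and the 8000 Hz sample rate are arbitrary choices for illustration. Two pure tones are summed into a single waveform (the 'vibrating membrane'), and the transform recovers the separate frequencies:

```python
import numpy as np

# Sample one second of a signal mixing two pure tones
# (440 Hz and 880 Hz, chosen purely for illustration).
rate = 8000                      # samples per second
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# The Fourier transform turns the single pressure wave back into
# its component frequencies, much as the ear separates instruments.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(rate, d=1 / rate)

# The two strongest components recover the original tones.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))    # [440.0, 880.0]
```

The single array `signal` carries no labels saying 'two tones here', yet the frequencies fall straight out of the mathematics, which is the puzzle: our hearing performs an equivalent separation without, presumably, computing anything of the sort.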
 
And this leads me to discuss the role of science, and how it challenges our everyday experience of reality. Not surprisingly, Tallis also took his discussion in that direction. Quantum mechanics (QM) is the logical starting point, and Tallis references Bohr’s Copenhagen interpretation, ‘the view that the world has no definite state in the absence of observation.’ Now, I happen to think that there is a logical explanation for this, though I’m not sure anyone else agrees. If we go back to Schrodinger again, but this time his eponymous equation, it describes events before the ‘observation’ takes place, albeit with probabilities. What’s more, all the weird aspects of QM, like the Uncertainty Principle, superposition and entanglement, are all mathematically entailed in that equation. What’s missing is relativity theory, which has since been incorporated into QED or QFT.
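For readers who haven’t encountered it, the equation in question, in its simplest (non-relativistic, one-dimensional) textbook form, is:

```latex
i\hbar \,\frac{\partial \Psi(x,t)}{\partial t}
  = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \Psi(x,t)}{\partial x^2}
  + V(x)\,\Psi(x,t)
```

The wavefunction \(\Psi\) is never observed directly; by Born’s rule, \(|\Psi|^2\) gives the probability of finding the particle at a given position, which is the sense in which the equation deals only in probabilities.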
 
But here’s the thing: once an observation or ‘measurement’ has taken place, Schrodinger’s equation no longer applies. In other words, you can’t use Schrodinger’s equation to describe something that has already happened. This is known as the ‘measurement problem’, because no one can explain it. But if QM only describes things that are yet to happen, then all the weird aspects aren’t so weird.
 
Tallis also mentions Einstein’s 'block universe', which implies that past, present and future all exist simultaneously. In fact, that’s what Sabine Hossenfelder says in her book, Existential Physics:
 
The idea that the past and future exist in the same way as the present is compatible with all we currently know.

 
And:

Once you agree that anything exists now elsewhere, even though you see it only later, you are forced to accept that everything in the universe exists now. (Her emphasis.)
 
I’m not sure how she reconciles this with cosmological history, but it does explain why she believes in superdeterminism (meaning the future is fixed), which axiomatically leads to her other strongly held belief that free will is an illusion; but so did Einstein, so she’s in good company.
 
In a passing remark, Tallis says, ‘science is entirely based on measurement’. I know from other essays Tallis has written that he believes the entire edifice of mathematics only exists because we can measure things, which we then applied to the natural world, giving us so-called ‘natural laws’. I’ve discussed his ideas on this elsewhere, but I think he has it back-to-front, whilst acknowledging that our ability to measure things, which is an extension of counting, is how humanity was introduced to mathematics. In fact, the ancient Greeks put geometry above arithmetic because it’s so physical. This is why there were no negative numbers in their mathematics: the idea of a negative volume or area made no sense.
 
But, in the intervening 2 millennia, mathematics took on a life of its own, with such exotic entities as negative square roots and non-Euclidean geometry, which in turn suddenly found an unexpected home in QM and relativity theory respectively. All of a sudden, mathematics was informing us about reality before measurements were even made. Take Schrodinger’s wavefunction, which lies at the heart of his equation, and can’t be measured because it only exists in the future, assuming what I said above is correct.
 
But I think Tallis has a point, and I would argue that consciousness can’t be measured, which is why it might remain inexplicable to science, correlation with brain waves and their like notwithstanding.
 
So what is the mystery? Well, there’s more than one. For a start there is consciousness, without which reality would not be perceived or even be known, which seems to me to be pretty fundamental. Then there are the aspects of reality which have only recently been discovered, like the fact that time and space can have different ‘measurements’ dependent on the observer’s frame of reference. Then there is the increasing role of mathematics in our comprehension of reality at scales both cosmic and subatomic. In fact, given the role of numbers and mathematical relationships in determining fundamental constants and natural laws of the Universe, it would seem that mathematics is an inherent facet of reality.
 

Sunday 15 October 2023

What is your philosophy of life and why?

This was a question I answered on Quora, and, without specifically intending to, I brought together 2 apparently unrelated topics. The reason I discuss language is because it’s so intrinsic to our identity, not only as a species, but as an individual within our species. I’ve written an earlier post on language (in response to a Philosophy Now question-of-the-month), which has a different focus, and I deliberately avoided referencing that.
 
A ‘philosophy of life’ can be represented in many ways, but my perspective is within the context of relationships, in all their variety and manifestations. It also includes a recurring theme of mine.



First of all, what does one mean by ‘philosophy of life’? For some people, it means a religious or cultural way-of-life. For others it might mean a category of philosophy, like post-modernism or existentialism or logical positivism.
 
For me, it means a philosophy on how I should live, and on how I both look at and interact with the world. This is not only dependent on my intrinsic beliefs that I might have grown up with, but also on how I conduct myself professionally and socially. So it’s something that has evolved over time.
 
I think that almost all aspects of our lives are dependent on our interactions with others, which starts right from when we were born, and really only ends when we die. And the thing is that everything we do, including all our failures and successes, occurs in this context.
 
Just to underline the significance of this dependence, we all think in a language, and we all gain our language from our milieu at an age before we can rationally and critically think, especially compared to when we mature. In fact, language is analogous to software that gets downloaded from generation to generation, so that knowledge can also be passed on and accumulated over ages, which has given rise to civilizations and disciplines like science, mathematics and art.
 
This all sounds off-topic, but it’s core to who we are and it’s what distinguishes us from other creatures. Language is also key to our relationships with others, both socially and professionally. But I take it further, because I’m a storyteller and language is the medium I use to create a world inside your head, populated by characters who feel like real people and who interact in ways we find believable. More than any other activity, this illustrates how powerful language is.
 
But it’s the necessity of relationships in all their manifestations that determines how one lives one’s life. As a consequence, my philosophy of life centres around one core value and that is trust. Without trust, I believe I am of no value. But, not only that, trust is the foundational value upon which a society either flourishes or devolves into a state of oppression with its antithesis, rebellion.

 

Saturday 16 September 2023

Modes of thinking

I’ve written a few posts on creative thinking as well as analytical and critical thinking. But, not that long ago, I read a not-so-recently published book (2015) by 2 psychologists (John Kounios and Mark Beeman) titled The Eureka Factor: Creative Insights and the Brain. To quote from the back fly-leaf:
 
Dr John Kounios is Professor of Psychology at Drexel University and has published cognitive neuroscience research on insight, creativity, problem solving, memory, knowledge representation and Alzheimer’s disease.
 
Dr Mark Beeman is Professor of Psychology and Neuroscience at Northwestern University, and researches creative problem solving and creative cognition, language comprehension and how the right and left hemispheres process information.

 
They divide people into 2 broad groups: ‘Insightfuls’ and ‘analytical thinkers’. Personally, I think the coined term, ‘insightfuls’ is misleading or too narrow in its definition, and I prefer the term ‘creatives’. More on that below.
 
As the authors say, themselves, ‘People often use the terms “insight” and “creativity” interchangeably.’ So that’s obviously what they mean by the term. However, the dictionary definition of ‘insight’ is ‘an accurate and deep understanding’, which I’d argue can also be obtained by analytical thinking. Later in the book, they describe insights obtained by analytical thinking as ‘pseudo-insights’, and the difference can be ‘seen’ with neuro-imaging techniques.
 
All that aside, they do provide compelling arguments that there are 2 distinct modes of thinking that most of us experience. Very early in the book (in the preface, actually), they describe the ‘ah-ha’ experience that we’ve all had at some point, where we’re trying to solve a puzzle and then it comes to us unexpectedly, like a light-bulb going off in our head. They then relate something that I didn’t know, which is that neurological studies show that when we have this ‘insight’ there’s a spike in our brain waves and it comes from a location in the right hemisphere of the brain.
 
Many years ago (decades) I read a book called Drawing on the Right Side of the Brain by Betty Edwards. I thought neuroscientists would disparage this as pop-science, but Kounios and Beeman seem to give it some credence. Later in the book, they describe this in more detail, where there are signs of activity in other parts of the brain, but the ah-ha experience has a unique EEG signature and it’s in the right hemisphere.
 
The authors distinguish this unexpected insightful experience from an insight that is a consequence of expertise. I made this point myself, in another post, where experts make intuitive shortcuts based on experience that the rest of us don’t have in our mental toolkits.
 
They also spend an entire chapter on examples involving a special type of insight, where someone spends a lot of time thinking about a problem or an issue, and then the solution comes to them unexpectedly. A lot of scientific breakthroughs follow this pattern, and the point is that the insight wouldn’t happen at all without all the rumination taking place beforehand, often over a period of weeks or months, sometimes years. I’ve experienced this myself, when writing a story, and I’ll return to that experience later.
 
A lot of what we’ve learned about the brain’s functions has come from studying people with damage to specific areas of the brain. You may have heard of a condition called ‘aphasia’, which is when someone develops a serious disability in language processing following damage to the left hemisphere (possibly from a stroke). What you probably don’t know (I didn’t) is that damage to the right hemisphere, while not directly affecting one’s ability with language, can interfere with its more nuanced interpretations, like sarcasm or even getting a joke. I’ve long believed that when I’m writing fiction, I’m using the right hemisphere as much as the left, but it never occurred to me that readers (or viewers) need the right hemisphere in order to follow a story.
 
According to the authors, the difference between the left and right neo-cortex is one of connections. The left hemisphere has ‘local’ connections, whereas the right hemisphere has more widely spread connections. This seems to correspond to an ‘analytic’ ability in the left hemisphere, and a more ‘creative’ ability in the right hemisphere, where we make conceptual connections that are more wide-ranging. I’ve probably oversimplified that, but it was the gist I got from their exposition.
 
Like most books and videos on ‘creative thinking’ or ‘insights’ (as the authors prefer), they spend a lot of time giving hints and advice on how to improve your own creativity. It’s not until one is more than halfway through the book, in a chapter titled, The Insightful and the Analyst, that they get to the crux of the issue, and describe how there are effectively 2 different types who think differently, even in a ‘resting state’, and how there is a strong genetic component.
 
I’m not surprised by this, as I saw it in my own family, where the difference is very distinct. In another chapter, they describe the relationship between creativity and mental illness, but they don’t discuss how artists are often moody and neurotic, which is a personality trait. Openness is another personality trait associated with creative people. I would add another point, based on my own experience: if someone is creative and they are not creating, they can suffer depression. This is not discussed by the authors either.
 
Regarding the 2 types they refer to, they acknowledge there is a spectrum, and I can’t help but wonder where I sit on it. I spent a working lifetime in engineering, which is full of analytic types, though I didn’t work in a technical capacity. Instead, I worked with a lot of technical people of all disciplines: from software engineers to civil and structural engineers to architects, not to mention lawyers and accountants, because I worked on disputes as well.
 
The curious thing is that I was aware of 2 modes of thinking, where I was either looking at the ‘big-picture’ or looking at the detail. I worked as a planner, and one of my ‘tricks’ was the ability to distil a large and complex project into a one-page ‘Gantt’ chart (bar chart). For the individual disciplines, I’d provide a multipage detailed ‘program’ just for them.
 
Of course, I also write stories, where the 2 components are plot and character. Creating characters is purely a non-analytic process, which requires a lot of extemporising. I try my best not to interfere, and I do this by treating them as if they are real people, independent of me. Plotting, on the other hand, requires a big-picture approach, but I almost never know the ending until I get there. In the last story I wrote, I was in COVID lockdown when I knew the ending was close, so I wrote some ‘notes’ in an attempt to work out what happens. Then, sometime later (like a month), I had one sleepless night when it all came to me. Afterwards, I went back and looked at my notes, and they were all questions – I didn’t have a clue.

Wednesday 31 May 2023

Immortality; from the Pharaohs to cryonics

I thought the term was cryogenics, but a feature article in the Weekend Australian Magazine (27-28 May 2023) calls the facilities that perform this process ‘cryonics’, and looking up my dictionary, there is a distinction. Cryogenics is about low-temperature freezing in general, while cryonics deals specifically with the deep-freezing of bodies, with the intention of one day reviving them.
 
The article cites a few people, but the author, Ross Bilton, features an Australian, Peter Tsolakides, who is in my age group. From what the article tells me, he’s a software engineer who has seen many generations of computer code and has also been a ‘globe-trotting executive for ExxonMobil’.
 
He’s one of the drivers behind a cryonic facility in Australia – its first – located at Holbrook, which is roughly halfway between Melbourne and Sydney. In fact, I often stop at Holbrook for a break and meal on my interstate trips. According to my car’s odometer it is almost exactly half way between my home and my destination, which is a good hour short of Sydney, so it’s actually closer to Melbourne, but not by much.
 
I’m not sure when Tsolakides plans to enter the facility, but he’s forecasting his resurrection in around 250 years’ time, when he expects he may live for another thousand years. Yes, this is science fiction to most of us, but there are some science facts that lend some credence to this venture.
 
For a start, we already cryogenically freeze embryos and sperm, and we know it works for them. There is also the case of Ewa Wisnierska, 35, a German paraglider taking part in an international competition in Australia, who was sucked into a storm and elevated to 9,947 metres (jumbo jet territory, and higher than Everest). Needless to say, she lost consciousness and spent a frozen 45 minutes before she came back to Earth. Quite a miracle, and I’ve watched a doco on it. She made a full recovery and was back at her sport within a couple of weeks. And I know of other cases, where the brain of a living person has been frozen to keep them alive, as counter-intuitive as that may sound.
 
Believe it or not, scientists are divided on this, or at least cautious about dismissing it outright. Many take the position, ‘Never say never’. And I think that’s fair enough, because it really is impossible to predict the future when it comes to humanity. It’s not surprising that advocates, like Tsolakides, can see a future where this will become normal for most humans. People who decline immortality will be the exception and not the norm. And I can imagine, if this ‘procedure’ became successful and commonplace, who would say no?
 
Now, I write science fiction, and I have written a story where a group of people decided to create an immortal human race, who were part machine. It’s a reflection of my own prejudices that I portrayed this as a dystopia, but I could have done the opposite.
 
There may be an assumption that if you write science fiction then you are attempting to predict the future, but I make no such claim. My science fiction is complete fantasy, but, like all science fiction, it addresses issues relevant to the contemporary society in which it was created.
 
Getting back to the article in the Weekend Australian, there is an aspect of this that no one addressed – not directly, anyway. There’s no point in cheating death if you can’t cheat old age. In the case of old age, you are dealing with a fundamental law of the Universe, entropy, the second law of thermodynamics. No one asked the obvious question: how do you expect to live for 1,000 years without getting dementia?
 
I think some have thought about this, because, in the same article, they discuss the ultimate goal of downloading their memories and their thinking apparatus (for want of a better term) into a computer. I’ve written on this before, so I won’t go into details.
 
Curiously, I’m currently reading a book by Sabine Hossenfelder called Existential Physics: A Scientist’s Guide to Life’s Biggest Questions, which you would think could not possibly have anything to say on this topic. Nevertheless:
 
The information that makes you you can be encoded in many different physical forms. The possibility that you might one day upload yourself to a computer and continue living a virtual life is arguably beyond present-day technology. It might sound entirely crazy, but it’s compatible with all we currently know.
 
I promise to write another post on Sabine’s book, because she’s nothing if not thought-provoking.
 
So where do I stand? I don’t want immortality – I don’t even want a gravestone, and neither did my father. I have no dependents, so I won’t live on in anyone’s memory. The closest I’ll get to immortality are the words on this blog.

Thursday 25 May 2023

Philosophy’s 2 disparate strands: what can we know; how can we live

The question I’d like to ask is: is there a philosophical view that encompasses both? Some may argue that Aristotle attempted that, but I’m going to take a different approach.
 
For a start, the first part can arguably be broken into 2 further strands: physics and metaphysics. And even this divide is contentious, with some arguing that metaphysics is an ‘abstract theory with no basis in reality’ (one dictionary definition).
 
I wrote an earlier post arguing that we are ‘metaphysical animals’ after discussing a book of the same name, though it was really a biography of 4 Oxford women in the 20th Century: Elizabeth Anscombe, Mary Midgley, Philippa Foot and Iris Murdoch. But I’ll start with this quote from said book.
 
Poetry, art, religion, history, literature and comedy are all metaphysical tools. They are how metaphysical animals explore, discover and describe what is real (and beautiful and good). (My emphasis.)
 
So, arguably, metaphysics could give us a connection between the 2 ‘strands’ in the title. Now here’s the thing: I contend that mathematics should be part of that list, hence part of metaphysics. And, of course, we all know that mathematics is essential to physics as an epistemology. So physics and metaphysics, in my philosophy, are linked in a rather intimate way.
 
The curious thing about mathematics, or anything metaphysical for that matter, is that, without human consciousness, they don’t really exist, or are certainly not manifest. Everything on that list is a product of human consciousness, notwithstanding that there could be other conscious entities somewhere in the universe with the same capacity.
 
But again, I would argue that mathematics is an exception. I agree with a lot of mathematicians and physicists that while we create the symbols and language of mathematics, we don’t create the intrinsic relationships that said language describes. And furthermore, some of those relationships seem to govern the universe itself.
 
And completely relevant to the first part of this discussion, the limits of our knowledge of mathematics seem to determine the limits of our knowledge of the physical world.
 
I’ve written other posts on how to live, specifically, 3 rules for humans and How should I live? But I’m going to go via metaphysics again, specifically storytelling, because that’s something I do. Storytelling requires an inner and outer world, manifest as character and plot, which is analogous to free will and fate in the real world. Now, even these concepts are contentious, especially free will, because many scientists tell us it’s an illusion. Again, I’ve written about this many times, but its relevance to my approach to fiction is that I try to give my characters free will. An important part of my fiction is that the characters are independent of me. If my characters don’t take on a life of their own, then I know I’m wasting my time, and I’ll ditch that story.
 
Its relevance to ‘how to live’ is authenticity. Artists understand better than most the importance of authenticity in their work, which really means keeping themselves out of it. But authenticity has ramifications, as any existentialist will tell you. To live authentically requires an honesty to oneself that is integral to one’s being. And ‘being’ in this sense is about being human rather than its broader ontological meaning. In other words, it’s a fundamental aspect of our psychology, because it evolves and changes according to our environment and milieu. Also, in the world of fiction, it's a fundamental dynamic.
 
What's more, if you can maintain this authenticity (and it’s genuine), then you gain people’s trust, and that becomes your currency, whether in your professional life or your social life. However, there is nothing more fake than false authenticity; examples abound.
 
I’ll give the last word to Socrates, arguably the first existentialist.
 
To live with honour in this world, actually be what you try to appear to be.


Saturday 14 January 2023

Why do we read?

This is almost the same title as a book I bought recently (Why We Read), containing 70 short essays on the subject, featuring scholars of all stripes: historians, philosophers and, of course, authors. It even includes scientists: Paul Davies, Richard Dawkins and Carlo Rovelli being 3 I’m familiar with.
 
One really can’t overstate the importance of the written word, because, oral histories aside, it allows us to extend memories across generations and accumulate knowledge over centuries that has led to civilisations and technologies that we all take for granted. By ‘we’, I mean anyone reading this post.
 
Many of the essayists write from their personal experiences and I’ll do the same. The book, edited by Josephine Greywoode and published by Penguin, specifically says on the cover in small print: 70 Writers on Non-Fiction; yet many couldn’t help but discuss fiction as well.
 
And books are generally divided between fiction and non-fiction, and I believe we read them for different reasons, and I wouldn’t necessarily consider one less important than the other. I also write fiction and non-fiction, so I have a particular view on this. Basically, I read non-fiction in order to learn and I read fiction for escapism. Both started early for me and I believe the motivation hasn’t changed.
 
I started reading extra-curricular books from about the age of 7 or 8, involving creatures mostly, and I even asked for an encyclopaedia for Christmas at around that time, which I read enthusiastically. I devoured non-fiction books, especially if they dealt with the natural world. But at the same time, I read comics, remembering that we didn’t have TV at that time, which was only just beginning to emerge.
 
I think one of the reasons that boys read less fiction than girls these days is because comics have effectively disappeared, being replaced by video games. And the modern comics that I have seen don’t even contain a complete narrative. Nevertheless, there are graphic novels that I consider brilliant. Neil Gaiman’s Sandman series and Hayao Miyazaki’s Nausicaä of the Valley of the Wind being standouts. Watchmen by Alan Moore also deserves a mention.
 
So the escapism also started early for me, in the world of superhero comics, and I started writing my own scripts and drawing my own characters pre-high school.
 
One of the essayists in the collection, Niall Ferguson (author of Doom) starts off by challenging a modern paradigm (or is it a meme?) that we live in a ‘simulation’, citing Oxford philosopher, Nick Bostrom, writing in the Philosophical Quarterly in 2003. Ferguson makes the point that reading fiction is akin to immersing the mind in a simulation (my phrasing, not his).
 
In fact, a dream is very much like a simulation, and, as I’ve often said, the language of stories is the language of dreams. But here’s the thing: the motivation for writing fiction, for me, is the same as the motivation for reading it: escapism. Whether reading or writing, you enter a world that only exists inside your head. The ultimate solipsism.

And this surely is a miracle of written language: that we can conjure a world with characters who feel real and elicit emotional responses, while we follow their exploits, failures, love life and dilemmas. It takes empathy to read a novel, and tests have shown that people’s empathy increases after they read fiction. You engage with the character and put yourself in their shoes. It’s one of the reasons we read.
 
 
Addendum: I would recommend the book, by the way, which contains better essays than mine, all with disparate, insightful perspectives.
 

Sunday 10 July 2022

Creative and analytic thinking

I recently completed an online course with a similar title, How to Think Critically and Creatively. It must be the 8th or 9th course I’ve done through New Scientist, on a variety of topics, from cosmology and quantum mechanics to immunology and sustainable living; so quite diverse subjects. I started doing them during COVID, as they helped to pass the time and stimulate the brain at the same time.
 
All these courses rely on experts in their relevant fields from various parts of the globe, so not just UK based, as you might expect. This course was no exception with just 2 experts, both from America. Denise D Cummins is described as a ‘cognitive scientist, author and elected Fellow of the Association for Psychological Science, and she’s held faculty at Yale, UC, University of Illinois and the Centre of Adaptive Behaviours at the Max Planck Institute in Berlin’. Gerard J Puccio is ‘Department Chair and Professor at the International Centre for Studies on Creativity, Buffalo State; a unique academic department that offers the world’s only Master of Science degree in creativity’.
 
I admit to being sceptical that ‘creativity’ can be taught, but that depends on what one means by creativity. If creativity means using your imagination, then yes, I think it can, because imagination is something that we all have, and it’s probably a valid comment that we don’t make enough use of it in our everyday lives. If creativity means artistic endeavour then I think that’s another topic, even though it puts imagination centre stage, so to speak.
 
I grew up in a family where one side was obviously artistic and the other side wasn’t, which strongly suggests there’s a genetic component. The other side excelled at sport, and I was rubbish at sport. However, both sides were obviously intelligent, despite a notable lack of formal education; in my parents’ case, both leaving school in their early teens. In fact, my mother did most of her schooling by correspondence, and my father left school in the midst of the great depression, shortly followed by active duty in WW2.
 
Puccio (mentioned above) argues that creativity isn’t taught in our education system because it’s too hard. Instead, he says that we teach by memorising facts and by ‘understanding’ problems. I would suggest that there is a hierarchy, where you need some basics before you can ‘graduate’ to ‘creative thinking’, and I use the term here in the way he intends it. I spent most of my working lifetime on engineering projects, with diverse and often complex elements. I need to point out that I wasn’t one of the technical experts involved, but I worked with them, in all their variety, because my job was to effectively co-ordinate all their activities towards a common goal, by providing a plan and then keeping it on the rails.
 
Engineering is all about problem solving, and I’m not sure one can do that without being creative, as well as analytical. In fact, one could argue that there is a dialectical relationship between them, but maybe I’m getting ahead of myself.
 
Back to Puccio, who introduced 2 terms I hadn’t come across before: ‘divergent’ and ‘convergent’ thinking, arguing they should be done in that order. In a nutshell, divergent thinking is brainstorming where one thinks up as many options as possible, and convergent thinking is where one narrows in on the best solution. He argues that we tend to do the second one without doing the first one. But this is related to something else that was raised in the course, which is ‘Type 1 thinking’ and ‘Type 2 thinking’.
 
Type 1 thinking is what most of us would call ‘intuition’, because basically it’s taking a cognitive shortcut to arrive at an answer to a problem, which we all do all the time, especially when time is at a premium. Type 2 thinking is when we analyse the problem, which is not only time consuming but takes up brain resources that we’d prefer not to use, because we’re basically lazy, and I’m no exception. These 2 cognitive behaviours are clinically established, so it’s not pop-science.
 
However, something that was not discussed in the course, is that type 2 thinking can become type 1 thinking when we develop expertise in something, like learning a musical instrument, or writing a story, or designing a building. In other words, we develop heuristics based on our experience, which is why we sometimes jump to convergent thinking without going through the divergent part.
 
The course also dealt with ‘critical thinking’, as per its title, but I won’t dwell on that, because critical thinking arises from being analytical, and separating true expertise from bogus expertise, which is really a separate topic.
 
How does one teach these skills? I’m not a teacher, so I’m probably not best qualified to say. But I have a lot of experience in a profession that requires analytical thinking and problem-solving as part of its job description. The one thing I’ve learned from my professional life is the more I’m restrained by ‘rules’, the worse job I’ll do. I require the freedom and trust to do things my own way, and I can’t really explain that, but it’s also what I provide to others. And maybe that’s what people mean by ‘creative thinking’; we break the rules.
 
Artistic endeavour is something different again, because it requires spontaneity. But there is ‘divergent thinking’ involved, as Puccio pointed out, giving the example of Hemingway writing countless endings to A Farewell to Arms, before settling on the final version. I’m reminded of the reported difference between Beethoven and Mozart, two of the greatest composers in the history of Western classical music. Beethoven would try many different versions of something (in his head and on paper) before choosing what he considered the best. He was extraordinarily industrious, yet he wrote only 9 symphonies and 5 piano concertos plus one violin concerto, because he workshopped them to death. Mozart, on the other hand, apparently wrote down whatever came into his head and hardly revised it. One was very analytical in their approach and the other was almost completely spontaneous.
 
I write stories and the one area where I’ve changed type 2 thinking into type 1 thinking is in creating characters – I hardly give it a thought. A character comes into my head almost fully formed, as if I just met them in the street. Over time I learn more about them and they sometimes surprise me, which is always a good thing. I once compared writing dialogue to playing jazz, because they both require spontaneity and extemporisation. Don Burrows once said you can’t teach someone to play jazz, and I’ve argued that you can’t teach someone to write dialogue.
 
Having said that, I once taught a creative writing class, and I gave the class exercises where they were forced to write dialogue, without telling them that that was the point of the exercise. In other words, I got them to teach themselves.
 
The hard part of storytelling for me is the plot, because it’s a neverending exercise in problem-solving. How did I get back to here? Analytical thinking is very hard to avoid, at least for me.
 
As I mentioned earlier, I think there is a dialectic between analytical thinking and creativity, and the best examples are not artists but geniuses in physics. To look at just two: Einstein and Schrodinger, because they exemplify both. But what came first: the analysis or the creativity? Well, I’m not sure it matters, because they couldn’t have done one without the other. Einstein had an epiphany (one of many) where he realised that an object in free fall didn’t experience a force, which apparently contradicted Newton. Was that analysis or creativity or both? Anyway, he not only changed how we think about gravity, he changed the way we think about the entire cosmos.
 
Schrodinger borrowed an idea from de Broglie that particles could behave like waves and changed how we think about quantum mechanics. As Richard Feynman once said, ‘No one knows where Schrodinger’s equation comes from. It came out of Schrodinger’s head. You can’t derive it from anything we know.’
 

Sunday 22 May 2022

We are metaphysical animals

 I’m reading a book called Metaphysical Animals (How Four Women Brought Philosophy Back To Life). The four women were Mary Midgley, Iris Murdoch, Philippa Foot and Elizabeth Anscombe. The first two I’m acquainted with and the last two, not. They were all at Oxford during the War (WW2) at a time when women were barely tolerated in academia and had to be ‘chaperoned’ to attend lectures. Also a time when some women students ended up marrying their tutors. 

The book is authored by Clare Mac Cumhaill and Rachael Wiseman, both philosophy lecturers who became friends with Mary Midgley in her final years (Mary died in 2018, aged 99). The book is part biographical of all 4 women and part discussion of the philosophical ideas they explored.

 

Bringing ‘philosophy back to life’ is an allusion to the response (backlash is too strong a word) to the empiricism, logical positivism and general rejection of metaphysics that had taken hold of English philosophy, also known as analytical philosophy. Iris spent time in postwar Paris where she was heavily influenced by existentialism and Jean-Paul Sartre, in particular, whom she met and conversed with. 

 

If I was to categorise myself, I’m a combination of analytical philosopher and existentialist, which I suspect many would see as a contradiction. But this isn’t deliberate on my part – more a consequence of pursuing my interests, which are science on one hand (with a liberal dose of mathematical Platonism) and how-to-live a ‘good life’ (to paraphrase Aristotle) on the other.

 

Iris was intellectually seduced by Sartre’s exhortation: “Man is nothing else but that which he makes of himself”. But as her own love life fell apart along with all its inherent dreams and promises, she found putting Sartre’s implicit doctrine, of standing solitarily and independently of one’s milieu, difficult to do in practice. I’m not sure if Iris was already a budding novelist at this stage of her life, but anyone who writes fiction knows that this is what it’s all about: the protagonist sailing their lone ship on a sea full of icebergs and other vessels, all of which are outside their control. Life, like the best fiction, is an interaction between the individual and everyone else they meet. Your moral compass, in particular, is often tested. Existentialism can be seen as an attempt to rise above this, but most of us don’t.

 

Not surprisingly, Wittgenstein looms large in many of the pages, and at least one of the women, Elizabeth Anscombe, had significant interaction with him. With Wittgenstein comes an emphasis on language, which has arguably determined the path of philosophy since. I’m not a scholar of Wittgenstein by any stretch of the imagination, but one thing he taught, or that people took from him, was that the meaning we give to words is a consequence of how they are used in ordinary discourse. Language requires a widespread consensus to actually work. It’s something we rarely think about but we all take for granted, otherwise there would be no social discourse or interaction at all. There is an assumption that when I write these words, they have the same meaning for you as they do for me, otherwise I am wasting my time.

 

But there is a way in which language is truly powerful, and I have done this myself. I can write a passage that creates a scene inside your mind complete with characters who interact and can cause you to laugh or cry, or pretty much any other emotion, as if you were present; as if you were in a dream.

 

There are a couple of specific examples in the book which illustrate Wittgenstein’s influence on Elizabeth and how she used them in debate. They are both topics I have discussed myself without knowing of these previous discourses.

 

In 1947, so just after the war, Elizabeth presented a paper to the Cambridge Moral Sciences Club, which she began with the following disclosure:

 

Everywhere in this paper I have imitated Dr Wittgenstein’s ideas and methods of discussion. The best that I have written is a weak copy of some features of the original, and its value depends only on my capacity to understand and use Dr Wittgenstein’s work.

 

The subject of her talk was whether one can truly talk about the past, which goes back to the pre-Socratic philosopher, Parmenides. In her own words, paraphrasing Parmenides, ‘to speak of something past’ would then be to ‘point our thought’ at ‘something there’, but out of reach. Bringing Wittgenstein into the discussion, she claimed that Parmenides’ specific paradox about the past arose ‘from the way that thought and language connect to the world’.

 

We apply language to objects by naming them, but, in the case of the past, the objects no longer exist. She attempts to resolve this epistemological dilemma by discussing the nature of time as we experience it, which is like a series of pictures that move on a timeline while we stay in the present. This is analogous to my analysis that everything we observe becomes the past as soon as it happens, which is exemplified every time someone takes a photo, but we remain in the present – the time for us is always ‘now’.

 

She explains that the past is a collective recollection, preserved in documents and photos, so it’s dependent on a shared memory. I would say that this is what separates our recollection of a real event from a dream, which is solipsistic and not shared with anyone else. But it doesn’t explain why the past appears fixed and the future unknown, which she also attempted to address. But I don’t think this can be addressed without discussing physics.

 

Most physicists will tell you that the asymmetry between the past and future can only be explained by the second law of thermodynamics, but I disagree. I think it is described, if not explained, by quantum mechanics (QM) where the future is probabilistic with an infinitude of possible paths and classical physics is a probability of ONE because it’s already happened and been ‘observed’. In QM, the wave function that gives the probabilities and superpositional states is NEVER observed. The alternative is that all the futures are realised in alternative universes. Of course, Elizabeth Anscombe would know nothing of these conjectures.
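To spell out the contrast I’m drawing, here it is in symbols. This is my formulation, using the standard Born rule of quantum mechanics, and obviously nothing Anscombe herself would have written:

```latex
% The future: QM assigns probabilities to many possible outcomes,
% via the Born rule applied to the (never observed) wave function \psi:
P(i) = |\langle \phi_i \,|\, \psi \rangle|^2 , \qquad \sum_i P(i) = 1 .
% The past: an outcome that has already happened and been 'observed'
% is classical, with a probability of ONE:
P(\text{recorded outcome}) = 1 .
```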

 

But I would make the point that language alone does not resolve this. Language can only describe these paradoxes and dilemmas but not explain them.

 

Of course, there is a psychological perspective to this, which many people claim, including physicists, gives the only sense of time passing. According to them, it’s fixed: past, present and future; and our minds create this distinction. I think our minds create the distinction because only consciousness creates a reference point for the present. Everything non-sentient is in a causal relationship that doesn’t sense time. Photons of light, for example, exist in zero time, yet they determine causality. Only light separates everything in time as well as space. I’ve gone off-topic.
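The claim that photons ‘exist in zero time’ isn’t just a figure of speech; it falls out of a standard special-relativity relation (a textbook result, not anything original to me):

```latex
% Proper time (time experienced along a worldline) at speed v:
d\tau = dt \, \sqrt{1 - v^2/c^2}
% For a photon, v = c, hence d\tau = 0:
% zero proper time elapses between emission and absorption.
```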

 

Elizabeth touched on the psychological aspect, possibly unintentionally (I’ve never read her paper, so I could be wrong): that our memories of the past are actually imagined. We use the same part of the brain to imagine the past as we do to imagine the future, but again, Elizabeth wouldn’t have known this. Nevertheless, she understood that our (only) knowledge of the past is a thought that we turn into language in order to describe it.

 

The other point I wish to discuss is a famous debate she had with C.S. Lewis. This is quite something, because back then, C.S. Lewis was a formidable intellectual figure. Elizabeth’s challenge was all the more remarkable because Lewis’s argument appeared on the surface to be very sound. Lewis argued that the ‘naturalist’ position was self-refuting if it was dependent on ‘reason’, because reason by definition (not his terminology) is based on the premise of cause and effect and human reason has no cause. That’s a simplification, nevertheless it’s the gist of it. Elizabeth’s retort:

 

What I shall discuss is this argument’s central claim that a belief in the validity of reason is inconsistent with the idea that human thought can be fully explained as the product of non-rational causes.

 

In effect, she argued that reason is what humans do perfectly naturally, even if the underlying ‘cause’ is unknown. Not knowing the cause does not make the reasoning irrational nor unnatural. Elizabeth specifically cited the language that Lewis used. She accused him of confusing the concepts of “reason”, “cause” and “explanation”.

 

My argument would be subtly different. For a start, I would contend that by ‘reason’, he meant ‘logic’, because drawing conclusions based on cause and effect is logic, even if the causal relations (under consideration) are assumed or implied rather than observed. And here I contend that logic is not a ‘thing’, not an entity; it’s an action, something we do. In the modern age, machines perform logic; sometimes better than we do.

 

Secondly, I would ask Lewis, does he think reason only happens in humans and not other animals? I would contend that animals also use logic, though without language. I imagine they’d visualise their logic rather than express it in vocal calls. The difference with humans is that we can perform logic at a whole different level, but the underpinnings in our brains are surely the same. Elizabeth was right: not knowing its physical origins does not make it irrational; they are separate issues.

 

Elizabeth had a strong connection to Wittgenstein right up to his death. She worked with him on a translation and edit of Philosophical Investigations, and he bequeathed her a third of his estate and a third of his copyright.

 

It’s apparent from Iris’s diaries and other sources that Elizabeth and Iris fell in love at one point in their friendship, which caused them both a lot of angst and guilt because of their Catholicism. Despite marrying, Iris later had an affair with Pip (Philippa).

 

Despite my discussion of just 2 of Elizabeth’s arguments, I don’t have the level of erudition necessary to address most of the topics that these 4 philosophers published on. Just reading the four-page Afterword, it’s clear that I haven’t even brushed the surface of what they achieved. Nevertheless, I have a philosophical perspective that I think finds some resonance with their mutual ideas.

 

I’ve consistently contended that the starting point for my philosophy is that for each of us individually, there is an inner and outer world. It even dictates the way I approach fiction. 

 

In the latest issue of Philosophy Now (Issue 149, April/May 2022), Richard Oxenberg, who teaches philosophy at Endicott College in Beverly, Massachusetts, wrote an article titled, What Is Truth? wherein he describes an interaction between 2 people, but only from a purely biological and mechanical perspective, and asks, ‘What is missing?’ Well, even though he doesn’t spell it out, what is missing is the emotional aspect. Our inner world is dominated by emotional content and one suspects that this is not unique to humans. I’m pretty sure that other creatures feel emotions like fear, affection and attachment. What’s more I contend that this is what separates, not just us, but the majority of the animal kingdom, from artificial intelligence.

 

But humans are unique, even among other creatures, in our ability to create an inner world every bit as rich as the one we inhabit. And this creates a dichotomy that is reflected in our division of arts and science. There is a passage on page 230, where the authors discuss R.G. Collingwood’s influence on Mary and provide an unexpected definition.

 

Poetry, art, religion, history, literature and comedy are all metaphysical tools. They are how metaphysical animals explore, discover and describe what is real (and beautiful and good). (My emphasis.)

 

I thought this summed up what they mean by their coinage, metaphysical animals, which titles the book and arguably describes humanity’s most unique quality. Descriptions of metaphysics vary and elude precise definition, but the word ‘transcendent’ comes to mind. By which I mean it’s knowledge or experience that transcends the physical world and is most evident in art, music and storytelling, but also includes mathematics in my Platonic worldview.


 

Footnote: I should point out that certain chapters in the book give considerable emphasis to moral philosophy, which I haven’t even touched on, so another reader might well discuss other perspectives.


Friday 18 March 2022

Our eternal fascination with Light and Dark

Someone on Facebook posted one of those inane questions: if you could delete one thing in the world, what would it be? Obvious answers included war, hate, evil, and the like; so negative emotive states and consequences. My answer was, ‘Be careful what you wish for’.

What I find interesting about this whole issue is the vicarious relationship we have with the ‘dark side’ through the lens of fiction. If one thinks about it, it starts early with fairy tales and Bible stories. Nightmares are common in our childhood where one wakes up and is too scared to go back to sleep. Fear is an emotion we become familiar with early in our lives; I doubt that I was an exception, but it seems to me that everyone tries to keep children innocent these days. I don’t have children, so I might have it wrong.

 

Light and dark exist in the real world, but we try to keep them to the world of fiction – it’s a universal theme found in operas, mythologies and TV serials. I write fiction and I’m no exception. If there was no dark in my stories, they’d have no appeal. You have to have nemeses, figures of various shades of grey to juxtapose the figures of light, even if the light shines through flawed, imperfect glass.

 

In life we are tested, and we judge ourselves accordingly. Sometimes we pass and sometimes we fail. The same thing happens with characters in fiction. When we read a story we become actors, which makes us wonder how we’d behave in the same situation. I contend that the same thing happens in dreams. As an adult, I’ve always seen dreams as what-if scenarios and it’s the same with stories. I’ve long argued that the language of stories is the language of dreams and I think the connection is even stronger than that. I’m not surprised that storytellers will tell you that they dream a lot.

 

In the Judaeo-Christian religion I grew up with, good and evil were stark contrasts, like black and white. You have God, Christ and Satan. When I got older, I thought it a bit perverse that one feared God as much as Satan, which led me to the conclusion that they weren’t really that different. It’s Christ who is the good guy, willing to forgive the people who hate him and want him dead. I’m talking about them as fictional characters, not real people. I’m sure Jesus was a real person but we only have the myth by which to judge him.

 

The only reason I bring all this up, is because they were the template we were given. But fearing someone you are meant to love leads to neurosis, as I learned the hard way. A lot of people of my generation brought up the next generation as atheists, which is not surprising. The idea of a judgemental, schizophrenic father was past its use-by-date.

 

There is currently a conflict in Ukraine, which has grabbed the world’s attention in a way that other wars have not. It’s partly because of our Euro-centric perspective, and the fact that the 2 biggest and world-changing conflicts of the 20th Century both started in Europe. And the second one, in particular, has similarities, given it started with a dictator invading a neighbour, when he thought the world would look the other way.

 

There is a fundamental flaw in the human psyche that we’ve seen repeated throughout history. We have a tendency to follow charismatic narcissistic leaders, when you think we should know better. They create an army (not necessarily military) of supporters, but for whom they have utter contempt. This was true of Hitler, but also true of Trump and Putin.

 

Ukraine’s leader, Volodymyr Zelenskyy, like Trump, became a TV celebrity, but in a different vein. He was a satirical comedian who sent up the country’s leader, who was a Russian stooge, and then ran for office, winning with 70% of the vote. I believe this is the real reason that Putin wants to bring him down. If he’d done the same thing in Russia, he would have been assassinated while still a TV personality. It’s well known that Putin has attempted to assassinate him at least twice since the invasion, but assassinating opponents in a foreign country is a Putin specialty.

 

Zelenskyy and Putin represent, in many Western people’s minds, a modern day parable of good and evil. And, to me, the difference is stark. Putin, like all narcissists, only cares about himself, not the generals that have died in his war, not the barely out of school conscripts he’s sent into battle and certainly not the Russian people who will suffer enormous deprivations if this continues for any length of time. On the other hand, Zelenskyy doesn’t care about his self-preservation, because he would rather die for a principle than live the rest of his life in shame for deserting his country when it needed him most. Zelenskyy is like the fictional hero we believe in but know we couldn’t emulate.

 

It's when we read or watch fiction that the difference between right and wrong seems obvious. We often find ourselves telling a character, ‘don’t do that, don’t make that decision’, because we can see the consequences, but, in real life, we often seem to lose that compass.

 

My father was in a war and I know from what he told me that he didn’t lose that particular compass, but I also know that he once threatened to kill someone who was stealing from the wounded he was caring for. And I’ve no doubt he would have acted on it. So his compass got a bit bent, because he’d already seen enough killing to last several lifetimes.

 

I’ve noticed a theme in my own writing, which is subconscious, not intentional, and that is my protagonists invariably have their loyalty tested and it ends up defining them. My villains are mostly self-serving autocrats who have a hierarchical view of humanity where they logically belong at the top.

 

This is a meandering post, with no conclusion. Each of us has ambitions and desires and flaws. Few of us are ever really tested, so we make assumptions based on what we like to believe. I like something said by Socrates, who’d also been in battle.

 

To live with honour in this world, actually be what you try to appear to be.


Friday 28 January 2022

What is existentialism?

A few years back, I wrote a ‘vanity piece’, My philosophy in 24 dot points, which I admit is a touch pretentious. But I’ve been prompted to write something more substantive, in a similar vein, whilst reading Gary Cox’s How to Be an Existentialist; or How to Get Real, Get a Grip and Stop Making Excuses. I bought this tome (the 10th Anniversary Edition) after reading an article by him on ‘Happiness’ in Philosophy Now (Issue 147, Dec 2021/Jan 2022). Cox is an Honorary Research Fellow at the University of Birmingham, UK. He’s written other books, but this one is written specifically for a general audience, not an academic one. This is revealed in some of the language he uses, like ‘being up shit creek’.

 

I didn’t really learn anything about existentialism until I studied Sartre in an off-campus university course, in my late 40s. I realised that, to all intents and purposes, I was an existentialist, without ever knowing what one was. I did write about existentialism very early in the life of this blog, in the context of my own background. The thing is that one’s philosophical worldview is a product of one’s milieu, upbringing and education, not to mention the age in which one lives. I grew up in a Western culture, post WW2, and I think that made me ripe for existentialist influences without being conscious of it. I lived in the 60s when there was a worldwide zeitgeist of questioning social mores against a background of a religious divide, the Vietnam war and the rise of feminism. 

 

If there is a key word or mantra in existentialism, it’s ‘authenticity’. It’s the key element in my 3 Rules for Humans post, and it’s also the penultimate chapter in Cox’s aforementioned book. The last chapter is on counselling and is like a bookend.

 

As Cox himself points out, existentialism is not a ‘school’ of philosophy in the way ‘analytical philosophy’ or ‘logical positivism’ are. There’s not really a set of rules – it’s more about an attitude and how to live a life without losing your soul or self-respect. It’s not an epistemology, nor an ideology, even though it’s probably associated with a liberal outlook, as I hope will become clear.

 

Many commentators associate existentialism with atheism, the absurd and nihilism. I agree with Cox that it’s actually the opposite of nihilism; if anything, it’s about finding purpose. As I wrote in a post last year:

 

If the Universe has any meaning at all, it’s because it created sentient beings who find meaning against the odds that science tells us are astronomical, both literally and figuratively. Existentialism is about finding purpose in an absurd universe, which is the opposite of nihilism.

 

And that’s the most important lesson of existentialism: if you are to find a purpose, only you can do that; it’s not dependent on anyone else, be they family, a spouse, an employer or a mentor. And logically, one could add, it’s not dependent on God either.

 

Cox doesn’t talk about God at all, but he does talk quite a lot about consciousness and about it being ‘nothing’ (materialistically). He very fleetingly gives mathematics as an example of something else that’s not ‘corporeal’, specifically numbers. Very curious, as I think that both mathematics and consciousness are ‘special’ in that they are distinct, yet intimately connected to the physical world, but that’s another topic.

 

He also talks about consciousness having a special relationship with time. I’ve said that consciousness is the only thing that exists in a constant present, whereas Cox says the opposite, but I believe we mean the same thing. He says consciousness is forever travelling from the past to the future, whereas I say that the future is forever becoming the past while only consciousness exists in the present – the experiential outcome is the same.

 

So how does God enter the picture? God only exists in someone’s consciousness – it’s part of one’s internal state. So, you can be ‘authentic’ and believe in God, but it’s totally an individualistic experience – it can’t be shared. That’s my interpretation, not anyone else’s, I should emphasise.

 

An important, even essential, aspect of all this is a belief in free will. You can’t take control of your life if you don’t have a belief in free will, and I would argue that you can’t be authentic either. And, logically, this has influenced my prejudices in physics and cosmology. To be consistent, I can’t believe we live in a deterministic universe, and have argued strongly on that point, opposing better minds than mine.

 

Existentialism has some things in common with Buddhism, which might explain why Eastern philosophy seemed to have an influence on the 60s zeitgeist. Having said that, I think the commonality is about treating life as a journey that’s transient. Accepting the impermanence and transience of life, I believe, is part of living authentically.

 

And what do I mean by ‘authentic’ in this context? Possibly, I never really appreciated this until I started writing fiction. I think anyone who creates art strives to be authentic, which means leaving your ego out of your work. I try to take the attitude that it’s my characters’ story, not mine. That’s very difficult to explain to anyone who hasn’t experienced it, but I know that actors often say something similar.

 

In my professional life, my integrity was everything to me. I often worked in disputatious environments and it was important to me that people could trust my word and my work. Cox talks about how existentialism intrinsically incorporates our interactions with others. 

 

Freedom is a much-abused, misappropriated term, but in existentialism it has a specific meaning and an interdependent relationship with responsibility – you can’t divorce one from the other. Freedom, in existentialism, means ‘free to choose’, hence the emphasis on free will. It also means, if you invoke the term, that the freedom of others is just as important as your own.

 

One can’t talk about authenticity without talking about its opposite, ‘bad faith’ (mauvaise foi), a term coined by Sartre. Bad faith is something that most of us have experienced, be it working in a job we hate, staying in a destructive relationship or not pursuing a desired goal in lieu of staying in our comfort zone.

 

Of course, sometimes we are in a situation outside our control, so what do we do? Speaking from personal experience, I think one needs to take ownership of one’s response to it; one needs to accept that only YOU can do something about it and not someone else. I’ve never been a prisoner-of-war, but my father was, and he made 3 attempts to escape, because, as he told the Commandant, ‘It’s my job’.

 

I’ve actually explored this in my own fiction. In my last story, two of my characters (independently) find themselves in circumstances of ‘bad faith’. I only analyse this in hindsight – don’t analyse what you write while you’re writing. In fact, one of those characters is attracted to another character who lives authentically, though neither of them ‘think’ in those terms.



Addendum: Someone asked me to come up with a single sentence to describe this. After sleeping on it, I came up with this:


Be responsible for what you are and who you become. That includes being responsible for your failures. (2 sentences)


Wednesday 6 October 2021

Tips on writing sex scenes

 Yes, this is a bit tongue-in-cheek, or tongue in someone else’s cheek, to borrow a well-worn witticism. This arose from reading an interview by Benjamin Law (Aussie writer) of Pulitzer Prize-winning author, Viet Thanh Nguyen, who briefly discussed writing sex scenes. He gives this advice: 

Not being utterly male-centred, if you happen to be a man or masculine. Not being too vulgar. Don’t be too florid. And humour always helps.

 

Many years ago (over a decade) I attended a writers’ convention, where there are always a lot of panel discussions, and there was one on ‘How to write sex scenes’, which was appropriately entertaining and unsurprisingly well attended.

 

Even longer ago, when I attempted to write my first novel, with utterly no experience or tuition, just blindly diving in the deep end, there was the possibility of a sex scene and I chickened out. The entire opus was terrible, but over about 3 years and 3 drafts I taught myself to write. I sent it to a script assessor, who was honest and unflattering. But one of the things I remember is that she criticised me for avoiding the sex scene. I was determined to fix that in subsequent attempts. I still have a hard copy of that manuscript, by the way, to remind myself of how badly I can write.

 

But there are a couple of things I remember from that particular panel discussion (including a husband and wife team on the panel). Someone asked for a definition of pornography, and someone answered: the difference between erotica and pornography is that one you don’t admit to reading (or watching, as is more often the case). So, it’s totally subjective.

 

The first editor (a woman) of ELVENE took offence at the first sex scene. I promptly sent the manuscript to 2 women friends for second and third opinions. Anyway, I think that you’ll invariably offend someone, and the only sure way to avoid that is to have all the sex happen off the page. Some writers do that, and sometimes I do it myself. Why? I think it depends on where it sits in the story, and whether it’s really necessary to describe every sexual encounter between 2 characters who start doing it regularly.

 

The other thing I remember from that panel is someone explaining how important it was to describe it from one of the characters’ points of view. If you describe it from the POV of a ‘third party’, you risk throwing the reader out of the story. I contend that the entire story should be told from a character’s POV, though you can change characters, even within the same scene. The obvious analogy is with dialogue. You rarely change POV in dialogue, though it’s not a hard and fast rule. In other words, the reader’s perspective invariably involves speaking and listening from just one POV, as if they were in the conversation. The POV could be male or female – it’s irrelevant – but it’s usually the protagonist. I take the same approach to sex scenes. It’s risky for a man to take a woman’s POV in a sex scene, but I’ve done it many times. 

 

I often take the POV of the ‘active’ partner, and the reader learns what the other partner is experiencing second-hand, so to speak. It generally means that the orgasm is described from the other partner’s perspective, which makes it a lot easier. If they come in unison, I make sure the other one comes fractionally first.

 

I don’t write overlong sex scenes, because they become boring. Mine are generally a page long, including all the carry-on that happens beforehand, which is not intentional, just happenstance. I wrote a review of Cory Doctorow’s sci-fi novel, Walkaway, a novel (full title), which has a number of heavy sex scenes that I did find boring, but that probably says more about me than the author. I’m sure there are readers who find my sex scenes ‘heavy’ and possibly boring as well. 

 

I have some rules of my own. They are an interlude yet they should always serve the story. They tell us something about the characters and they invariably have consequences, which are often positive, but not necessarily so. There is always a psychological component and my approach is that you can’t separate the psychological from the physical. They change the character and they change the dynamic of a relationship. Some of my characters appear celibate, but you find them in real life too.

 

I take the approach that fiction is a combination of fantasy and reality and the mixture varies from genre to genre and even author to author. So, in this context, the physical is fantasy and the psychological is reality.

 

One should never say ‘never’, but I couldn’t imagine writing a rape scene or someone being tortured, though I’ve seen such scenes on Scandi-noir TV. However, I’ve written scenes involving the sexual exploitation of both men and women, and, in those cases, they were central to the story.

 

Lastly, I often have to tell people that I’m not in the story. I don’t describe my personal sex-life, and I expect that goes for other writers too. 


Saturday 5 December 2020

Some (personal) Notes on Writing

 This post is more personal, so don’t necessarily do what I’ve done. I struggled to find my way as a writer, and this might help to explain why. Someone recently asked me how to become a writer, and I said, ‘It helps, if you start early.’ I started pre-high school, about age 8-9. I can remember writing my own Tarzan scripts and drawing my own superheroes. 

 

Composition, as it was called then, was one of my favourite activities. At age 12 (first year high school), when asked to write about what we wanted to do as adults, I wrote that I wanted to write fiction. I used to draw a lot as a kid, as well. But, as I progressed through high school, I stopped drawing altogether and my writing deteriorated to the point that, by the time I left school, I couldn’t write an essay to save my life; I had constant writer’s block.

 

I was in my 30s before I started writing again and, when I started, I knew it was awful, so I didn’t show it to anyone. A couple of screenwriting courses (in my late 30s) were the best thing I ever did. With screenwriting, the character is all in what they say and what they do, not in what they look like. However, in my fiction, I describe mannerisms and body language as part of a character’s demeanour, in conjunction with their dialogue. Also, screenwriting taught me to be lean and economical – you don’t write anything that can’t be seen or heard on the screen. The main difference in writing prose is that you do all your writing from inside a character’s head; in effect, you turn the reader into an actor, subconsciously. Also, you write in real time so it unfolds like a movie in the reader’s imagination.

 

I break rules, but only because the rules didn’t work for me, and I learned that the hard way. So I don’t recommend that you do what I do, because, from what I’ve heard and read, most writers don’t. I don’t write every day and I don’t do multiple drafts. It took me a long time to accept this, but it was only after I became happy and confident with what I produced. In fact, I can go weeks, even months, without writing anything at all and then pick it up from where I left off.

 

I don’t do rewrites because I learned the hard way that, for me, they are a waste of time. I do revisions and you can edit something forever without changing the story or its characters in any substantial way. I correct for inconsistencies and possible plot holes, but if you’re going to do a rewrite, you might as well write something completely different – that’s how I feel about it. 

 

I recently saw a YouTube discussion between an interviewer and a writer where they talked about the writer’s method. He said he did a lot of drafts, and there are a lot of highly successful writers who do (I’m not highly successful, yet I don’t think that’s the reason why). However, he said that if you pick up something you wrote some time ago, you can usually tell if it’s any good or not. Well, my writing passes that test for me.

 

I’m happiest when my characters surprise me, and, if they don’t, I know I’m wasting my time. I treat it like it’s their story, not mine; that’s the best advice I can give.

 

How to keep the reader engaged? I once wrote in another post that creating narrative tension is an essential writing skill, and there are a number of ways to do this. Even a slow-moving story can keep a reader engaged, if every scene moves the story forward. I found that keeping scenes short, like in a movie, and using logical sequencing so that one scene sets up the next, keeps readers turning the page. Narrative tension can be subliminally created by revealing information to the reader that the characters don’t know themselves; it’s a subtle form of suspense. Also, narrative tension is often manifest in the relationships between characters. I’ve always liked moral dilemmas, both in what I read (or watch) and what I write.

 

Finally, when I start off a new work, it will often take me into territory I didn’t anticipate; I mean psychological territory, as opposed to contextual territory or physical territory. 

 

A story has all these strands, and when you start out, you don’t necessarily know how they are going to come together – in fact, it’s probably better if you don’t. That way, when they do, it’s very satisfying and there is a sense that the story already existed before you wrote it. It’s like you’re the first to read it, not create it, which I think is a requisite perception.