Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Tuesday 2 January 2024

Modes of expression in writing fiction

As I point out in the post, this is a clumsy phrase, but I find it hard to come up with a better one. It’s actually something I wrote on Quora in response to a question. I’ve written on this before, but this post has the benefit of being much more succinct while possibly just as edifying.
 
I use the term ‘introspection’ where others use the word, ‘insight’. It’s the reader’s insight but the character’s introspection, which is why I prefer that term in this context.
 
The questioner is Clxudy Pills, obviously a pseudonym. I address her directly in the answer, partly because, unlike most people whose questions I answer, she has always acknowledged my answers.
 

Is "show, not tell" actually a good writing tip?

 
Maybe. No one said that to me when I was starting out, so it had no effect on my development. But I did read a book (more than one, actually) on ‘writing’ that delineated 5 categories of writing ‘style’. Style in this context means the mode of expression rather than an author’s individual style or ‘voice’. That’s clumsily stated but it will make sense when I tell you what they are.
 

  1. Dialogue is the most important because it’s virtually unique to fiction; quotes provided in non-fiction notwithstanding. Dialogue, more than any other style, tells you about the characters and their interactions with others.



  2. Introspection is what the character thinks, effectively. This only happens in novels and short stories, not screenplays or stage plays, soliloquies being the exception and certainly not the rule. But introspection is essential to prose, especially when the character is on their own.



  3. Exposition is the ‘telling’, not showing, part. When you’re starting out and learning your craft, you tend to write a lot of exposition – I know I did – which is why we get the admonition in your question. But the exposition can be helpful to you, if not the reader, as it allows you to explore the setting, the context of the story and its characters. Eventually, you’ll learn not to rely on it. Exposition is ‘smuggled’ into movies through dialogue and into novels through introspection.



  4. Description is more difficult than you think, because it’s the part of a novel that readers will skip over to get on with the story. Description can be more boring than exposition, yet it’s necessary. My approach is to always describe a scene from a character’s POV, and keep it minimalist. Readers automatically fill in the details, because we are visual creatures and we do it without thinking.



  5. Action is description in motion. Two rules: stay in one character’s POV and keep it linear – one thing happens after another. It has the dimension of time, though it’s subliminal.

 
 So there: you get 5 topics for the price of one.
 

Saturday 16 September 2023

Modes of thinking

I’ve written a few posts on creative thinking as well as analytical and critical thinking. But, not that long ago, I read a not-so-recently published book (2015) by 2 psychologists (John Kounios and Mark Beeman) titled The Eureka Factor: Creative Insights and the Brain. To quote from the back fly-leaf:
 
Dr John Kounios is Professor of Psychology at Drexel University and has published cognitive neuroscience research on insight, creativity, problem solving, memory, knowledge representation and Alzheimer’s disease.
 
Dr Mark Beeman is Professor of Psychology and Neuroscience at Northwestern University, and researches creative problem solving and creative cognition, language comprehension and how the right and left hemispheres process information.

 
They divide people into 2 broad groups: ‘Insightfuls’ and ‘analytical thinkers’. Personally, I think the coined term ‘Insightfuls’ is misleading or too narrow in its definition, and I prefer the term ‘creatives’. More on that below.
 
As the authors say, themselves, ‘People often use the terms “insight” and “creativity” interchangeably.’ So that’s obviously what they mean by the term. However, the dictionary definition of ‘insight’ is ‘an accurate and deep understanding’, which I’d argue can also be obtained by analytical thinking. Later in the book, they describe insights obtained by analytical thinking as ‘pseudo-insights’, and the difference can be ‘seen’ with neuro-imaging techniques.
 
All that aside, they do provide compelling arguments that there are 2 distinct modes of thinking that most of us experience. Very early in the book (in the preface, actually), they describe the ‘ah-ha’ experience that we’ve all had at some point, where we’re trying to solve a puzzle and then it comes to us unexpectedly, like a light-bulb going off in our head. They then relate something that I didn’t know, which is that neurological studies show that when we have this ‘insight’ there’s a spike in our brain waves and it comes from a location in the right hemisphere of the brain.
 
Many years ago (decades) I read a book called Drawing on the Right Side of the Brain by Betty Edwards. I thought neuroscientists would disparage this as pop-science, but Kounios and Beeman seem to give it some credence. Later in the book, they describe this in more detail, where there are signs of activity in other parts of the brain, but the ah-ha experience has a unique EEG signature and it’s in the right hemisphere.
 
The authors distinguish this unexpected insightful experience from an insight that is a consequence of expertise. I made this point myself, in another post, where experts make intuitive shortcuts based on experience that the rest of us don’t have in our mental toolkits.
 
They also spend an entire chapter on examples involving a special type of insight, where someone spends a lot of time thinking about a problem or an issue, and then the solution comes to them unexpectedly. A lot of scientific breakthroughs follow this pattern, and the point is that the insight wouldn’t happen at all without all the rumination taking place beforehand, often over a period of weeks or months, sometimes years. I’ve experienced this myself, when writing a story, and I’ll return to that experience later.
 
A lot of what we’ve learned about the brain’s functions has come from studying people with damage to specific areas of the brain. You may have heard of a condition called ‘aphasia’, which is when someone develops a serious disability in language processing following damage to the left hemisphere (possibly from a stroke). What you probably don’t know (I didn’t) is that damage to the right hemisphere, while not directly affecting one’s ability with language, can interfere with its more nuanced interpretations, like sarcasm or even getting a joke. I’ve long believed that when I’m writing fiction, I’m using the right hemisphere as much as the left, but it never occurred to me that readers (or viewers) need the right hemisphere in order to follow a story.
 
According to the authors, the difference between the left and right neo-cortex is one of connections. The left hemisphere has ‘local’ connections, whereas the right hemisphere has more widely spread connections. This seems to correspond to an ‘analytic’ ability in the left hemisphere, and a more ‘creative’ ability in the right hemisphere, where we make conceptual connections that are more wide-ranging. I’ve probably oversimplified that, but it was the gist I got from their exposition.
 
Like most books and videos on ‘creative thinking’ or ‘insights’ (as the authors prefer), they spend a lot of time giving hints and advice on how to improve your own creativity. It’s not until one is more than halfway through the book, in a chapter titled, The Insightful and the Analyst, that they get to the crux of the issue, and describe how there are effectively 2 different types who think differently, even in a ‘resting state’, and how there is a strong genetic component.
 
I’m not surprised by this, as I saw it in my own family, where the difference is very distinct. In another chapter, they describe the relationship between creativity and mental illness, but they don’t discuss how artists are often moody and neurotic, which is a personality trait. Openness is another personality trait associated with creative people. I would add another point, based on my own experience: if someone is creative and they are not creating, they can suffer depression. This is not discussed by the authors either.
 
Regarding the 2 types they refer to, they acknowledge there is a spectrum, and I can’t help but wonder where I sit on it. I spent a working lifetime in engineering, which is full of analytic types, though I didn’t work in a technical capacity. Instead, I worked with a lot of technical people of all disciplines: from software engineers to civil and structural engineers to architects, not to mention lawyers and accountants, because I worked on disputes as well.
 
The curious thing is that I was aware of 2 modes of thinking, where I was either looking at the ‘big-picture’ or looking at the detail. I worked as a planner, and one of my ‘tricks’ was the ability to distil a large and complex project into a one-page ‘Gantt’ chart (bar chart). For the individual disciplines, I’d provide a multipage detailed ‘program’ just for them.
 
Of course, I also write stories, where the 2 components are plot and character. Creating characters is purely a non-analytic process, which requires a lot of extemporising. I try my best not to interfere, and I do this by treating them as if they are real people, independent of me. Plotting, on the other hand, requires a big-picture approach, but I almost never know the ending until I get there. In the last story I wrote, I was in COVID lockdown when I knew the ending was close, so I wrote some ‘notes’ in an attempt to work out what happens. Then, sometime later (like a month), I had one sleepless night when it all came to me. Afterwards, I went back and looked at my notes, and they were all questions – I didn’t have a clue.

Thursday 25 May 2023

Philosophy’s 2 disparate strands: what can we know; how can we live

The question I’d like to ask is: is there a philosophical view that encompasses both? Some may argue that Aristotle attempted that, but I’m going to take a different approach.
 
For a start, the first part can arguably be broken into 2 further strands: physics and metaphysics. And even this divide is contentious, with some arguing that metaphysics is an ‘abstract theory with no basis in reality’ (one dictionary definition).
 
I wrote an earlier post arguing that we are ‘metaphysical animals’ after discussing a book of the same name, though it was really a biography of 4 Oxford women in the 20th Century: Elizabeth Anscombe, Mary Midgley, Philippa Foot and Iris Murdoch. But I’ll start with this quote from said book.
 
Poetry, art, religion, history, literature and comedy are all metaphysical tools. They are how metaphysical animals explore, discover and describe what is real (and beautiful and good). (My emphasis.)
 
So, arguably, metaphysics could give us a connection between the 2 ‘strands’ in the title. Now here’s the thing: I contend that mathematics should be part of that list, hence part of metaphysics. And, of course, we all know that mathematics is essential to physics as an epistemology. So physics and metaphysics, in my philosophy, are linked in a rather intimate way.
 
The curious thing about mathematics, or anything metaphysical for that matter, is that, without human consciousness, they don’t really exist, or are certainly not manifest. Everything on that list is a product of human consciousness, notwithstanding that there could be other conscious entities somewhere in the universe with the same capacity.
 
But again, I would argue that mathematics is an exception. I agree with a lot of mathematicians and physicists that while we create the symbols and language of mathematics, we don’t create the intrinsic relationships that said language describes. And furthermore, some of those relationships seem to govern the universe itself.
 
And completely relevant to the first part of this discussion, the limits of our knowledge of mathematics seem to determine the limits of our knowledge of the physical world.
 
I’ve written other posts on how to live, specifically, 3 rules for humans and How should I live? But I’m going to go via metaphysics again, specifically storytelling, because that’s something I do. Storytelling requires an inner and outer world, manifest as character and plot, which is analogous to free will and fate in the real world. Now, even these concepts are contentious, especially free will, because many scientists tell us it’s an illusion. Again, I’ve written about this many times, but its relevance to my approach to fiction is that I try and give my characters free will. An important part of my fiction is that the characters are independent of me. If my characters don’t take on a life of their own, then I know I’m wasting my time, and I’ll ditch that story.
 
Its relevance to ‘how to live’ is authenticity. Artists understand better than most the importance of authenticity in their work, which really means keeping themselves out of it. But authenticity has ramifications, as any existentialist will tell you. To live authentically requires an honesty to oneself that is integral to one’s being. And ‘being’ in this sense is about being human rather than its broader ontological meaning. In other words, it’s a fundamental aspect of our psychology, because it evolves and changes according to our environment and milieu. Also, in the world of fiction, it's a fundamental dynamic.
 
What's more, if you can maintain this authenticity (and it’s genuine), then you gain people’s trust, and that becomes your currency, whether in your professional life or your social life. However, there is nothing more fake than false authenticity; examples abound.
 
I’ll give the last word to Socrates, arguably the first existentialist.
 
To live with honour in this world, actually be what you try to appear to be.


Saturday 14 January 2023

Why do we read?

This is almost the same title as a book I bought recently (Why We Read), containing 70 short essays on the subject, featuring scholars of all stripes: historians, philosophers, and of course, authors. It even includes scientists: Paul Davies, Richard Dawkins and Carlo Rovelli being 3 I’m familiar with.
 
One really can’t overstate the importance of the written word, because, oral histories aside, it allows us to extend memories across generations and accumulate knowledge over centuries that has led to civilisations and technologies that we all take for granted. By ‘we’, I mean anyone reading this post.
 
Many of the essayists write from their personal experiences and I’ll do the same. The book, edited by Josephine Greywoode and published by Penguin, specifically says on the cover in small print: 70 Writers on Non-Fiction; yet many couldn’t help but discuss fiction as well.
 
Books are generally divided between fiction and non-fiction, and I believe we read them for different reasons; I wouldn’t necessarily consider one less important than the other. I also write fiction and non-fiction, so I have a particular view on this. Basically, I read non-fiction in order to learn and I read fiction for escapism. Both started early for me and I believe the motivation hasn’t changed.
 
I started reading extra-curricular books from about the age of 7 or 8, involving creatures mostly, and I even asked for an encyclopaedia for Christmas at around that time, which I read enthusiastically. I devoured non-fiction books, especially if they dealt with the natural world. But at the same time, I read comics, remembering that we didn’t have TV at that time, which was only just beginning to emerge.
 
I think one of the reasons that boys read less fiction than girls these days is that comics have effectively disappeared, being replaced by video games. And the modern comics that I have seen don’t even contain a complete narrative. Nevertheless, there are graphic novels that I consider brilliant, Neil Gaiman’s Sandman series and Hayao Miyazaki’s Nausicaä of the Valley of the Wind being standouts. Watchmen by Alan Moore also deserves a mention.
 
So the escapism also started early for me, in the world of superhero comics, and I started writing my own scripts and drawing my own characters pre-high school.
 
One of the essayists in the collection, Niall Ferguson (author of Doom) starts off by challenging a modern paradigm (or is it a meme?) that we live in a ‘simulation’, citing Oxford philosopher, Nick Bostrom, writing in the Philosophical Quarterly in 2003. Ferguson makes the point that reading fiction is akin to immersing the mind in a simulation (my phrasing, not his).
 
In fact, a dream is very much like a simulation, and, as I’ve often said, the language of stories is the language of dreams. But here’s the thing: the motivation for writing fiction, for me, is the same as the motivation for reading it – escapism. Whether reading or writing, you enter a world that only exists inside your head. The ultimate solipsism.

And this surely is a miracle of written language: that we can conjure a world with characters who feel real and elicit emotional responses, while we follow their exploits, failures, love life and dilemmas. It takes empathy to read a novel, and tests have shown that people’s empathy increases after they read fiction. You engage with the character and put yourself in their shoes. It’s one of the reasons we read.
 
 
Addendum: I would recommend the book, by the way, which contains better essays than mine, all with disparate, insightful perspectives.
 

Sunday 10 July 2022

Creative and analytic thinking

I recently completed an online course with a similar title, How to Think Critically and Creatively. It must be the 8th or 9th course I’ve done through New Scientist, on a variety of topics, from cosmology and quantum mechanics to immunology and sustainable living; so quite diverse subjects. I started doing them during COVID, as they helped to pass the time and stimulate the brain at the same time.
 
All these courses rely on experts in their relevant fields from various parts of the globe, so not just UK based, as you might expect. This course was no exception with just 2 experts, both from America. Denise D Cummins is described as a ‘cognitive scientist, author and elected Fellow of the Association for Psychological Science, and she’s held faculty at Yale, UC, University of Illinois and the Centre of Adaptive Behaviours at the Max Planck Institute in Berlin’. Gerard J Puccio is ‘Department Chair and Professor at the International Centre for Studies on Creativity, Buffalo State; a unique academic department that offers the world’s only Master of Science degree in creativity’.
 
I admit to being sceptical that ‘creativity’ can be taught, but that depends on what one means by creativity. If creativity means using your imagination, then yes, I think it can, because imagination is something that we all have, and it’s probably a valid comment that we don’t make enough use of it in our everyday lives. If creativity means artistic endeavour then I think that’s another topic, even though it puts imagination centre stage, so to speak.
 
I grew up in a family where one side was obviously artistic and the other side wasn’t, which strongly suggests there’s a genetic component. The other side excelled at sport, and I was rubbish at sport. However, both sides were obviously intelligent, despite a notable lack of formal education; in my parents’ case, both leaving school in their early teens. In fact, my mother did most of her schooling by correspondence, and my father left school in the midst of the great depression, shortly followed by active duty in WW2.
 
Puccio (mentioned above) argues that creativity isn’t taught in our education system because it’s too hard. Instead, he says that we teach by memorising facts and by ‘understanding’ problems. I would suggest that there is a hierarchy, where you need some basics before you can ‘graduate’ to ‘creative thinking’, and I use the term here in the way he intends it. I spent most of my working lifetime on engineering projects, with diverse and often complex elements. I need to point out that I wasn’t one of the technical experts involved, but I worked with them, in all their variety, because my job was to effectively co-ordinate all their activities towards a common goal, by providing a plan and then keeping it on the rails.
 
Engineering is all about problem solving, and I’m not sure one can do that without being creative, as well as analytical. In fact, one could argue that there is a dialectical relationship between them, but maybe I’m getting ahead of myself.
 
Back to Puccio, who introduced 2 terms I hadn’t come across before: ‘divergent’ and ‘convergent’ thinking, arguing they should be done in that order. In a nutshell, divergent thinking is brainstorming where one thinks up as many options as possible, and convergent thinking is where one narrows in on the best solution. He argues that we tend to do the second one without doing the first one. But this is related to something else that was raised in the course, which is ‘Type 1 thinking’ and ‘Type 2 thinking’.
 
Type 1 thinking is what most of us would call ‘intuition’, because basically it’s taking a cognitive shortcut to arrive at an answer to a problem, which we all do all the time, especially when time is at a premium. Type 2 thinking is when we analyse the problem, which is not only time consuming but takes up brain resources that we’d prefer not to use, because we’re basically lazy, and I’m no exception. These 2 cognitive behaviours are clinically established, so it’s not pop-science.
 
However, something that was not discussed in the course, is that type 2 thinking can become type 1 thinking when we develop expertise in something, like learning a musical instrument, or writing a story, or designing a building. In other words, we develop heuristics based on our experience, which is why we sometimes jump to convergent thinking without going through the divergent part.
 
The course also dealt with ‘critical thinking’, as per its title, but I won’t dwell on that, because critical thinking arises from being analytical, and separating true expertise from bogus expertise, which is really a separate topic.
 
How does one teach these skills? I’m not a teacher, so I’m probably not best qualified to say. But I have a lot of experience in a profession that requires analytical thinking and problem-solving as part of its job description. The one thing I’ve learned from my professional life is the more I’m restrained by ‘rules’, the worse job I’ll do. I require the freedom and trust to do things my own way, and I can’t really explain that, but it’s also what I provide to others. And maybe that’s what people mean by ‘creative thinking’; we break the rules.
 
Artistic endeavour is something different again, because it requires spontaneity. But there is ‘divergent thinking’ involved, as Puccio pointed out, giving the example of Hemingway writing countless endings to A Farewell to Arms, before settling on the final version. I’m reminded of the reported difference between Beethoven and Mozart, two of the greatest composers in the history of Western classical music. Beethoven would try many different versions of something (in his head and on paper) before choosing what he considered the best. He was extraordinarily prolific, yet he wrote only 9 symphonies and 5 piano concertos plus one violin concerto, because he workshopped them to death. Mozart, on the other hand, apparently wrote down whatever came into his head and hardly revised it. One was very analytical in their approach and the other was almost completely spontaneous.
 
I write stories and the one area where I’ve changed type 2 thinking into type 1 thinking is in creating characters – I hardly give it a thought. A character comes into my head almost fully formed, as if I just met them in the street. Over time I learn more about them and they sometimes surprise me, which is always a good thing. I once compared writing dialogue to playing jazz, because they both require spontaneity and extemporisation. Don Burrows once said you can’t teach someone to play jazz, and I’ve argued that you can’t teach someone to write dialogue.
 
Having said that, I once taught a creative writing class, and I gave the class exercises where they were forced to write dialogue, without telling them that that was the point of the exercise. In other words, I got them to teach themselves.
 
The hard part of storytelling for me is the plot, because it’s a never-ending exercise in problem-solving. How did I get back to here? Analytical thinking is very hard to avoid, at least for me.
 
As I mentioned earlier, I think there is a dialectic between analytical thinking and creativity, and the best examples are not artists but genii in physics. To look at just two: Einstein and Schrodinger, because they exemplify both. But what came first: the analysis or the creativity? Well, I’m not sure it matters, because they couldn’t have done one without the other. Einstein had an epiphany (one of many) where he realised that an object in free fall didn’t experience a force, which apparently contradicted Newton. Was that analysis or creativity or both? Anyway, he not only changed how we think about gravity, he changed the way we think about the entire cosmos.
 
Schrodinger borrowed an idea from de Broglie that particles could behave like waves and changed how we think about quantum mechanics. As Richard Feynman once said, ‘No one knows where Schrodinger’s equation comes from. It came out of Schrodinger’s head. You can’t derive it from anything we know.’
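For reference (my addition, not something Feynman or the course spelled out), the equation in question is the time-dependent Schrödinger equation, which governs how a quantum system’s wave function evolves:

i\hbar \, \frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \hat{H}\,\Psi(\mathbf{r},t)

where \hat{H} is the Hamiltonian (energy) operator for the system in question.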
 

Sunday 22 May 2022

We are metaphysical animals

 I’m reading a book called Metaphysical Animals (How Four Women Brought Philosophy Back To Life). The four women were Mary Midgley, Iris Murdoch, Philippa Foot and Elizabeth Anscombe. The first two I’m acquainted with and the last two, not. They were all at Oxford during the War (WW2) at a time when women were barely tolerated in academia and had to be ‘chaperoned’ to attend lectures. Also a time when some women students ended up marrying their tutors. 

The book is authored by Clare Mac Cumhaill and Rachael Wiseman, both philosophy lecturers who became friends with Mary Midgley in her final years (Mary died in 2018, aged 99). The book is part biographical of all 4 women and part discussion of the philosophical ideas they explored.

 

Bringing ‘philosophy back to life’ is an allusion to the response (backlash is too strong a word) to the empiricism, logical positivism and general rejection of metaphysics that had taken hold of English philosophy, also known as analytical philosophy. Iris spent time in postwar Paris where she was heavily influenced by existentialism and Jean-Paul Sartre, in particular, whom she met and conversed with. 

 

If I was to categorise myself, I’m a combination of analytical philosopher and existentialist, which I suspect many would see as a contradiction. But this isn’t deliberate on my part – more a consequence of pursuing my interests, which are science on one hand (with a liberal dose of mathematical Platonism) and how-to-live a ‘good life’ (to paraphrase Aristotle) on the other.

 

Iris was intellectually seduced by Sartre’s exhortation: “Man is nothing else but that which he makes of himself”. But as her own love life fell apart along with all its inherent dreams and promises, she found putting Sartre’s implicit doctrine, of standing solitarily and independently of one’s milieu, difficult to do in practice. I’m not sure if Iris was already a budding novelist at this stage of her life, but anyone who writes fiction knows that this is what it’s all about: the protagonist sailing their lone ship on a sea full of icebergs and other vessels, all of which are outside their control. Life, like the best fiction, is an interaction between the individual and everyone else they meet. Your moral compass, in particular, is often tested. Existentialism can be seen as an attempt to rise above this, but most of us don’t.

 

Not surprisingly, Wittgenstein looms large in many of the pages, and at least one of the women, Elizabeth Anscombe, had significant interaction with him. With Wittgenstein comes an emphasis on language, which has arguably determined the path of philosophy since. I’m not a scholar of Wittgenstein by any stretch of the imagination, but one thing he taught, or that people took from him, was that the meaning we give to words is a consequence of how they are used in ordinary discourse. Language requires a widespread consensus to actually work. It’s something we rarely think about but we all take for granted, otherwise there would be no social discourse or interaction at all. There is an assumption that when I write these words, they have the same meaning for you as they do for me, otherwise I am wasting my time.

 

But there is a way in which language is truly powerful, and I have done this myself. I can write a passage that creates a scene inside your mind complete with characters who interact and can cause you to laugh or cry, or pretty much any other emotion, as if you were present; as if you were in a dream.

 

There are a couple of specific examples in the book which illustrate Wittgenstein’s influence on Elizabeth and how she used them in debate. They are both topics I have discussed myself without knowing of these previous discourses.

 

In 1947, so just after the war, Elizabeth presented a paper to the Cambridge Moral Sciences Club, which she began with the following disclosure:

 

Everywhere in this paper I have imitated Dr Wittgenstein’s ideas and methods of discussion. The best that I have written is a weak copy of some features of the original, and its value depends only on my capacity to understand and use Dr Wittgenstein’s work.

 

The subject of her talk was whether one can truly talk about the past, which goes back to the pre-Socratic philosopher, Parmenides. In her own words, paraphrasing Parmenides, ‘To speak of something past’ would then be to ‘point our thought’ at ‘something there’, but out of reach. Bringing Wittgenstein into the discussion, she claimed that Parmenides’ specific paradox about the past arose ‘from the way that thought and language connect to the world’.

 

We apply language to objects by naming them, but, in the case of the past, the objects no longer exist. She attempts to resolve this epistemological dilemma by discussing the nature of time as we experience it, which is like a series of pictures that move on a timeline while we stay in the present. This is analogous to my analysis that everything we observe becomes the past as soon as it happens, which is exemplified every time someone takes a photo, but we remain in the present – the time for us is always ‘now’.

 

She explains that the past is a collective recollection, recorded in documents and photos, so it’s dependent on a shared memory. I would say that this is what separates our recollection of a real event from a dream, which is solipsistic and not shared with anyone else. But it doesn’t explain why the past appears fixed and the future unknown, which she also attempted to address. But I don’t think this can be addressed without discussing physics.

 

Most physicists will tell you that the asymmetry between the past and future can only be explained by the second law of thermodynamics, but I disagree. I think it is described, if not explained, by quantum mechanics (QM) where the future is probabilistic with an infinitude of possible paths and classical physics is a probability of ONE because it’s already happened and been ‘observed’. In QM, the wave function that gives the probabilities and superpositional states is NEVER observed. The alternative is that all the futures are realised in alternative universes. Of course, Elizabeth Anscombe would know nothing of these conjectures.
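To put my probability claim in symbols (my own gloss, using the standard Born rule, not an argument Anscombe or the book makes): the wave function assigns a probability to each possible outcome, and those probabilities sum to one, whereas an outcome that has already been observed has a probability of one.

P(x) = |\psi(x)|^2, \qquad \sum_x P(x) = 1, \qquad P(x_{\text{observed}}) = 1

The asymmetry I’m pointing to is between the first picture (many weighted possibilities) and the last (a single realised fact).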

 

But I would make the point that language alone does not resolve this. Language can only describe these paradoxes and dilemmas but not explain them.

 

Of course, there is a psychological perspective to this, which many people, including physicists, claim gives the only sense of time passing. According to them, it’s fixed: past, present and future; and our minds create this distinction. I think our minds create the distinction because only consciousness creates a reference point for the present. Everything non-sentient is in a causal relationship that doesn’t sense time. Photons of light, for example, exist in zero time, yet they determine causality. Only light separates everything in time as well as space. I’ve gone off-topic.

 

Elizabeth touched on the psychological aspect, possibly unintentionally (I’ve never read her paper, so I could be wrong): that our memories of the past are actually imagined. We use the same part of the brain to imagine the past as we do to imagine the future, but again, Elizabeth wouldn’t have known this. Nevertheless, she understood that our (only) knowledge of the past is a thought that we turn into language in order to describe it.

 

The other point I wish to discuss is a famous debate she had with C.S. Lewis. This is quite something, because back then, C.S. Lewis was a formidable intellectual figure. Elizabeth’s challenge was all the more remarkable because Lewis’s argument appeared on the surface to be very sound. Lewis argued that the ‘naturalist’ position was self-refuting if it was dependent on ‘reason’, because reason by definition (not his terminology) is based on the premise of cause and effect and human reason has no cause. That’s a simplification, nevertheless it’s the gist of it. Elizabeth’s retort:

 

What I shall discuss is this argument’s central claim that a belief in the validity of reason is inconsistent with the idea that human thought can be fully explained as the product of non-rational causes.

 

In effect, she argued that reason is what humans do perfectly naturally, even if the underlying ‘cause’ is unknown. Not knowing the cause does not make the reasoning irrational nor unnatural. Elizabeth specifically cited the language that Lewis used. She accused him of confusing the concepts of “reason”, “cause” and “explanation”.

 

My argument would be subtly different. For a start, I would contend that by ‘reason’, he meant ‘logic’, because drawing conclusions based on cause and effect is logic, even if the causal relations (under consideration) are assumed or implied rather than observed. And here I contend that logic is not a ‘thing’ – it’s not an entity; it’s an action – something we do. In the modern age, machines perform logic, sometimes better than we do.

 

Secondly, I would ask Lewis, does he think reason only happens in humans and not other animals? I would contend that animals also use logic, though without language. I imagine they’d visualise their logic rather than express it in vocal calls. The difference with humans is that we can perform logic at a whole different level, but the underpinnings in our brains are surely the same. Elizabeth was right: not knowing its physical origins does not make it irrational; they are separate issues.

 

Elizabeth had a strong connection to Wittgenstein right up to his death. She worked with him on a translation and edit of Philosophical Investigations, and he bequeathed her a third of his estate and a third of his copyright.

 

It’s apparent from Iris’s diaries and other sources that Elizabeth and Iris fell in love at one point in their friendship, which caused them both a lot of angst and guilt because of their Catholicism. Despite marrying, Iris later had an affair with Pip (Philippa).

 

Despite my discussion of just 2 of Elizabeth’s arguments, I don’t have the level of erudition necessary to address most of the topics that these 4 philosophers published in. Just reading the 4-page Afterword, it’s clear that I haven’t even brushed the surface of what they achieved. Nevertheless, I have a philosophical perspective that I think finds some resonance with their mutual ideas.

 

I’ve consistently contended that the starting point for my philosophy is that for each of us individually, there is an inner and outer world. It even dictates the way I approach fiction. 

 

In the latest issue of Philosophy Now (Issue 149, April/May 2022), Richard Oxenberg, who teaches philosophy at Endicott College in Beverly, Massachusetts, wrote an article titled, What Is Truth? wherein he describes an interaction between 2 people, but only from a purely biological and mechanical perspective, and asks, ‘What is missing?’ Well, even though he doesn’t spell it out, what is missing is the emotional aspect. Our inner world is dominated by emotional content and one suspects that this is not unique to humans. I’m pretty sure that other creatures feel emotions like fear, affection and attachment. What’s more I contend that this is what separates, not just us, but the majority of the animal kingdom, from artificial intelligence.

 

But humans are unique, even among other creatures, in our ability to create an inner world every bit as rich as the one we inhabit. And this creates a dichotomy that is reflected in our division of arts and science. There is a passage on page 230, where the authors discuss R.G. Collingwood’s influence on Mary and provide an unexpected definition.

 

Poetry, art, religion, history, literature and comedy are all metaphysical tools. They are how metaphysical animals explore, discover and describe what is real (and beautiful and good). (My emphasis.)

 

I thought this summed up what they mean with their coinage, metaphysical animals, which titles the book, and arguably describes humanity’s most unique quality. Descriptions of metaphysics vary and elude precise definition but the word, ‘transcendent’, comes to mind. By which I mean it’s knowledge or experience that transcends the physical world and is most evident in art, music and storytelling, but also includes mathematics in my Platonic worldview.


 

Footnote: I should point out that certain chapters in the book give considerable emphasis to moral philosophy, which I haven’t even touched on, so another reader might well discuss other perspectives.


Saturday 26 March 2022

Symptoms of living in a post-truth world

I recently had 2 arguments with different people, who took extreme positions on what we mean by truth. One argued that there is no difference between mathematical truths and truths in fiction – in fact, he described mathematics that is not being ‘applied’ as ‘mathematical fiction’. The other argued that there is no objective truth and that everything we claim to know consists only of ‘beliefs’, including mathematics. When I told her that I know there will always be mathematics that remains unknown, she responded that I ‘believe I know’. I thought that was an oxymoron, but I let it go. The trivial example, that there are an infinite number of primes or an infinite number of digits in pi, should put that to rest, or so one would think.
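For what it’s worth, the primes example takes only a couple of lines of reasoning (Euclid’s argument, not mine): if p_1, p_2, ..., p_n were all the primes that exist, then

N = p_1 p_2 \cdots p_n + 1

leaves a remainder of 1 when divided by each of them, so any prime factor of N must lie outside the list, contradicting the assumption that the list was complete.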

Norman Wildberger, whom I’ve cited before, says that he doesn’t ‘believe’ in Real numbers, and neither does he believe in infinity, and he provides compelling arguments. But I feel that he’s redefining what we mean by mathematics, because his sole criterion is that it can be computed. Meanwhile, we have a theorem by Gregory Chaitin who contends that there are infinitely more incomputable Real numbers than computable Real numbers. People will say that mathematics is an abstract product of the mind, so who cares. But, as Paul Davies says, ‘mathematics works’, and it works so well that we can comprehend the Universe from the cosmic scale to the infinitesimal. 

 

Both of my interlocutors, I should point out, were highly intelligent, well-educated and very articulate, and I believe that they really believed in what they were saying. But, if there is no objective truth, then there are no 'true or false' questions that can be answered. To take the example I’ve already mentioned, it’s either true or false that we can’t know everything in mathematics. And if it’s false, then in principle we can know everything. But my interlocutor would say that I claimed we’d never know and I can’t say I know that for sure.

 

Well, putting aside the trivial example of infinity, there are proofs based on logic that say it’s true, and that’s good enough for me. She claimed that logic can be wrong if the inputs are wrong, which is correct. In mathematics, this is dependent on axioms, and mathematics, like all other sciences, never stands still, so we keep getting new axioms. But it’s the nature of science that it builds on what went before, and, if it’s all ‘belief’, then it’s a house built on sand. And if it's a house built on sand, then all the electronic gadgets we use and the satellite systems we depend on could all crash without warning, but no one really believes that.

 

So that’s one side of the debate and the other side is that truths in art have the same status as truths in science. There are a couple of arguments one can use to counter this, the most obvious being that a work of art, like Beethoven’s 5th, is unique – no one else created that. But Pythagoras’s theorem could have been discovered by anyone, and in fact, it was discovered by the Chinese some 500 years before Pythagoras. I write fiction, and while I borrow tropes and themes and even plot devices from others, I contend that my stories are unique and so are the characters I create. In fact, my stories are so unique, that they don’t even resemble each other, as at least one reader has told me.

 

But there is another argument and that involves memes, which are cultural ideas, for want of a better definition, that persist and propagate. Now, some argue that virtually everything is a meme, including scientific theories and mathematical theorems. But there is a difference. Cultural memes are successful because they outlive their counterparts, but scientific theories and mathematical theorems outlive their counterparts because they are successful. And that’s a fundamental distinction between truth in mathematics and science, and truth in art.



Addendum: I just came across this video (only posted yesterday) and it’s very apposite to this post. It’s about something called Zero Knowledge Proof, and it effectively reveals whether someone is lying or not. Its relevance to my essay is that it applies to true or false questions. You can tell if someone is telling the truth without actually knowing what that truth is. Apparently, it’s used algorithmically as part of blockchain for bitcoin transactions.

 

To give the example that Jade provides in her video, if someone claims that they have a proof of Riemann’s hypothesis, you can tell if they’re lying or not without them having to reveal the actual proof. That’s a very powerful tool, and, as a consequence, it virtually guarantees that a mathematical truth exists for a true or false proposition; in this hypothetical case, Riemann’s hypothesis, because it’s either true or false by definition.
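To make the idea slightly more concrete, here is a minimal sketch in Python of one classic protocol of this kind, Schnorr identification – my own toy illustration, under the assumption that any standard zero-knowledge protocol will do, and not the specific construction in Jade’s video. The prover demonstrates she knows a secret number without ever revealing it.

```python
# Toy Schnorr identification: prove knowledge of a secret x with y = g^x (mod p)
# without revealing x. Parameters are tiny for readability; real systems use
# primes hundreds of digits long.
import secrets

p, q, g = 1019, 509, 4        # p prime, q divides p-1, g generates a subgroup of order q

x = secrets.randbelow(q)      # the prover's secret
y = pow(g, x, p)              # the public value everyone can see

def prover_commit():
    """Prover picks a random nonce r and commits to t = g^r mod p."""
    r = secrets.randbelow(q)
    return r, pow(g, r, p)

def prover_respond(r, c):
    """Prover answers the verifier's challenge c with s = r + c*x (mod q)."""
    return (r + c * x) % q

def verifier_check(t, c, s):
    """Verifier accepts iff g^s == t * y^c (mod p); x itself is never sent."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# One round of the protocol; repeating it drives a cheater's odds towards zero,
# while the transcript (t, c, s) reveals nothing about x.
r, t = prover_commit()
c = secrets.randbelow(q)      # verifier's random challenge
s = prover_respond(r, c)
print("verifier accepts:", verifier_check(t, c, s))
```

The point, as in the video, is that the verifier ends up convinced of a true-or-false claim (does the prover know x?) without learning anything about the secret itself.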






Sunday 21 November 2021

Cancel culture – the scourge of our time

There are many things that cause me some anguish at the moment, not least that Donald Trump could easily be re-elected POTUS in 2024, despite deliberately undermining and damaging the very institution he wants to lead, which is American democracy. It’s not an exaggeration to say that he’s attacked it at its core.


This may seem a mile away from the topic I’ve alluded to in the title of my post, but they both seem to be symptoms of a divisiveness I haven’t seen since the Vietnam war. 

 

The word, ‘scourge’, is defined as ‘a whip used as an instrument of punishment’; and that’s exactly how cancel culture works, with social media the perfect platform from which to wield it.

 

In this weekend’s Good Weekend magazine (Fairfax Group), the feature article is on this very topic. But I would like to go back to the previous weekend, when another media outlet, Murdoch’s Weekend Australian Magazine, published an article on well-known atheist, Richard Dawkins. It turns out that at the ripe old age of 80, Dawkins has been cancelled. To be precise, he had his 1996 Humanist of the Year award withdrawn by the American Humanist Association (AHA) earlier this year, because, in 2015, he tweeted a defence of Rachel Dolezal (a white chapter president of the NAACP, the National Association for the Advancement of Colored People) who had been vilified for identifying as Black.

 

Of course, I don’t know anything about Rachel Dolezal or the context of that stoush, but I can identify with Dawkins, even though I’ve never suffered the same indignity. Dawkins and I are of a similar mould, though we live in different strata of society. In saying that, I don’t mean that I agree with all his arguments, because I obviously don’t, but we are both argumentative and are not shy in expressing our opinions. I really don’t possess the moral superiority to throw stones at Dawkins, even though I have.

 

I remember my father once telling me that if you admired an Australian fast bowler (he had someone in mind) then you also had to admire an English fast bowler (of the same generation), because they had the exact same temperament and wicket-taking abilities. Of course, that also applies to politicians. And it pretty much applies to me and Dawkins.

 

On the subject of identifying as ‘black’, I must tell a story related to me by a friend I knew when I worked in Princeton in 2001/2. She was a similar age to me and originally from Guyana. In fact, she was niece to West Indies champion cricketer, Lance Gibbs, and told me about attending his wedding when she was 8 years old (I promise no more cricketing references). But she told me how someone she knew (outside of work) told her that she ‘didn’t know what it was like to be black’. To which she replied, ‘Of course I know I’m black, I only have to look in the mirror every morning.’  Yes, it’s funny, but it goes to a deeper issue about identity. So a black person, who had lived their entire life in the USA, was telling another black person, who had come from outside of the US, that they didn’t know what it was like to be ‘black’. 

 

Dawkins said that, as a consequence, he’d started to self-censor, which is exactly what his detractors want. If Dawkins has started to self-censor, then none of us are safe or immune. What hurt him, of course, was being attacked by people on the Left, which he mostly identifies with. And, while this practice occurs on both sides, it’s on the Left where it has become most virulent. 

 

“I self-censor. More so in recent years. Why? It’s not a thing I’ve done throughout my life, I’ve always spoken my mind openly. But we’re now in a time when if you do speak your mind openly, you are at risk of being picked up and condemned.”

 

“Every time a lecturer is cancelled from an American university, that’s another God knows how many votes for Trump.”

 

And this is the thing: the Right loves nothing more than the Left turning on itself. It’s insidious, self-destructive and literally soul-destroying. In the Good Weekend article, they focus on a specific case, while also citing other cases, both in Australia and America. The specific case was actor Hugh Sheridan having a Sydney Festival show cancelled, which he’d really set his sights on, because he was playing a transgender person, which created outrage in the LGBTQIA+ community. Like others cited in the article, he contemplated suicide, which triggered close friends to monitor him. This is what it’s come to. It’s a very lengthy article, which I can’t do justice to in this post, but there is a perversion here: all the shows and people being targeted are actually bringing diversity of race and sexuality into the public arena and being crucified by the people they represent. The conservatives, wowsers and Bible-bashers must love it.

 

This is a phenomenon that is partly, if not mostly, generational, and amplified by social media. People are being forced to grovel.

 

Emma Dawson, head of the Labor-aligned (Australian political party, for overseas readers) Per Capita think tank, told the Good Weekend: “[Cancel culture is] more worrying to me than just about anything other than far-right extremism. It is pervasive among educated young people; very few are willing to question it.”

 

In 2019, Barack Obama warned a group of young people: “This idea of purity, and you’re never compromised and always politically woke... you should get over that quickly. The world is messy.”

 

And this is the nub of the issue: cancel culture is all about silencing any debate, and, without debate, you have authoritarianism, even though it’s disguised as its opposite.

 

In the same article, the author, James Button, argues that it’s no coincidence that this phenomenon emerged alongside the rise of Donald Trump.

 

The election of Donald Trump horrified progressives. Here was a president – elected by ordinary Americans – who was racist, who winked at neo-Nazis and who told bare-faced lies in a brazen assertion of power while claiming that the liars were progressive media. His own strategy adviser, Stephen Bannon, said that the way to win the contest was to overwhelm the media with misinformation, to “flood the zone with shit”.

 

And they succeeded so well that America is more divided than it has been since the Civil War.


To return to Hugh Sheridan, who I think epitomises this situation, at least as it’s being played out in Australia: it’s the Arts that are coming under attack, and from the Left, it has to be said. Actors and writers (like myself) often portray characters who have different backgrounds to us. To give a recent example from ABC TV, which produces some outstanding free-to-air dramas with internationally renowned casts while everything else is going into subscription streaming services: earlier this year, they produced and broadcast a series called The Newsreader, set in the 1980s when a lot of stuff was happening both locally and overseas. ‘At the 11th AACTA (Australian Academy of Cinema and Television Arts) awards, the show was nominated for more awards than any other program’ (Wikipedia).

 

A key plotline of the show was that the protagonist was gay but not openly so. The point is that I assume the actor was straight, although I don’t really know, but it’s what actors do. God knows, there have been enough gay actors who have played straight characters (Sir Ian McKellen, who played Gandalf, as well as Shakespearean roles). So why crucify someone who is part of the LGBTQIA+ community for playing a transgender role? He was even accused of being homophobic and transphobic. He tweeted back, “you’re insane”, which only resulted in him being trolled for accusing his tormentors of being ‘insane’.

 

Someone recently asked me why I don’t publish what I write anymore. There is more than one reason, but one is fear of being cancelled. I doubt a publisher would publish what I write, anyway. But also, I suffer from impostor syndrome in that I genuinely feel like an impostor and I don’t need someone to tell me. The other thing is that I simply don’t care; I don’t feel the need to publish to validate my work.


Wednesday 6 October 2021

Tips on writing sex scenes

 Yes, this is a bit tongue-in-cheek, or tongue in someone else’s cheek to borrow a well-worn witticism. This arose from reading an interview by Benjamin Law (Aussie writer) of Pulitzer-prize winning author, Viet Thanh Nguyen, who briefly discussed writing sex scenes. He gives this advice: 

Not being utterly male-centred, if you happen to be a man or masculine. Not being too vulgar. Don’t be too florid. And humour always helps.

 

Many years ago (over a decade) I attended a writers’ convention, where there are always a lot of panel discussions, and there was one on ‘How to write sex scenes’, which was appropriately entertaining and unsurprisingly well attended.

 

Even longer ago, when I attempted to write my first novel, with utterly no experience or tuition, just blindly diving in the deep end, there was the possibility of a sex scene and I chickened out. The entire opus was terrible, but over about 3 years and 3 drafts I taught myself to write. I sent it to a script assessor, who was honest and unflattering. But one of the things I remember is that she criticised me for avoiding the sex scene. I was determined to fix that in subsequent attempts. I still have a hard copy of that manuscript, by the way, to remind myself of how badly I can write.

 

But there are a couple of things I remember from that particular panel discussion (including a husband and wife team on the panel). Someone asked for a definition of pornography, and someone answered: the difference between erotica and pornography is that one you don’t admit to reading (or watching, as is more often the case). So, it’s totally subjective.

 

The first editor (a woman) of ELVENE took offence at the first sex scene. I promptly sent the manuscript to 2 women friends for second and third opinions. Anyway, I think that you’ll invariably offend someone, and the only sure way to avoid that is to have all the sex happen off the page. Some writers do that, and sometimes I do it myself. Why? I think it depends on where it sits in the story, and on whether it’s really necessary to describe every sexual encounter between 2 characters who start doing it regularly.

 

The other thing I remember from that panel is someone explaining how important it was to describe it from one of the character’s points of view. If you describe it from the POV of a ‘third party’, you risk throwing the reader out of the story. I contend that the entire story should be told from a character’s POV, though you can change characters, even within the same scene. The obvious analogy is with dialogue. You rarely change POV in dialogue, though it’s not a hard and fast rule. In other words, the reader’s perspective invariably involves speaking and listening from just one POV, as if they were in the conversation. The POV could be male or female - it’s irrelevant - but it’s usually the protagonist. I take the same approach to sex scenes. It’s risky for a man to take a woman’s POV in a sex scene, but I’ve done it many times. 

 

I often take the POV of the ‘active’ partner, and the reader learns what the other partner is experiencing second-hand so to speak. It generally means that the orgasm is described from the other partner’s perspective which makes it a lot easier. If they come in unison, I make sure the other one comes fractionally first.

 

I don’t write overlong sex scenes, because they become boring. Mine are generally a page long, including all the carry-on that happens beforehand, which is not intentional, just happenstance. I wrote a review of Cory Doctorow’s sci-fi novel, Walkaway, a novel (full title), which has a number of heavy sex scenes that I did find boring, but that probably says more about me than the author. I’m sure there are readers who find my sex scenes ‘heavy’ and possibly boring as well.

 

I have some rules of my own. They are an interlude yet they should always serve the story. They tell us something about the characters and they invariably have consequences, which are often positive, but not necessarily so. There is always a psychological component and my approach is that you can’t separate the psychological from the physical. They change the character and they change the dynamic of a relationship. Some of my characters appear celibate, but you find them in real life too.

 

I take the approach that fiction is a combination of fantasy and reality and the mixture varies from genre to genre and even author to author. So, in this context, the physical is fantasy and the psychological is reality.

 

One should never say ‘never’, but I couldn’t imagine writing a rape scene or someone being tortured, though I’ve seen such scenes on Scandi-noir TV. However, I’ve written scenes involving the sexual exploitation of both men and women, and, in those cases, they were central to the story.

 

Lastly, I often have to tell people that I’m not in the story. I don’t describe my personal sex-life, and I expect that goes for other writers too. 


Saturday 5 December 2020

Some (personal) Notes on Writing

 This post is more personal, so don’t necessarily do what I’ve done. I struggled to find my way as a writer, and this might help to explain why. Someone recently asked me how to become a writer, and I said, ‘It helps, if you start early.’ I started pre-high school, about age 8-9. I can remember writing my own Tarzan scripts and drawing my own superheroes. 

 

Composition, as it was called then, was one of my favourite activities. At age 12 (first year high school), when asked to write about what we wanted to do as adults, I wrote that I wanted to write fiction. I used to draw a lot as a kid, as well. But, as I progressed through high school, I stopped drawing altogether and my writing deteriorated to the point that, by the time I left school, I couldn’t write an essay to save my life; I had constant writer’s block.

 

I was in my 30s before I started writing again and, when I started, I knew it was awful, so I didn’t show it to anyone. A couple of screenwriting courses (in my late 30s) were the best thing I ever did. With screenwriting, the character is all in what they say and what they do, not in what they look like. However, in my fiction, I describe mannerisms and body language as part of a character’s demeanour, in conjunction with their dialogue. Also, screenwriting taught me to be lean and economical – you don’t write anything that can’t be seen or heard on the screen. The main difference in writing prose is that you do all your writing from inside a character’s head; in effect, you turn the reader into an actor, subconsciously. Also, you write in real time so it unfolds like a movie in the reader’s imagination.

 

I break rules, but only because the rules didn’t work for me, and I learned that the hard way. So I don’t recommend that you do what I do, because, from what I’ve heard and read, most writers don’t. I don’t write every day and I don’t do multiple drafts. It took me a long time to accept this, but it was only after I became happy and confident with what I produced. In fact, I can go weeks, even months, without writing anything at all and then pick it up from where I left off.

 

I don’t do rewrites because I learned the hard way that, for me, they are a waste of time. I do revisions, and you can edit something forever without changing the story or its characters in any substantial way. I correct for inconsistencies and possible plot holes, but if you’re going to do a rewrite, you might as well write something completely different – that’s how I feel about it.

 

I recently saw a YouTube discussion between an interviewer and a writer, where they talked about the writer’s method. He said he did a lot of drafts, and there are a lot of highly successful writers who do (I’m not highly successful, yet I don’t think that’s the reason why). However, he said that if you pick up something you wrote some time ago, you can usually tell if it’s any good or not. Well, my writing passes that test for me.

 

I’m happiest when my characters surprise me, and, if they don’t, I know I’m wasting my time. I treat it like it’s their story, not mine; that’s the best advice I can give.

 

How to keep the reader engaged? I once wrote in another post that creating narrative tension is an essential writing skill, and there are a number of ways to do this. Even a slow-moving story can keep a reader engaged, if every scene moves the story forward. I found that keeping scenes short, like in a movie, and using logical sequencing so that one scene sets up the next, keeps readers turning the page. Narrative tension can be subliminally created by revealing information to the reader that the characters don’t know themselves; it’s a subtle form of suspense. Also, narrative tension is often manifest in the relationships between characters. I’ve always liked moral dilemmas, both in what I read (or watch) and what I write.

 

Finally, when I start off a new work, it will often take me into territory I didn’t anticipate; I mean psychological territory, as opposed to contextual territory or physical territory. 

 

A story has all these strands, and when you start out, you don’t necessarily know how they are going to come together – in fact, it’s probably better if you don’t. That way, when they do, it’s very satisfying and there is a sense that the story already existed before you wrote it. It’s like you’re the first to read it, not create it, which I think is a requisite perception.


Saturday 12 September 2020

Dame Diana Rigg (20 Jul 1938 – 10 Sep 2020)

It’s very rare for me to publish 2 posts in 2 days, and possibly unprecedented to publish 3 in less than a week. However, I couldn’t let this pass, for a number of reasons. Arguably, Dame Diana Rigg has had little to do with philosophy but quite a lot to do with culture and, of course, storytelling, which is a topic close to my heart.


In one of the many tributes that came out, there is an embedded video (c/- BBC Archives, 1997), where she talks about acting in a way that most of us don’t perceive. She says, in effect, that an audience comes to a theatre (or a cinema) because they want to ‘believe’, and an actor has to give them (or honour) that ‘belief’. (I use the word ‘honour’; she didn’t.)


This is not dissimilar to the ‘suspension of disbelief’ that writers attempt to draw from their readers. I’ve watched quite a few of Diana Rigg’s interviews, given over the decades, and I’m always struck by her obvious intelligence, not to mention her wit and goodwill.

 

I confess to being somewhat smitten by her character, Emma Peel, as a teenager. It was from watching her that I learned one falls for the character and not the actor playing her. Seeing her in another role, I was at first surprised, then logically reconciled, that she could readily play someone else less appealing.

 

Emma Peel was a role ahead of its time, in which the female lead could have the same hero status as her male partner. She explained, in one of the interviews I saw, that the role had originally been written for a man and they didn’t have time to rewrite it. So it occurred by accident. Originally, it was Honor Blackman, as Cathy Gale (who also passed away this year). But it was Diana Rigg as Emma Peel who seemed to be the perfect foil for Steed (Patrick Macnee). No one else filled those shoes with quite the same charm.

 

It was a quirky show, as only the British seem to be able to pull off: Steed in his vintage Bentley and Mrs Peel in her Lotus Elan, which I desired almost as much as her character.

 

The show time-travelled without a TARDIS, combining elements of fantasy and sci-fi that influenced my own writing. I suspect there is a bit of Emma Peel in Elvene, though I’ve never really analysed it.




Monday 7 September 2020

Secrets to good writing

I wrote this because it came up on Quora as a question: What makes good writing?

I should say up front that there are a lot of much better writers than me, most of whom write for television in various countries; Europe, the UK, America, Australia and New Zealand are the ones I’m most familiar with.

 

I should also point out that you can be ‘good’ at something without being ‘known’, so to speak. Not all ‘good’ cricketers play for Australia and not all ‘good’ footballers play in the national league. I have a friend who has won awards in theatre, yet she’s never made any money out of it; it’s strictly amateur theatre. She was even invited (as part of a group) to partake in a ‘theatre festival’ in Monaco a couple of years ago. Luckily, the group qualified for a government grant so they could participate.

 

Within this context, I call myself a good writer, based partly on feedback and partly on comparing myself to other writers I’ve read. I’ve written about this before, but I’ll keep it simple; almost dot points.

 

Firstly, good writing always tells the story from some character’s point of view (POV) and it doesn’t have to be the same character throughout the story. In fact, you can change POV even within the same scene or within dialogue, but it’s less confusing if you stay in one.

 

You take the reader inside a character’s mind, so they subconsciously become an actor. It’s why the reader is constantly putting themselves in the character’s situation and reacting accordingly.

 

Which brings me to the second point about identifying good writing. It can make the reader cry or laugh or feel angry or scared – in fact, feel any human emotion.

 

Thirdly, good writing makes the reader want to keep returning to the story. There are 2 ways you can do this. The most obvious and easiest way is to create suspense – put someone in jeopardy – which is why crime fiction is so popular.

 

The second way is to make the reader invest in the character(s)’ destiny. They like the characters so much that they keep returning to their journey. This is harder to do, but ultimately more satisfying. Sometimes, you can incorporate both into the same story.


A story should flow, and there is one way that virtually guarantees this. When I attended a screenwriting course (some decades ago), I was told that a scene should either provide information about the story or information about a character or move the story forward. In practice, I found that if I did the last one, the other 2 took care of themselves.


Another ‘trick’ from screenwriting is to write in ‘real time’ with minimal description, which effectively allows the story to unfold like a movie inside the reader’s head.

 

A story is like a journey, and a journey needs a map. A map is a sequence of plot points that are filled in with scenes that become the story.


None of the above are contentious, but my next point is. I contend that good writing is transparent or invisible. By this I mean that readers, by and large, don’t notice good writing, they only notice bad writing. If you watch a movie, the writing is completely invisible. No one consciously comments on good screenwriting; they always comment on the good acting or the good filmmaking, neither of which would exist without a good script.

 

How is this analogous to prose writing? The story takes place in the reader’s imagination, not on the page. Therefore, the writing should be easy to read and it should flow, following a subliminal rhythm; and most importantly, the reader should never be thrown out of the story. Writing that says, ‘look at me, see how clever I am’, is the antithesis of this. I concede, not everyone agrees.

 

I’ve said before that if we didn’t dream, stories wouldn’t work. Dream language is the language of stories, and they can both affect us the same way. I remember when I was a kid, movies could affect me just as dramatically as dreams. When reading a story, we inhabit its world in our imagination, conjuring up imagery without conscious effort.

 

 

Example:

 

The world got closer until it eventually took up almost all their vision. Their craft seemed to level out as if it was skimming the surface, but at an ultra-high altitude. As they got lower the dark overhead was replaced by a cobalt-blue and then they passed through clouds and they could see they were travelling across an ocean with waves tipped by froth, and then eventually they approached a shoreline and they seemed to slow down as a long beach stretched like a ribbon from horizon to horizon. Beyond the beach there were hills and mountains, which they accelerated over until they came to flat grassy plains, and in the distance they saw some dots on the ground, which became a village of people and horses and huts that poked into the air like upside down cones.


Friday 10 July 2020

Not losing the plot, and even how to find it

As I’ve pointed out in previous posts, the most difficult part of writing for me is plotting. The characters come relatively easy, though there is always the danger that they can be too much alike. I’ve noticed from my own reading that some authors produce a limited range of characters, not unlike some actors. Whether I fall into that category is for others to judge. 

But my characters do vary in age and gender and include AI entities (like androids). Ideally, a character reveals more of themselves as the story unfolds and even changes or grows. One shouldn’t do this deliberately; it’s best to just let it happen. Trying not to interfere is the intention, if not always the result.

I’ve also pointed out previously that whether to outline or not is a personal preference, and sometimes a contentious one. As I keep saying, you need to find what works for you, and for me it took trial and error.

In my last post on this topic, I compared plotting to planning a project, because that is what I did professionally. On a project you have milestones that become ‘goals’ and there is invariably a suite of often diverse activities required to come together at the right time. In effect, making sure everything aligns was what my job was all about.

When it comes to plotting, we have ‘plot points’, which are analogous to milestones but not really the same thing. And this is relevant to whether one ‘outlines’ or not. A very good example is given in the movie, Their Finest (excellent movie), which is a film within a film and has a screenwriter as the protagonist. The writers have a board where they pin up the plot points and then join them up with scenes, which is what they write.

On the other hand, a lot of highly successful writers will tell you that they never outline at all, and there is a good reason for that. Spontaneity is what all artists strive for – it’s the very essence of creativity. I’ve remarked myself that the best motivation to write a specific scene is the same as the reader’s: to find out what happens next. As a writer, you know that if you are surprised, then so will your readers be.

Logically, if you don’t have an outline, you axiomatically don’t know what happens next, and the spontaneity that you strive for is all but guaranteed. So what do I do? I do something in between. I learned early on that I need a plot point to aim at, and whether I know what lies beyond that plot point is not essential.

I found a method that works for me, and any writer needs to find a method that works for them. I keep a notebook, where I’ll ‘sketch’ what-ifs, which I’ll often do when I don’t know what the next plot point is. But once I’ve found it, and I always recognise it when I see it, I know I can go back to my story-in-progress. But that particular plot point should be far enough in the future that I can extemporise, and other plot points will occur spontaneously in the interim.

Backstory is often an important part of plot development. J.K. Rowling created a very complex backstory that was only revealed in the last 2 books of her Harry Potter series. George Lucas created such an extensive backstory for Star Wars that he was able to make 3 prequels out of it.

So, whether you outline or not may be dependent on how much you already know about your characters before you start.

Monday 18 May 2020

An android of the seminal android storyteller

I just read a very interesting true story about an android built in the early 2000s, based on the renowned sci-fi author, Philip K Dick, both in personality and physical appearance. It was displayed in public at a few prominent events, where it interacted with the public in 2005, then was lost on a flight between Dallas and Las Vegas in 2006, and has never been seen since. The book is called Lost In Transit: The Strange Story of the Philip K Dick Android by David F Duffy.

You have to read the back cover to know it’s non-fiction, published by Melbourne University Press in 2011, so, surprisingly, a local publication. I bought it from my local bookstore at a 30% discount price as they were closing down for good. They were planning to close by Good Friday, but the COVID-19 pandemic forced them to close a good 2 weeks earlier, and I acquired it at the 11th hour, looking for anything I might find interesting.

To quote the back cover:

David F Duffy was a postdoctoral fellow at the University of Memphis at the time the android was being developed... David completed a psychology degree with honours at the University of Newcastle [Australia] and a PhD in psychology at Macquarie University, before his fellowship at the University of Memphis, Tennessee. He returned to Australia in 2007 and lives in Canberra with his wife and son.

The book is written chronologically and is based on extensive interviews with the team of scientists involved, as well as Duffy’s own personal interaction with the android. He had an insider’s perspective as a cognitive psychologist who had access to members of the team while the project was active. Like everyone else involved, he is a bit of a sci-fi nerd with a particular affinity and knowledge of the works of Philip K Dick.

My specific interest is in the technical development of the android and how its creators attempted to simulate human intelligence. As a cognitive psychologist, with professionally respected access to the team, Duffy is well placed to provide some esoteric knowledge to an interested bystander like myself.

There were effectively 2 people responsible (or 2 team leaders), David Hanson and Andrew Olney, who were brought together by Professor Art Graesser, head of the Institute of Intelligent Systems, a research lab in the psychology building at the University of Memphis (hence the connection with the author).

Hanson is actually an artist, and his specialty was building ‘heads’ with humanlike features and humanlike abilities to express facial emotions. His heads included mini-motors that pulled on a ‘skin’, which could mimic a range of facial movements, including talking.

Olney developed the ‘brains’ of the android that actually resided on a laptop and was connected by wires going into the back of the android’s head. Hanson’s objective was to make an android head that was so humanlike that people would interact with it on an emotional and intellectual level. For him, the goal was to achieve ‘empathy’. He had made at least 2 heads before the Philip K Dick project.

Even though the project got the ‘blessing’ of Dick’s daughters, Laura and Isa, and access to an inordinate amount of material, including transcripts of extensive interviews, they had mixed feelings about the end result, and, tellingly, they were ‘relieved’ when the head disappeared. It suggests that it’s not the way they wanted him to be remembered.

In a chapter called Life Inside a Laptop, Duffy gives a potted history of AI, specifically in relation to the Turing test, which challenges someone to distinguish an AI from a human. He also explains the 3 levels of processing that were used to create the android’s ‘brain’. The first level was what Olney called ‘canned’ answers, which were pre-recorded answers to obvious questions and interactions, like ‘Hi’, ‘What’s your name?’, ‘What are you?’ and so on. Another level was ‘Latent Semantic Analysis’ (LSA), which was originally developed in a lab in Colorado, with close ties to Graesser’s lab in Memphis, and was the basis of Graesser’s pet project, ‘AutoTutor’, with Olney as its ‘chief programmer’. AutoTutor was an AI designed to answer technical questions as a ‘tutor’ for students in subjects like physics.
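As an aside, and purely my own illustration rather than the project’s actual code, the layered approach Duffy describes amounts to a simple fallback scheme: check the ‘canned’ answers first, and only search the dialogue database if nothing matches. The canned questions below come from the book; the responses, function names and default reply are invented for the sketch.

# A toy sketch (in Python) of the layered fallback described above - not the team's code.
CANNED = {
    "hi": "Hello.",                      # illustrative responses only
    "what's your name?": "I'm Phil.",
    "what are you?": "I'm an android.",
}

def search_dialogue_database(question):
    # Placeholder for the LSA-style retrieval over Dick's novels and interviews,
    # sketched a couple of paragraphs further down.
    return None

def answer(question):
    key = question.strip().lower()
    if key in CANNED:                          # Layer 1: pre-recorded answer
        return CANNED[key]
    fragment = search_dialogue_database(key)   # Layers 2 and 3: search the corpus
    return fragment or "Let me think about that."

In this scheme, all the interesting behaviour (including the runaway monologues described below) comes from the retrieval layers; the canned layer just keeps the obvious exchanges snappy.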

To create the Philip K Dick database, Olney downloaded all of Dick’s opus, plus a vast collection of transcribed interviews from later in his life. The author conjectures that ‘There is probably more dialogue in print of interviews with Philip K Dick than any other person, alive or dead.’

The third layer ‘broke the input (the interlocutor’s side of the dialogue) into sections and looked for fragments in the dialogue database that seemed relevant’ (to paraphrase Duffy). Duffy gives a cursory explanation of how LSA works – a mathematical matrix using vector algebra – which is probably a little too esoteric for the content of this post.
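For what it’s worth, here is a bare-bones sketch of the idea behind LSA-style retrieval - mine, not Duffy’s or Olney’s: build a word-by-fragment count matrix, reduce it with a singular value decomposition (the ‘vector algebra’ Duffy alludes to), then return the stored fragment whose vector lies closest to the question’s. The example fragments are invented stand-ins for the real database of Dick’s novels and interviews.

# A minimal LSA-flavoured retrieval sketch using numpy (illustrative only).
import numpy as np

fragments = [                     # stand-ins for the real dialogue database
    "reality is that which when you stop believing in it does not go away",
    "the android dreamed of electric sheep and of being human",
    "memory and identity can be manufactured and implanted",
]

vocab = sorted({w for f in fragments for w in f.split()})
# Word-by-fragment count matrix.
A = np.array([[f.split().count(w) for f in fragments] for w in vocab], dtype=float)

# Singular value decomposition; keep the top k dimensions as the 'semantic space'.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk = U[:, :k]
doc_vecs = A.T @ Uk               # each fragment as a point in the reduced space

def respond(question):
    """Return the stored fragment closest to the question in the reduced space."""
    q = np.array([question.split().count(w) for w in vocab], dtype=float)
    q_vec = q @ Uk
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9
    )
    return fragments[int(np.argmax(sims))]

print(respond("do androids dream of electric sheep"))   # should return the 'electric sheep' fragment

The real system did far more than this, of course, including the ‘synthesise’ step that stitched fragments together, which is where the runaway loops described next came from.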

In practice, this search-and-synthesise approach could create a self-referencing loop, where the android would endlessly riff on a subject, going off on tangents that sounded cogent but never stopped. To overcome this, Olney developed a ‘kill switch’ that removed the ‘buffer’ he could see building up on his laptop. At one display at Comic-Con (July 2005), as part of the promotion for A Scanner Darkly (a rotoscope movie by Richard Linklater, starring Keanu Reeves), Hanson had to present the android without Olney, and he couldn’t get the kill switch to work, so he stopped the audio, with the mouth still working, and asked for the next question. The android simply continued with its monolithic monologue, which had no relevance to any question at all. I think it was its last public appearance before it was lost. Dick’s daughters, Laura and Isa, were in the audience and they were not impressed.

It’s a very informative and insightful book, presented like a documentary without video, capturing a very quirky, unique and intellectually curious project. There is a lot of discussion about whether we can produce an AI that can truly mimic human intelligence. For me, the pertinent word in that phrase is ‘mimic’, because I believe that’s the best we can do, as opposed to having an AI that actually ‘thinks’ like a human. 

In many parts of the book, Duffy compares what Graesser’s team is trying to do with LSA with how we learn language as children, where we create a memory store of words, phrases and stock responses, based on our interaction with others and the world at large. It’s a personal prejudice of mine, but I think that words and phrases have a ‘meaning’ to us that an AI can never capture.

I’ve contended before that language for humans is like ‘software’, in that it is ‘downloaded’ from generation to generation. I believe this is unique to the human species, and it goes further than communication, which is its obvious genesis. It’s what we literally think in. The human brain can connect and manipulate concepts in all sorts of contexts that go far beyond the simple need to tell someone what you want them to do in a given situation, or to ask what they did with their time the day before or last year or whenever. We can relate concepts that have a spiritual connection, or are mathematical, or are stories. In other words, we can converse on topics that relate not just to physical objects, but are products of pure imagination.

Any android follows a set of algorithms that are designed to respond to human-generated dialogue, but, despite appearances, the android has no idea what it’s talking about. Some of the sample dialogue that Duffy presented in his book drifted into gibberish as far as I could tell, and that didn’t surprise me.

I’ve explored the idea of a very advanced AI in my own fiction, where ‘he’ became a prominent character in the narrative. But he (yes, I gave him a gender) was often restrained by rules. He can converse on virtually any topic because he has a Google-like database and he makes logical sense of someone’s vocalisations. If they are not logical, he’s quick to point it out. I play cognitive games with him and his main interlocutor because they have a symbiotic relationship. They spend so much time together that they develop a psychological interdependence that’s central to the narrative. It’s fiction, but even in my fiction I see a subtle difference: he thinks and talks so well, he almost passes for human, but he is a piece of software that can make logical deductions based on inputs and past experiences. Of course, we do that as well, and we do it so well it separates us from other species. But we also have empathy, not only with other humans, but other species. Even in my fiction, the AI doesn’t display empathy, though he’s been programmed to be ‘loyal’.

Duffy also talks about the ‘uncanny valley’, which I’ve discussed before. Apparently, Hanson believed it was a ‘myth’ and that there was no scientific data to support it. Duffy appears to agree. But according to a New Scientist article I read in Jan 2013 (by Joe Kloc, a New York correspondent), MRI studies tell another story. Neuroscientists believe the symptom is real and is caused by a cognitive dissonance between 3 types of empathy: cognitive, motor and emotional. Apparently, it’s emotional empathy that breaks the spell of suspended disbelief.

Hanson claims that he never saw evidence of the ‘uncanny valley’ with any of his androids. On YouTube you can watch a celebrity android called Sophia, and I didn’t see any evidence of the phenomenon with her either. But I think the reason is that none of these androids appear human enough to evoke the response. The uncanny valley is a sense of unease and disassociation we would feel because it’s unnatural; similar to seeing a ghost - a human in all respects except actually being flesh and blood.

I expect that, as androids like the Philip K Dick simulation and Sophia become more commonplace, the sense of ‘unnaturalness’ will dissipate - a natural consequence of habituation. Androids in movies don’t have this effect, but then a story is already a medium of suspended disbelief.