This is so obvious once I explain it that you’ll wonder why no one else ever mentions it. I’ve pointed out a number of times before that consciousness exists in a constant present, so the time is always ‘now’ for us. I credit Erwin Schrödinger with providing this insight in his lectures, Mind and Matter, appended to his short tome (an oxymoron), What is Life?
A logical consequence is that, without memory, you wouldn’t know you’re conscious. And this has actually happened: people have been knocked unconscious, then acted as if they were conscious in order to defend themselves, yet have no memory of it. It happened to my father in a boxing ring (I didn’t believe him when he first told me) and it happened to a security guard in Sydney, who shot her assailant after he knocked her out. In both cases, they claimed they had no memory of the incident.
And, as I’ve pointed out before, this raises a question: if we can survive an attack without being consciously aware of it, then why did evolution select for consciousness? In other words, we could be automatons. The difference is that we have memory.
The brain is effectively a memory storage device, without which we would function quite differently. Perhaps this is the real difference between animals and plants. Perhaps plants are sentient, but without memories they can’t ‘think’. There are different types of memory. There is so-called muscle-memory, whereby when we learn a new skill we don’t have to keep relearning it, and eventually we do it without really thinking about it. Driving a car is an example that most of us are familiar with, but it applies to most sports and the playing of musical instruments. I’ve learned that this applies to cognitive skills as well. For example, I write stories and creating characters is something I do without thinking about it too much.
People who suffer from retrograde amnesia (as described by Oliver Sacks in his seminal book, The Man Who Mistook His Wife for a Hat, in the chapter titled, The Lost Mariner) don’t lose their memory of specific skills, or what we call muscle-memory. So you could have muscle-memory and still be an automaton, as I described above.
Other types of memory are semantic memory and episodic memory. Semantic memory, which is essential to learning a language, is basically our ability to remember facts, which may or may not require a specific context. Rote learning is just exercising semantic memory, which doesn’t necessarily require a deep understanding of a subject, but that’s another topic.
Episodic memory is the one I’m most concerned with here. It’s the ability to recount an event in one’s life – a form of time-travelling we all indulge in from time to time. Unlike computer memory, it’s not an exact recollection – we reconstruct it – which is why it can change over time and why it doesn’t necessarily agree with someone else’s recollection of the same event. Then there is imagination, which I believe is the key to it all. Apparently, imagination uses the same part of the brain as episodic memory. In effect, we are creating a memory of something that is yet to happen – an attempt to time-travel into the future. And this, I argue, is how free will works.
Philosophers have invented a term called ‘intentionality’, which is not what you might think it is. I’ll give a dictionary definition:
The quality of mental states (e.g. thoughts, beliefs, desires, hopes) which consists in their being directed towards some object or state of affairs.
Philosophers who write on the topic of consciousness, like Daniel C Dennett and John Searle, like to use the term ‘aboutness’ to describe intentionality, and if you break down the definition I gave above, you might discern what they mean. It’s effectively the ability to direct ‘thoughts… towards some object or state of affairs’. But I see this as either episodic memory or imagination. In other words, the ‘object or state of affairs’ could be historical or yet to happen or pure fantasy. We can imagine events we’ve never experienced, though we may have read or heard about them, and they may not only have happened in another time but also another place – so mental time-travelling.
As well as being a memory storage device, the brain is also a prediction device – it literally thinks a fraction of a second ahead. I’ve pointed out in another post that the brain creates a model in space and time so we can interact with the real world of space and time, which allows us to survive it. And one facet of that model is that it actually runs minutely ahead of the real world, otherwise we wouldn’t even be able to catch a ball. In other words, it makes predictions that our life depends on. But I contend that this doesn’t need episodic memory or imagination either, because it happens subconsciously and is part of our automaton brain.
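The ball-catching point can be made concrete with a toy sketch. This is not a model of any neural mechanism – the linear extrapolation and the 100-millisecond latency figure are purely illustrative assumptions – but it shows why a controller with a processing delay must act on a predicted future position rather than the last observed one:

```python
# Toy illustration: a delayed observer must aim at where a moving
# target WILL be, not where it was last seen.

def predict_position(pos: float, velocity: float, latency: float) -> float:
    """Linear extrapolation: estimated position after `latency` seconds."""
    return pos + velocity * latency

# Assumed numbers for illustration: a ball observed at 10.0 m,
# moving at 8.0 m/s, with ~0.1 s of sensing/processing delay.
observed = 10.0
velocity = 8.0
latency = 0.1

aim_at = predict_position(observed, velocity, latency)
# aim_at is roughly 10.8 m: by the time the observation is processed,
# the ball has moved on, so the hand must be sent ahead of it.
```

Any system with a sensing delay faces the same constraint, which is the sense in which the brain’s model has to run slightly ahead of the world.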
My point is that the automaton brain, as I’ve coined it, could have evolved by natural selection, without memory. The major difference memory makes is that we become self-aware, and it gives consciousness a role it would otherwise not possess. And that role is what we call free will. I like a definition that philosopher and neuroscientist, Raymond Tallis, gave:
Free agents, then, are free because they select between imagined possibilities, and use actualities to bring about one rather than another.
So, as I said earlier, I think imagination is key. Free will requires imagination, which I argue is what’s called ‘aboutness’ or ‘intentionality’ in philosophical jargon (though others may differ). And imagination requires episodic memory or mental time-travelling, without which we would all be automatons – still able to interact with the real world of space and time and to acquire the skills necessary for survival.
And if one goes back to the very beginning of this essay, it is all premised on the observed and experiential phenomenon that consciousness exists in a constant present. We take this for granted, yet nothing else does. Everything becomes the past as soon as it happens, which, as I keep repeating, is demonstrated every time someone takes a photo. The only exception I can think of is a photon of light, for which time is zero. Our very thoughts become memory as soon as we think them, otherwise we wouldn’t know we exist, yet we could apparently survive without it.
Just today, I read a review in New Scientist (27 April 2024) of a book, The Elephant and the Blind: The experience of pure consciousness – philosophy, science and 500+ experiential reports by Thomas Metzinger. Apparently, Metzinger did an ‘online survey of meditators from 57 countries providing over 500 reports for the book.’ Basically, he argues that one can achieve a state that he calls ‘pure consciousness’ whereby the practitioner loses all sense of self. In effect, he argues (according to the reviewer, Alun Anderson):
That a first-person perspective isn’t necessary for consciousness at all: your sense of self, of a continuous “you”, is part of the content of consciousness, not consciousness itself.
A provocative and contentious perspective, yet it reminds me of studies, also reported in New Scientist many years ago, using brain-scan imagery, of people experiencing ‘God’ also having a sense of being ‘self-less’, if I can use that term. Personally, I think consciousness is something fundamental, with a possibly independent existence from anything physical. It has a physical manifestation, if you like, purely because of memory, because our brains are effectively a storage device for consciousness.
This is a radical idea, but it is one I woke up with one day as if it were an epiphany, and I realised it was quite a departure from what I normally think. Raymond Tallis, whom I’ve already mentioned, once made the claim that science can only study objects and phenomena that can be measured. I claim that consciousness can’t be measured, but because we can measure brain waves and neuron activity, many people argue that we are measuring consciousness.
But here’s the thing: if we didn’t experience consciousness, then scientists would tell us it doesn’t exist in the same way they tell us that free will doesn’t exist. I can make this claim because the same scientists argue that eventually AI will exhibit consciousness while simultaneously telling us that we will know this from the way the AI behaves, not because anyone will be measuring anything.
Addendum: I came across this related video by self-described philosopher-physicist, Avshalom Elitzur, who takes a subtly different approach to the same issue, giving examples from the animal kingdom. Towards the end, he talks about specific ‘isms’ (e.g. physicalism and dualism), but he doesn’t mention the one I’m an advocate of, which is a ‘loop’ – that matter interacts with consciousness, via neurons, and then consciousness interacts with matter, which is necessary for free will.
Basically, he argues that consciousness interacting with matter breaks conservation laws (watch the video), but the brain consumes energy whether it’s doing a maths calculation, running around an oval or lying asleep. Running around an oval is arguably consciousness interacting with matter – the same for an animal chasing prey – because one assumes both are based on a conscious decision, which is based on an imagined future, as per my thesis above. Also, processing information uses energy, which is why computers get hot, with no consciousness required. I fail to see what the difference is.