Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). 2 Reviews: here. Also this promotional Q&A on-line.

Friday, 15 February 2019

3 rules for humans

A very odd post. I joined a Science Fiction group on Facebook, which has its moments. I sometimes comment, even have conversations, and most of the time manage to avoid conflicts. I’ve been a participant on the internet long enough to know when to shut up, or let something through to the keeper, as we say in Oz. In other words, avoid being baited. Most of the time I succeed, considering how opinionated I can be.

Someone asked the question: what would the equivalent 3 laws for humans be, analogous to Asimov’s 3 laws for robotics?

The 3 laws of robotics (without looking them up) are about avoiding harm to humans within certain constraints, and then avoiding harm to the robot itself. They’re hierarchical, with human safety at the top as the first law (from memory).

So I submitted an answer, which I can no longer find, so maybe someone took the post down. But it got me thinking, and I found that what I came up with was more like a manifesto than laws per se; so they're nothing like Asimov’s 3 laws for robotics.

In the end, my so-called laws aren't exactly what I submitted but they are succinct and logically consistent, with enough substance to elaborate upon.

1. Don’t try or pretend to be something you’re not

This is a direct attempt at what existentialists call ‘authenticity’, but it’s as plain as one can make it. I originally thought of something Socrates apparently said:

   To live with honour in this world, actually be what you try to seem to be.

And my Rule No. 1 (preferable to a law) is really another way of saying the same thing, only more direct, and it has a cultural origin as well. Growing up, ‘having tickets on yourself’, or ‘being up yourself’, to use some local colloquialisms, was considered the greatest sin. So I grew up with a disdain for pretentiousness that became ingrained. But there is more to it than that. I don’t believe in false modesty either.

There is a particular profession where being someone you’re not is an essential skill. I’m talking about acting. Spying also comes to mind, but the secret there, I believe, is to become invisible, which is the opposite of what James Bond does. That’s why John le Carré’s George Smiley seems more like the real thing than 007 does. Going undercover, by the way, is extremely stressful and potentially detrimental to your health – just ask anyone who’s done it.

But actors routinely become someone they’re not. Many years ago, I used to watch a TV programme called The Actor’s Studio, where well known actors were interviewed, and I have to say that many of them struck me with their authenticity, which seems like a contradiction. But an Australian actress, Kerry Armstrong, once pointed out that acting requires leaving your ego behind. It struck me that actors know better than anyone else what the difference is between being yourself and being someone else.

I’m not an actor but I create characters in fiction, and I’ve always believed the process is mentally the same. Someone once said that ‘acting requires you to say something as if you’ve just thought of it, and not everyone can do that.’ So it’s spontaneity that matters. Someone else once said that acting requires you to always be in the moment. Writing fiction, I would contend, requires the same attributes. Writing, at least for me, requires you to inhabit the character, and that’s why the dialogue feels spontaneous, because it is. But paradoxically, it also requires authenticity. The secret is to leave yourself out of it.

The Chinese hold modesty in high regard. The I Ching has a lot to say about modesty, but basically we all like and admire people who are what they appear to be, as Socrates himself said.

We all wear masks, but I think those rare people who seem most comfortable without a mask are those we intrinsically admire the most.

2. Honesty starts with honesty to yourself

It’s not hard to see that this is directly related to Rule 1. The truth is that we can’t be honest to others if we are not honest to ourselves. It should be no surprise that sociopathic narcissists are also serial liars. Narcissists, from my experience, and from what I’ve read, create a ‘reality distortion field’ that is often at odds with everyone else except for their most loyal followers.

There is an argument that this should be Rule 1. They are obviously interdependent. But Rule 1 seems to be the logical starting point for me. Rule 2 is a consequence of Rule 1 rather than the other way round.

Hugh Mackay made the observation in his book, Right & Wrong: How to Decide for Yourself, that ‘The most damaging lies are the ones we tell ourselves’. From this, neurosis is born and many of the ills that beleaguer us. Self-honesty can be much harder than we think. Obviously, if we are deceiving ourselves, then, by definition, we are unaware of it. But the real objective of self-honesty is so we can have social intercourse with others and all that entails.

So you can see there is a hierarchy in my rules. It goes from how we perceive ourselves to how others perceive us, and logically to how we interact with them.

But before leaving Rule 2, I would like to mention a movie I saw a few years back called Ali’s Wedding, which was an Australian Muslim rom-com. Yes, it sounds like an oxymoron, but it was a really good film, partly because it was based on real events experienced by the filmmaker. The music by Nigel Westlake was so good, I bought the soundtrack. Its relevance to this discussion is that the movie opens with a quote from the Quran about lying. It effectively says that lies have a habit of snowballing; so you dig yourself deeper the further you go. It’s the premise upon which the entire film is based.

3. Assume all humans have the same rights as you

This is so fundamental, it could be Rule 1, but I would argue that you can’t put this into practice without Rules 1 and 2. It’s the opposite to narcissism, which is what Rules 1 and 2 are attempting to counter.

One can see that a direct consequence is Confucius’s dictum: ‘Don’t do to others what you wouldn’t want done to yourself’; better known in the West as the Golden Rule: ‘Do unto others as you would have others do unto you’; attributed to Jesus, of course.

It’s also the premise behind the United Nations’ Universal Declaration of Human Rights. All these rules are actually hard to live by, and I include myself in that broad statement.

A couple of years back, when I wrote a post in response to the question, Is morality objective?, I effectively argued that Rule No. 3 is the only objective morality.

Friday, 8 February 2019

Some people might be offended by this

I read an article recently in The New Yorker (Issue Jan. 21, 2019) by Vinson Cunningham called The Bad Place: How the idea of Hell has shaped the way we think. I think it was meant to be a review of a book called, aptly enough, The Penguin Book of Hell, edited by Scott G. Bruce, but Cunningham’s discussion meanders widely, including Dante’s Inferno and Homer’s Odyssey, as well as his own Christian upbringing in a Harlem church.

I was reminded of the cultural difference between America and Australia when it comes to religion – a difference I was very aware of when I lived and worked in America over a combined period of 9 months, including New Jersey, Texas and California.

It’s hard to imagine any mainstream magazine or newspaper having this discussion in Australia, or, if they did, it would be more academic. I was in the US post 9/11 – in fact, I landed in New York the night before. I remember reading an editorial in a newspaper where people were arguing about whether the victims of the attack would go to heaven or not. I thought: how ridiculous. In the end, someone quoted from the Bible, as if that resolved all arguments – even more ridiculous, from my perspective.

I remember reading in an altogether different context someone criticising a doctor for facilitating prayer meetings in a Jewish hospital because the people weren’t praying to Jesus, so their prayers would be ineffective. This was a cultural shock to me. No one discussed these issues or had these arguments in Australian media. At least, not in mainstream media, be it conservative or liberal.

Reading Cunningham’s article reminded me of all this because he talks about how real hell is for many people. To be fair, he also talks about how hell has been sidelined in secular societies. In Australia, people don’t discuss their religious views that much, so one can’t be sure what people really believe. But I was part of a generation that all but rejected institutionalised religion. I’ve met many people from succeeding generations who have no knowledge of biblical stories, whereas for me, it was simply part of one’s education.

One of the best ‘modern’ examples of hell or the underworld I found was in Neil Gaiman’s Sandman graphic novel series. It’s arguably the best graphic novel series written by anyone, though I’m sure aficionados of the medium may beg to differ. Gaiman borrowed freely from a range of mythologies, including Orpheus, the Bible (in particular the story of Cain and Abel) and even Shakespeare. His hero has to go to Hell and gets out by answering a riddle from its caretaker, the details of which I’ve long forgotten, but I remember thinking it to be one of those gems that writers of fiction (like me) envy. 

Gaiman also co-wrote a book with Terry Pratchett called Good Omens: The Nice and Accurate Prophecies of Agnes Nutter (1990) which is a great deal of fun. The premise, as described in Wikipedia: ‘The book is a comedy about the birth of the son of Satan, the coming of the End Times.’ Both authors are English, which possibly allows them a sense of irreverence that many Americans would find hard to manage. I might be wrong, but it seems to me that Americans take their religion way more seriously than the rest of us English-speaking nations, and this is reflected in their media.

And this brings me back to Cunningham’s article because it’s written in a cultural context that I simply don’t share. And I feel that’s the crux of this issue. Religion and all its mental constructs are cultural, and hell is nothing if not a mental construct.

My own father, whom I’ve written about before, witnessed hell first hand. He was in the Field Ambulance Corps in WW2, so he retrieved bodies in various states of beyond-repair from both sides of the conflict. He also spent 2.5 years as a POW in Germany. I bring this up because, when I was a teenager, he told me why he didn’t believe in the biblical hell. He said, in effect, he couldn’t believe in a ‘father’ who sent his children to everlasting torment. I immediately saw the sense in his argument and I rejected the biblical god from that day on. This is the same man, I should point out, who believed it was his duty that I should have a Christian education. I thank him for that, otherwise I’d know nothing about it. When I was young I believed everything I was taught, which perversely made it easier to reject when I started questioning things. I know many people who had the same experience. The more they believed, the stronger their rejection.

I recently watched an excellent 3-part series, available on YouTube, called Testing God, which is really a discussion about science and religion. It was made by the UK’s Channel 4 in 2001, and includes some celebrity names in science, like Roger Penrose, Paul Davies and Richard Dawkins, as well as theologians; in particular, theologians who had become, or had been, scientists.

In the last episode they interviewed someone who suffered horrendously in the War – he was German, and a victim of the fire-storm bombing. Contrary to many who have had similar experiences he found God, whereas, before, he’d been an atheist. But his idea of God is of someone who is patiently waiting for us.

I’ve long argued that God is subjective not objective. If humans are the only connection between the Universe and God, then, without humans, there is no reason for God to exist. There is no doubt in my mind that God is a projection, otherwise there wouldn’t be so many variants. Xenophanes, who lived in the 5th century BC, famously said:

The Ethiops say that their gods are flat-nosed and black,
While the Thracians say that theirs have blue eyes and red hair.
Yet if cattle or horses or lions had hands and could draw,
And could sculpt like men, then the horses would draw their gods
Like horses, and cattle like cattle; and each they would shape
Bodies of gods in the likeness, each kind, of their own.

At the risk of offending people even further, the idea that the God one finds in oneself is the Creator of the Universe is a non sequitur. My point is that there are two concepts of God which are commonly conflated. God as a Creator and God as a mystic experience, and there is no reason to believe that they are one and the same. In fact, the God as experience is unique to the person who has it, whilst God as Creator is, by definition, outside of space and time. One does not logically follow from the other.

In another YouTube video altogether, I watched an interview with Freeman Dyson on science and religion. He argues that they are quite separate and there is only conflict when people try to adapt religion to science or science to religion. In fact, he is critical of Einstein because Dyson believes that Einstein made science a religion. Einstein was influenced by Spinoza and would have argued, I believe, that the laws of physics are God.

John Barrow, in one of his books (Pi in the Sky), half-seriously suggests that the traditional God could be replaced by mathematics.

This brings me to a joke, which I’ve told elsewhere, but is appropriate, given the context.

What is the difference between a physicist and a mathematician?
A physicist studies the laws that God chose for the Universe to obey.
A mathematician studies the laws that God has to obey.


Einstein, in a letter to a friend, once asked the rhetorical question: Do you think God had a choice in creating the laws of the Universe?

I expect that’s unanswerable, but I would argue that if God created mathematics he had no choice. It’s not difficult to see that God can’t make a prime number non-prime, nor can he change the value of pi. To put it more succinctly, God can’t exist without mathematics, but mathematics can exist without God.

In light of this, I expect Freeman Dyson would accuse me of the same philosophical faux pas as Einstein.

As for hell, it’s a cultural artefact, a mental construct devised to manipulate people on a political scale. An anachronism at best and a perverse psychological contrivance at worst.

Thursday, 24 January 2019

Understanding Einstein’s special theory of relativity

In imagining a Sci-Fi scenario, I found a simple way of describing, if not explaining, Einstein’s special theory of relativity.

Imagine if a flight to the moon was no different to flying halfway round the world in a contemporary airliner. In my scenario, the ‘shuttle’ would use an anti-gravity drive that allows high accelerations without killing its occupants with inertial forces. In other words, it would accelerate at hyper-speeds without anyone feeling it. I even imagined this when I was in high school, believe it or not.

The craft would still not be able to break the speed of light but it would travel fast enough that relativistic effects would be observable, both by the occupants and anyone remaining on the Earth or at its destination, the Moon.

So what are those relativistic effects? There is a very simple equation for velocity, and this is the only equation I will use to supplement my description.

v = s/t

Where v is the velocity, s is the distance travelled and t is the time, or duration, it takes. You can’t get much simpler than that. Note that v is directly proportional to s but inversely proportional to t: if s gets larger, v increases, but if t gets larger, v decreases.

But it also means that for v to remain constant, if s gets smaller then so must t.

For the occupants of the shuttle, getting to the moon in such a short time means that, for them, the distance has shrunk. It normally takes about 3 days to get to the Moon (using current technology), so let’s say we manage it in 10 hours instead. I haven’t done the calculations, because it depends on what speeds are attained, and I’m trying to provide a qualitative, even intuitive, explanation rather than a technical one. The point is that if the occupants measured the distance using some sort of range finder, they’d find it was measurably less than if they did it using a range finder on Earth or on the Moon. It also means that whatever clocks they were carrying (including their own heartbeats) would show that the duration was less, completely consistent with the equation above.
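For the curious, the calculation can be sketched in a few lines of Python. This is my own back-of-the-envelope version, not from the post: I’ve assumed the 10-hour trip happens at a constant cruise speed over the average Earth–Moon distance of 384,400 km, ignoring acceleration phases.

```python
import math

C = 299_792_458.0        # speed of light (m/s)
EARTH_MOON = 384_400e3   # assumed average Earth-Moon distance (m)

def lorentz_gamma(v):
    """Lorentz factor for speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

t_earth = 10 * 3600.0            # duration by Earth/Moon clocks (s)
v = EARTH_MOON / t_earth         # v = s/t, the only equation we need
gamma = lorentz_gamma(v)

s_shuttle = EARTH_MOON / gamma   # contracted distance for the occupants
t_shuttle = t_earth / gamma      # elapsed time on the occupants' clocks

print(f"cruise speed: {v/1000:.1f} km/s ({v/C:.2e} c)")
print(f"occupants' clocks lag by {t_earth - t_shuttle:.2e} s")
```

At roughly 10.7 km/s the effect is real but tiny – the occupants’ clocks lag by a few tens of microseconds – which is why the post’s qualitative description matters more than the numbers. Note that v = s/t still holds in both frames: the shorter distance and the shorter duration shrink by the same factor.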

For the people on the Moon awaiting their arrival, or those on Earth left behind, the duration would be consistent with the distance they would measure independently of the craft, which means the distance would be whatever it always was (allowing for small variances created by any elliptic eccentricity in the Moon’s orbit). That means they would expect the occupants’ clocks to agree with theirs. So when they see the discrepancy in the clocks, it can only mean that time elapsed more slowly for the shuttle occupants compared to the Moon’s inhabitants.

Now, many of you reading this will see a conundrum, if not a flaw, in my description. Einstein’s special theory of relativity implies that, for the occupants of the shuttle, the clocks of the Moon and Earth inhabitants should also have slowed down; but when they disembark, they notice that they haven’t. That’s because there is an asymmetry inherent in this scenario. The shuttle occupants had to accelerate and decelerate to make the journey, whereas the so-called stationary observers didn’t. This is the same asymmetry as in the famous twin paradox.

Note that from the shuttle occupants’ perspective, the distance is shorter than the moon and Earth inhabitants’ measurements; therefore so is the time. But from the perspective of the moon and Earth inhabitants, the distance is unchanged but the time duration has shortened for the shuttle occupants compared to their own timekeeping. And that is special relativity theory in a nutshell.


Footnote: If you watch videos explaining the twin paradox, they emphasise that it’s not the acceleration that makes the difference (because it’s not part of the Lorentz transformation). But the acceleration and deceleration are what create the asymmetry: one party ‘moved’ with respect to another that was ‘stationary’. In the scenario above, the entire solar system doesn’t accelerate and decelerate with respect to the shuttle, which would be absurd.

Addendum 1: Here is an attempted explanation of Einstein’s general theory of relativity, which is slightly more esoteric.

Addendum 2: I’ve done a rough calculation and the differences would be negligible, but if I changed the destination to Mars, the difference in distances would be in the order of 70,000 kilometres, but the time difference would be only in the order of 10 seconds. You could, of course, make the journey closer to lightspeed so the effects are more obvious.
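Figures of the same order as the addendum’s can be reproduced with a short sketch. The inputs are my assumptions, not the post’s: a typical Earth–Mars distance of 225 million km (it varies enormously) and a cruise speed of 0.025c, chosen because it yields differences of roughly the stated magnitude.

```python
import math

C = 299_792_458.0   # speed of light (m/s)
MARS = 225e9        # assumed Earth-Mars distance (m); varies widely in reality
v = 0.025 * C       # assumed cruise speed

gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
t_planet = MARS / v                   # trip time by planet-bound clocks (s)
delta_s = MARS * (1 - 1 / gamma)      # distance "lost" for the occupants (m)
delta_t = t_planet * (1 - 1 / gamma)  # time "lost" on the occupants' clocks (s)

print(f"trip time:           {t_planet/3600:.1f} h")
print(f"distance difference: {delta_s/1000:,.0f} km")
print(f"time difference:     {delta_t:.1f} s")
```

At these assumed values the trip takes about 8 hours, the contracted distance differs by roughly 70,000 km, and the clocks differ by roughly 10 seconds, consistent with the addendum.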

Addendum 3: I’ve read the chapter on the twin paradox in Jim Al-Khalili’s book, Paradox: The Nine Greatest Enigmas in Physics. He points out that during the Apollo missions to the moon, the astronauts actually aged more (by nanoseconds), because the time gained by leaving Earth’s gravity was greater than any special relativistic effects experienced over the week-long return trip. Al-Khalili also explains that the twin who makes the journey endures less time because the distance is shorter for them (as I expounded above). But, contrary to the YouTube lectures that I viewed, he claims that it’s the acceleration and deceleration, creating general relativistic effects, that creates the asymmetry.


Saturday, 12 January 2019

Are natural laws reality?

In a not-so-recent post, I mentioned a letter I wrote to Philosophy Now challenging a throwaway remark by Raymond Tallis in his regular column called Tallis in Wonderland. As I said in that post, I have a lot of respect for Tallis but we disagree on what physics means. Like a lot of 20th Century philosophers, he challenges the very idea of mathematically determined natural laws. George Lakoff (a linguist) is another who comes to mind, though I’m reluctant to put Tallis and Lakoff in the same philosophical box. I expect Tallis has more respect for science, and philosophy in general, than Lakoff has. But both of them, I believe, would see our ‘discovery’ of natural laws as ‘projections’ and their mathematical representation as metaphorical.

There is an aspect of this that would seem to support their point of view, and that’s the fact that our discoveries are never complete. We can always find circumstances where the laws don’t apply or new laws are required. The most obvious examples are Einstein’s general theory of relativity replacing Newton’s universal theory of gravity, and quantum mechanics replacing Newtonian mechanics.

I’ve discussed these before, but I’ll repeat myself because it’s important to understand why and how these differences arise. One of the conditions that Einstein set himself when he created his new theory of gravity was that it reduced to Newton’s theory when relativistic effects were negligible. This feat is quite astounding when one considers that the mathematics, involved in both theories, appear, on the surface, to have little in common.

In respect to quantum mechanics, I contend that it is distinct from classical physics and the mathematics reflects that. I should point out that no one else agrees with this view (to my knowledge) except Freeman Dyson.

Newtonian mechanics has other limitations as well. In regard to predicting the orbits of the planets, it quickly becomes apparent that as one increases the number of bodies, prediction becomes impossible over longer periods of time, and this has nothing to do with relativity. As Jeremy Lent pointed out in The Patterning Instinct, Newtonian classical physics doesn’t really work for the real world, long term, and has been largely replaced by chaos theory. Newton’s, Einstein’s and Schrödinger’s equations are all ‘linear’, whereas nature appears to be persistently non-linear. This means that the Universe is unpredictable, and I’ve discussed this in some detail elsewhere.

Nature obeys different rules at different levels. The curious thing is that we always believe that we’ve just about discovered everything there is to know, then we discover a whole new layer of reality. The Universe is worlds within worlds. Our comprehension of those worlds is largely dependent on our knowledge of mathematics.

Some people (like Gregory Chaitin and Stephen Wolfram) even think that there is something akin to computer code underpinning the entire Universe, but I don’t. Computers can’t deal with chaotic non-linear phenomena because one needs to calculate to infinity to get the initial conditions that determine the phenomenon’s ultimate fate. That’s why even the locations of the solar system’s planets are not mathematically guaranteed.

Below is a draft of the letter I wrote to Philosophy Now in response to Raymond Tallis’s scepticism about natural laws. It’s not the one I sent.


Quantities actually exist in the real world, in nature, and they come in specific ratios and relationships to each other; hence the 'natural laws'. They are not fictions, we did not make them up, they are not products of our imaginations.

Having said that, the wave function in quantum mechanics is a product of Schrodinger's imagination, and some people argue that it is a fiction. Nevertheless, it forms the basis of QED (quantum electrodynamics), which is the most successful empirically verified scientific theory to date, so it may actually be real; it's debatable. Einstein's field equations, based on tensors, are also arguably a product of his imagination, but, by Einstein's own admission, the mathematics determined his conclusion that space-time is curved, not the other way round. Also, his famous equation, E = mc², is mathematically derived from his special theory of relativity and was later confirmed by experimental evidence. So sometimes, in physics, the map is discovered before the terrain.

The last line is a direct reference to Tallis’s own throwaway line that mathematical physicists tend to ‘confuse the map for the terrain’.

Saturday, 5 January 2019

What makes humans unique

Now everyone pretty well agrees that there is not one single thing that makes humans unique in the animal kingdom, but most people would agree that our cognitive abilities leave the most intelligent and social of species in our wake. I say ‘most’ because there are some, possibly many, who argue that humans are not as special as we like to think and there is really nothing we can do that other species can’t do. They would point out that other species, if not all advanced species, have language, and many produce art to attract a mate and build structures (like ants and beavers) and some even use tools (like apes and crows).

However, I find it hard to imagine that other species can think and conceptualise in a language the way we do, or even communicate complex thoughts and intentions using oral utterances alone. To give other examples, I know of no other species that tells stories, keeps track of days by inventing a calendar based on heavenly constellations (like the Mayans) or even thinks about thinking. And as far as I know, we are the only species that literally invents a complex language that we teach our children (it’s not inherited) so that we can extend memories across generations. Even cultures without written scripts can do this, using songs and dances and art. As someone said (John Hands in Cosmosapiens), we are the only species ‘who know that we know’. Or, as I said above, we are the only species that ‘thinks about thinking’.

Someone once pointed out to me that the only thing that separates us from all other species is the accumulation of knowledge, resulting in what we call civilization. He contended that over hundreds, even thousands of years, this had resulted in a huge gap between us and every other sentient creature on the planet. I pointed out to him that this only happened because we had invented the written word, based on languages, that allowed us to transfer memories across generations. Other species can teach their young certain skills, that may not be genetically inherited, but none can accumulate knowledge over hundreds of generations like we can. His very point demonstrated the difference he was trying to deny.

In a not-so-recent post, I delineated my philosophical ruminations into 23 succinct paragraphs, covering everything from science and mathematics to language, morality and religion. My 16th point said:



Humans have the unique ability to nest concepts within concepts ad-infinitum, which mirror the physical world.

In another post from 2012, in answer to a Question of the Month in Philosophy Now – How does language work? – I made the same point. (This is the only submission to Philosophy Now, out of 8 thus far, that didn’t get published.)

I attributed the above ‘philosophical point’ to Douglas Hofstadter, because he says something similar in his Pulitzer Prize winning book, Gödel, Escher, Bach, but in reality I had reached this conclusion before reading it.

It’s my contention that it is this ability that separates us from other species and that has allowed all the intellectual endeavours we associate with humanity, including stories, music, art, architecture, mathematics, science and engineering.

I will illustrate with an example that we are all familiar with, yet many of us struggle to pursue at an advanced level. I’m talking about mathematics, and I choose it because I believe it also explains why many of us fail to achieve the degree of proficiency we might prefer.

With mathematics we learn modules which we then use as a subroutine in a larger calculation. To give a very esoteric example, Einstein’s general theory of relativity requires at least 4 modules: calculus, vectors, matrices and the Lorentz transformation. These all combine in a metric tensor that becomes the basis of his field equations. The thing is, if you don’t know how to deal with any one of these, you obviously can’t derive his field equations. But the point is that the human brain can turn all these ‘modules’ into black boxes and then the black boxes can be manipulated at another level.

It’s not hard to see that we do this with everything, including writing an essay like I’m doing now. I raise a number of ideas and then try to combine them into a coherent thesis. The ‘atoms’ are individual words but no one tries to comprehend it at that level. Instead they think in terms of the ideas that I’ve expressed in words.

We do the same with a story, which becomes like a surrogate life for the time that we are under its spell. I’ve pointed out in other posts that we only learn something new when we integrate it into what we already know. And, with a story, we are continually integrating new information into existing information. Without this unique cognitive skill, stories wouldn’t work.

But more relevant to the current topic, the medium for a story is not words but the reader’s imagination. In a movie, we short-circuit the process, which is why they are so popular.

Because a story works at the level of imagination, it’s like a dream in that it evokes images and emotions that can feel real. One could imagine that a dog or a cat could experience emotions if we gave them a virtual reality experience, but a human story has the same level of complexity that we find in everyday life and which we express in a language. The simple fact that we can use language alone to conjure up a world with characters, along with a plot that can be followed, gives some indication of how powerful language is for the human species.

In a post I wrote on storytelling back in 2012, I referenced a book by Kiwi academic, Brian Boyd, who points out that pretend play, which we all do as children (though I suspect it’s now more likely done using a videogame console) gives us cognitive skills and is the precursor to both telling and experiencing stories. The success of streaming services indicates how stories are an essential part of the human experience.

While it’s self-evident that both mathematics and storytelling are two human endeavours that no other species can do (even at a rudimentary level) it’s hard to see how they are related.

People who are involved in computer programming or writing code are aware of the value, even necessity, of subroutines. Our own brain does this when we learn to do something without having to think about it, like walking. But we can do the same thing with more complex tasks, like driving a car or playing a musical instrument. The key point here is that they are all ‘motor tasks’, and we call the result ‘muscle memory’, as distinct from cognitive tasks. However, I expect it relates to cognitive tasks as well. For example, every time you say something, it’s as if the sentence has been pre-formed in your brain. We use particular phrases all the time, which are analogous to ‘subroutines’.
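In code, the analogy is direct: once a routine is written and trusted, you call it without re-deriving its internals, just as you walk without thinking about each step. A minimal sketch of the idea (the function names are mine, purely illustrative):

```python
def stride():
    # a "muscle memory" routine: learned once, never re-derived
    return "step"

def walk(n):
    # built from strides, but we think at the level of walking, not stepping
    return [stride() for _ in range(n)]

def walk_and_talk(n, words):
    # the conscious task; walking happens "for free" as a black-boxed subroutine
    return {"gait": walk(n), "speech": words}

result = walk_and_talk(3, "as I was saying...")
print(result["gait"])   # ['step', 'step', 'step']
```

Each level only has to reason about the black boxes immediately below it, which is the same nesting of concepts within concepts described above.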

I should point out that this doesn’t mean that computers ‘think’, which is a whole other topic. I’m just relating how the brain delegates tasks so it can ‘think’ about more important things. If we had to concentrate every time we took a step, we would lose the train of thought of whatever it was we were engaged in at the time; a conversation being the most obvious example.

The mathematics example I gave is not dissimilar to the idea of a ‘subroutine’. In fact, one can incorporate mathematical ‘modules’ into software, so it’s more than an analogy. So with mathematics we’ve effectively achieved cognitively what the brain achieves with motor skills at the subconscious level. And look where it has got us: Einstein’s general theory of relativity, which is the basis of all current theories of the Universe.
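To make the point about mathematical ‘modules’ concrete (again, just a Python sketch of my own): a standard library like `math` packages centuries of worked-out mathematics, so a programmer builds on it without ever re-deriving it.

```python
import math

# Pythagoras, packaged: math.hypot computes sqrt(dx**2 + dy**2),
# a mathematical 'module' we reuse without re-deriving it.
def distance(x1, y1, x2, y2):
    """Straight-line distance between two points in the plane."""
    return math.hypot(x2 - x1, y2 - y1)

print(distance(0, 0, 3, 4))  # → 5.0
```

Nobody writing this function rediscovers Pythagoras’ theorem; the mathematics has been made into a building block, which is exactly the cognitive delegation I mean.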

We can also think of a story in terms of modules. These are the individual scenes, which join together to form an episode; episodes in turn combine to create an overarching narrative that we can follow even when it’s interrupted.

What mathematics and storytelling have in common is that they are both examples where the whole appears to be greater than the sum of its parts. Yet we know that in both cases, the whole is made up of the parts, because we ‘process’ the parts to get the whole. My point is that only humans are capable of this.

In both cases, we mentally build a structure that seems to have no limits. The same cognitive skill that allows us to follow a story in serial form also allows us to develop scientific theories. The brain breaks things down into components and then joins them back together to form a complex cognitive structure. Of course, we do this with physical objects as well, like when we manufacture a car or construct a building, or even a spacecraft. It’s called engineering.

Saturday, 22 December 2018

When real life overtakes fiction

I occasionally write science fiction; a genre I chose out of fundamental laziness. I knew I could write in that medium without having to do any research to speak of. I liked the idea of creating the entire edifice - world, story and characters - from my imagination with no constraints except the bounds of logic.

There are many subgenres of sci-fi: extraterrestrial exploration, alien encounters, time travel, robots & cyborgs, inter-galactic warfare, genetically engineered life-forms; but most SF stories, including mine, are a combination of some of these. Most sci-fi can be divided into 2 broad categories – space opera and speculative fiction, sometimes called hardcore SF. Space operas, exemplified by the Star Wars franchise, Star Trek and Dr Who, generally take more liberties with the science part of science fiction.

I would call my own fictional adventures science-fantasy, in the mould of Frank Herbert’s Dune series or Ursula K Le Guin’s fiction; though it has to be said, I don’t compete with them on any level.

I make no attempt to predict the future, even though the medium seems to demand it. Science fiction is a landscape that I use to explore ideas in the guise of a character-oriented story. I discovered, truly by accident, that I write stories about relationships. Not just relationships between lovers, but between mother and daughter, daughter and father(s), protagonist and nemesis, protagonist and machine.

One of the problems with writing science fiction is that the technology available today seems to overtake what one imagines. In my fiction no one uses a mobile phone. I can see a future where people can just talk to someone in the ether, because they can connect in their home or in their car, without a device per se. People can connect via a holographic form of Skype, which means they can have a meeting with someone in another location. We are already doing this, of course, and variations on this theme have been used in Star Wars and other space operas. But most of the interactions I describe are very old fashioned face-to-face, because that's still the best way to tell a story.

If you watch (or read) crime fiction, you’ll generally find it’s very suspenseful, with violence never far away. But if you analyse it, you’ll find it’s a long series of conversations, with occasional action and most of the violence occurring off-screen (or off-the-page). In other words, it’s more about personal interactions than you realise, and that’s what generally attracts you, probably without you even knowing it.

This is a longwinded introduction to explain why I’m really no better qualified to predict future societies than anyone else. I subscribe to New Scientist and The New Yorker, both of which give insights into the future by examining the present. In particular, I recently read an article in The New Yorker (Dec. 17, 2018) by David Owen about facial recognition, called Here’s Looking At You; the technology is already being used by police forces in America to target arrests, without any transparency. Mozilla (in a podcast last year) described how a man who had been misidentified twice was arrested and subsequently lost his job and career. I also read in last week’s New Scientist (15 Dec. 2018) how databases are being developed to know everything about a person, even what TV shows they watch and their internet use. It’s well known that China has a social credit system that determines what buildings you can access and what jobs you can apply for. China has the most surveillance cameras of any country in the world, and it intends to combine them with the latest facial recognition software.

Yuval Harari, in Homo Deus, talks about how algorithms are going to take over our lives, but I think he missed the mark. We are slowly becoming more Orwellian, with social media already influencing election results. In the same issue of New Scientist, journalist Chelsea Whyte asks: Is it time to unfriend the social network?, with specific reference to Facebook’s recently exposed track record. According to her: “Facebook’s motto was once ‘move fast and break things.’ Now everything is broken.” Quoting from the same article:

Now, the UK parliament has published internal Facebook emails that expose the mindset inside the company. They reveal discussions among staff over whether to collect users’ phone call logs and SMS texts through its Android app. “This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.” (So said Product Manager Michael LeBeau in an email from 2015)

Even without Edward Snowden’s whistle-blowing exposé, we know that governments the world over are collecting our data, because the technological ability to do so is now available. We are approaching a period in our so-called civilised development where we all have an online life (if you’re reading this, you have one) and it can be accessed by governments and corporations alike. I’ve long known that anyone can learn everything they need to know about me from my computer, and increasingly they don’t even need the computer.

In one of my fictional stories, I created a dystopian world where everyone had a ‘chip’ that allowed all conversations to be recorded, so there was literally no privacy. We are fast approaching that scenario in some totalitarian societies. In Communist China under Mao, and in the Soviet Union under Stalin, people found the circle of people they could trust got smaller and smaller. Now, with AI capabilities and internet-wide databases, privacy is becoming illusory. With constant surveillance, all subversion can be tracked and subsequently prosecuted. Someone once said that only societies open to new ideas progress; if you live in a society where new ideas are censored, you get stagnation.

In my latest fiction I’ve created another autocratic world, where everyone is tracked because everywhere they go they interact with very realistic androids who act as servants, butlers and concierges but, in reality, keep track of what everyone’s doing. The only ‘futuristic’ aspects of this are the androids and the fact that I’ve set it on an alien world. (My worlds aren’t terraformed; people live in bubbles that create a human-friendly environment.)

After reading these very recent articles in New Scientist and TNY, I’ve concluded that our world is closer to the one I’ve created in my imagination than I thought.


Addendum 1: This is a podcast about so-called Surveillance Capitalism, from Mozilla. Obviously, I use Google and I’m also on Facebook, but I don’t use Twitter. Am I part of the problem or part of the solution? The truth is I don’t know. I try to make people think and share ideas. I have political leanings, obviously, but they’re transparent. Foremost, I believe that if you can’t put your name to something, you shouldn’t post it.