Paul P. Mealing

Check out my book, ELVENE. Available as an e-book and as a paperback (print on demand, POD). Also this promotional Q&A online.

Monday 18 May 2020

An android of the seminal android storyteller

I just read a very interesting true story about an android built in the early 2000s and modelled on the renowned sci-fi author, Philip K Dick, in both personality and physical appearance. It was displayed in public at a few prominent events where it interacted with the public in 2005, then was lost on a flight between Dallas and Las Vegas in 2006, and has never been seen since. The book is called Lost in Transit: The Strange Story of the Philip K Dick Android, by David F Duffy.

You have to read the back cover to know it’s non-fiction, published by Melbourne University Press in 2011, so, surprisingly, a local publication. I bought it from my local bookstore at a 30% discount as they were closing down for good. They had planned to close by Good Friday, but the COVID-19 pandemic forced them to shut a good 2 weeks earlier, and I acquired it at the 11th hour, looking for anything I might find interesting.

To quote the back cover:

David F Duffy was a postdoctoral fellow at the University of Memphis at the time the android was being developed... David completed a psychology degree with honours at the University of Newcastle [Australia] and a PhD in psychology at Macquarie University, before his fellowship at the University of Memphis, Tennessee. He returned to Australia in 2007 and lives in Canberra with his wife and son.

The book is written chronologically and is based on extensive interviews with the team of scientists involved, as well as Duffy’s own personal interaction with the android. He had an insider’s perspective as a cognitive psychologist who had access to members of the team while the project was active. Like everyone else involved, he is a bit of a sci-fi nerd, with a particular affinity for, and knowledge of, the works of Philip K Dick.

My specific interest is in the technical development of the android and how its creators attempted to simulate human intelligence. As a cognitive psychologist, with professionally respected access to the team, Duffy is well placed to provide some esoteric knowledge to an interested bystander like myself.

There were effectively 2 people responsible (or 2 team leaders), David Hanson and Andrew Olney, who were brought together by Professor Art Graesser, head of the Institute of Intelligent Systems, a research lab in the psychology building at the University of Memphis (hence the connection with the author). 

Hanson is actually an artist, and his specialty was building ‘heads’ with humanlike features and humanlike abilities to express facial emotions. His heads included mini-motors that pulled on a ‘skin’, which could mimic a range of facial movements, including talking.

Olney developed the ‘brains’ of the android, which actually resided on a laptop connected by wires to the back of the android’s head. Hanson’s objective was to make an android head that was so humanlike that people would interact with it on an emotional and intellectual level. For him, the goal was to achieve ‘empathy’. He had made at least 2 heads before the Philip K Dick project.

Even though the project got the ‘blessing’ of Dick’s daughters, Laura and Isa, and access to an enormous amount of material, including transcripts of extensive interviews, they had mixed feelings about the end result, and, tellingly, they were ‘relieved’ when the head disappeared. It suggests that it’s not the way they wanted him to be remembered.

In a chapter called Life Inside a Laptop, Duffy gives a potted history of AI, specifically in relation to the Turing test, which challenges someone to distinguish an AI from a human. He also explains the 3 levels of processing that were used to create the android’s ‘brain’. The first level was what Olney called ‘canned’ answers, which were pre-recorded answers to obvious questions and interactions, like ‘Hi’, ‘What’s your name?’, ‘What are you?’ and so on. Another level was ‘Latent Semantic Analysis’ (LSA), which was originally developed in a lab in Colorado, with close ties to Graesser’s lab in Memphis, and was the basis of Graesser’s pet project, ‘AutoTutor’, with Olney as its ‘chief programmer’. AutoTutor was an AI designed to answer technical questions as a ‘tutor’ for students in subjects like physics.
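
To make the ‘canned’ level concrete, here’s a toy sketch of how such a lookup might work. The answers here are my own invention, not taken from the book, and the real system would have been considerably more forgiving about phrasing:

```python
# A minimal sketch of a 'canned answer' layer: exact lookups for
# predictable prompts, with a fallback when nothing matches.
import re

CANNED = {
    "hi": "Hello. I'm a robotic portrait of Philip K Dick.",
    "what's your name?": "My name is Phil.",
    "what are you?": "I'm an android modelled on the writer Philip K Dick.",
}

def normalise(utterance: str) -> str:
    """Lower-case and collapse whitespace so near-identical
    phrasings hit the same canned entry."""
    return re.sub(r"\s+", " ", utterance.strip().lower())

def canned_answer(utterance: str):
    """Return a pre-recorded answer, or None to hand the input
    off to the next processing level."""
    return CANNED.get(normalise(utterance))
```

The point of the design is the `None` fallback: only when no canned entry fires does the system escalate to the more expensive LSA machinery.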

To create the Philip K Dick database, Olney downloaded all of Dick’s opus, plus a vast collection of transcribed interviews from later in his life. The author conjectures that ‘There is probably more dialogue in print of interviews with Philip K Dick than any other person, alive or dead.’

The third layer ‘broke the input (the interlocutor’s side of the dialogue) into sections and looked for fragments in the dialogue database that seemed relevant’ (to paraphrase Duffy). Duffy gives a cursory explanation of how LSA works – a mathematical matrix using vector algebra – that’s probably a little too esoteric for the content of this post.
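
To give a flavour of that third layer, here’s a toy sketch of the retrieval idea. Real LSA first compresses the term-document matrix with a singular value decomposition (the vector algebra Duffy alludes to), so that related words end up close together in the reduced space; this sketch skips that step and simply scores stored fragments against the input by cosine similarity of raw word counts. The fragments are invented for illustration:

```python
# A toy version of the retrieval idea: score every stored dialogue
# fragment against the input by cosine similarity of word counts.
# (Real LSA adds an SVD dimensionality reduction before comparing.)
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_fragment(query: str, fragments: list[str]) -> str:
    """Return the stored fragment most similar to the input."""
    q = vectorise(query)
    return max(fragments, key=lambda f: cosine(q, vectorise(f)))
```

One can see from this how the android could sound cogent without understanding anything: it only ever finds the nearest-sounding fragment, it never knows what the words mean.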

In practice, this search-and-synthesise approach could create a self-referencing loop, where the android would endlessly riff on a subject, going off on tangents, sounding cogent but never stopping. To overcome this, Olney developed a ‘kill switch’ that removed the ‘buffer’ he could see building up on his laptop. At one display at Comic-Con (July 2005), as part of the promotion for A Scanner Darkly (a rotoscope movie by Richard Linklater, starring Keanu Reeves), Hanson had to present the android without Olney, and he couldn’t get the kill switch to work, so Hanson stopped the audio with the mouth still working and asked for the next question. The android simply continued with its monolithic monologue, which had no relevance to any question at all. I think it was its last public appearance before it was lost. Dick’s daughters, Laura and Isa, were in the audience and they were not impressed.

It’s a very informative and insightful book, presented like a documentary without video, capturing a very quirky, unique and intellectually curious project. There is a lot of discussion about whether we can produce an AI that can truly mimic human intelligence. For me, the pertinent word in that phrase is ‘mimic’, because I believe that’s the best we can do, as opposed to having an AI that actually ‘thinks’ like a human. 

In many parts of the book, Duffy compares what Graesser’s team was trying to do with LSA to how we learn language as children, whereby we create a memory store of words, phrases and stock responses, based on our interaction with others and the world at large. It’s a personal prejudice of mine, but I think that words and phrases have a ‘meaning’ to us that an AI can never capture.

I’ve contended before that language for humans is like ‘software’ in that it is ‘downloaded’ from generation to generation. I believe that this is unique to the human species and it goes further than communication, which is its obvious genesis. It’s what we literally think in. The human brain can connect and manipulate concepts in all sorts of contexts that go far beyond the simple need to tell someone what we want them to do in a given situation, or to ask what they did with their time the day before or last year or whenever. We can relate concepts that have a spiritual connection or are mathematical or are stories. In other words, we can converse on topics that relate not just to physical objects, but are products of pure imagination.

Any android follows a set of algorithms designed to respond to human-generated dialogue, but, despite appearances, the android has no idea what it’s talking about. Some of the sample dialogue that Duffy presented in his book drifted into gibberish as far as I could tell, and that didn’t surprise me.

I’ve explored the idea of a very advanced AI in my own fiction, where ‘he’ became a prominent character in the narrative. But he (yes, I gave him a gender) was often restrained by rules. He can converse on virtually any topic because he has a Google-like database and he makes logical sense of someone’s vocalisations. If they are not logical, he’s quick to point it out. I play cognitive games with him and his main interlocutor because they have a symbiotic relationship. They spend so much time together that they develop a psychological interdependence that’s central to the narrative. It’s fiction, but even in my fiction I see a subtle difference: he thinks and talks so well, he almost passes for human, but he is a piece of software that can make logical deductions based on inputs and past experiences. Of course, we do that as well, and we do it so well it separates us from other species. But we also have empathy, not only with other humans, but other species. Even in my fiction, the AI doesn’t display empathy, though he’s been programmed to be ‘loyal’.

Duffy also talks about the ‘uncanny valley’, which I’ve discussed before. Apparently, Hanson believed it was a ‘myth’ and that there was no scientific data to support it. Duffy appears to agree. But according to a New Scientist article I read in Jan 2013 (by Joe Kloc, a New York correspondent), MRI studies tell another story. Neuroscientists believe the effect is real and is caused by a cognitive dissonance between 3 types of empathy: cognitive, motor and emotional. Apparently, it’s emotional empathy that breaks the spell of suspended disbelief.

Hanson claims that he never saw evidence of the ‘uncanny valley’ with any of his androids. On YouTube you can watch a celebrity android called Sophia, and I didn’t see any evidence of the phenomenon with her either. But I think the reason is that none of these androids appears human enough to evoke the response. The uncanny valley is a sense of unease and disassociation we would feel because it’s unnatural; similar to seeing a ghost – a human in all respects except actually being flesh and blood. 

I expect that, as androids like the Philip K Dick simulation and Sophia become more commonplace, the sense of ‘unnaturalness’ will dissipate – a natural consequence of habituation. Androids in movies don’t have this effect, but then a story is a medium of suspended disbelief already.

Sunday 10 May 2020

Logic, analysis and creativity

I’ve talked before about the apparent divide between arts and humanities, and science and technology. Someone once called me a polymath, but I don’t think I’m expert enough in any field to qualify. However, I will admit that, for most of my life, I’ve had a foot in both camps, to use a well-worn metaphor. At the risk of being self-indulgent, I’m going to discuss this dichotomy in reference to my own experiences.

I’ve worked in the engineering/construction industry most of my adult life, yet I have no technical expertise there either. Mostly, I worked as a planning and cost control engineer, which is a niche activity that I found I was good at. It also meant I got to work with accountants and lawyers as well as engineers of all disciplines, along with architects. 

The reason I bring this up is because planning is all about logic – in fact, that’s really all it is. At its most basic, it’s a series of steps, some of which are sequential and some in parallel. I started doing this before computers did a lot of the work for you. But even with computers, you have to provide the logic; so if you can’t do that, you can’t do professional planning. I make that distinction because it was literally my profession.
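
To illustrate what I mean by planning being all about logic, here’s a toy sketch of the kind of reasoning a schedule embodies: each task has a duration and a list of predecessors, and the earliest finish of any task falls out of the dependency logic alone. The task names and durations are made up for illustration; real planning software does the same thing on thousands of activities:

```python
# A minimal sketch of schedule logic: the earliest finish of a task
# is its duration added to the latest finish among its predecessors.
# Assumes the dependencies form no cycles (as any real plan must).

def earliest_finish(tasks: dict[str, tuple[int, list[str]]]) -> dict[str, int]:
    """tasks maps name -> (duration, [predecessor names]).
    Returns the earliest finish time of every task."""
    finish: dict[str, int] = {}

    def resolve(name: str) -> int:
        if name not in finish:
            duration, preds = tasks[name]
            finish[name] = duration + max((resolve(p) for p in preds), default=0)
        return finish[name]

    for name in tasks:
        resolve(name)
    return finish

plan = {
    "design":     (10, []),
    "procure":    (15, ["design"]),
    "earthworks": (5,  ["design"]),   # runs in parallel with procurement
    "construct":  (20, ["procure", "earthworks"]),
}
# The longest dependency chain drives the finish date:
# design (10) -> procure (15) -> construct (20) = 45.
```

The sequential steps form a chain, the parallel ones branch and re-join, and the longest chain through the network is what a planner calls the critical path.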

In my leisure time, I write stories and that also requires a certain amount of planning, and I’ve found there are similarities, especially when you have multiple plot lines that interweave throughout the story. For me, plotting is the hardest part of storytelling; it’s a sequential process of solving puzzles. And science is also about solving puzzles, all of which are beyond my abilities, yet I love to try and understand them, especially the ones that defy our intuitive sense of logic. But science is on a different level to both my professional activities and my storytelling. I dabble at the fringes, taking ideas from people much cleverer than me and creating a philosophical pastiche.

Someone on Quora (a student) commented once that studying physics exercised his analytical skills, which he then adapted to other areas of his life. It occurred to me that I have an analytical mind and that is why I took an interest in physics rather than the other way round. Certainly, my work required an analytical approach and I believe I also take an analytical approach to philosophy. In fact, I’ve argued previously that analysis is what separates philosophy from dogma. Anyway, I don’t think it’s unusual for us, as individuals, to take a skill set from one activity and apply it to another apparently unrelated one.

I wrote a post once about the 3 essential writing skills: character development, evoking emotion and creating narrative tension. The key to all of these is character and, if one were to distil out the most essential skill of all, it would be to write believable dialogue, as if it were spontaneous, meaning unpremeditated, yet not boring or irrelevant to the story. I’m not at all sure it can be taught. Someone (Don Burrows) once said that jazz can’t be taught, because it’s improvisation by its very nature, and I’d argue the same applies to writing dialogue. I’ve always felt that writing fiction has more in common with musical composition than with writing non-fiction. In both cases they can come unbidden into one’s mind, sometimes when one is asleep, and they’re both essentially emotive mediums. 

But science too has its moments of creativity, indeed sheer genius; being a combination of sometimes arduous analysis and inspired intuition.

Wednesday 8 April 2020

Secret heroes

A writer can get attached to characters, and it tends to sneak up on you (speaking for myself); they are not necessarily the characters you expect to affect you.

All writers who get past the ego phase will tell you the characters feel like they exist separately from them. By the ego phase, I mean you’ve learned how to keep yourself out of the story, though you may suffer lapses – the best fiction is definitely not about you.

People will tell you that writers draw on their own experience for characters and events, or otherwise base characters on people they know. I expect some writers might do that, and I’ve even seen advice, if writing a screenplay, to imagine an actor you’ve seen playing the role. If I find myself doing that, then I know I’ve lost the plot, literally rather than figuratively. 

I borrow names from people I’ve known, but the characters don’t resemble them at all, except in ethnicity. For example, if I have an Indian character, I will use the Indian name of someone I knew. A name is not unique – most of us know more than one John, for example – and its bearers may have nothing in common.

I worked with someone once, who had a very unusual name, Essayas Alfa, and I used both his names in the same story. Neither character was anything like the guy I knew, except the character called Essayas was African and so was my co-worker, but one was a sociopath and the other was a really nice bloke. A lot of names I make up, including all the Kiri names, and even Elvene. I was surprised to learn it was a real name; at least, I got the gender right.

The first female character I ever created, when I was learning my craft, was based on someone I knew, though they had little in common, except their age. It was like I was using her as an actor for the role. I’ve never done that since. A lot of my main characters are female, which is a bit of a gamble, I admit. Creating Elvene was liberating and I’ve never looked back.

If your dreams are occupied by strangers, then characters in fiction are no different. It’s hard to explain if you haven’t experienced it. So how can you get attached to a character who is a figment of your mind? Well, not necessarily in the way you think – it’s not an infatuation. I can’t imagine falling in love with a character I created, though I can imagine falling in love with an actor playing that character, because she’s no longer mine (assuming the character is female).

And I’ve got attached to male characters as well. These are the characters who have surprised me. They’ve risen above themselves, achieved something that I never expected them to. They weren’t meant to be the hero of the story, yet they excel themselves, often by making a sacrifice. They go outside their comfort zone, as we like to say, and become courageous, not by overcoming an adversary but by overcoming a fear. And then I feel like I owe them, as illogical as that sounds, because of what I put them through. They are the secret heroes of my stories. 

Sunday 9 February 2020

The confessions of a self-styled traveller in the world of ideas

Every now and then, on very rare occasions, you have a memory or a feeling from so long ago that it feels almost foreign, like it was experienced by someone else. And possibly it was, as I’m no longer the same person, either physically or in personality.

This particular memory was from when I was a teenager and aflame with idealism. It came to me just today, while I was walking alongside a creek bed, so I’m not sure I can get it back now. It was when I believed I could pursue a career in science and, in particular, physics. It was completely at odds with every other aspect of my life. At that time, I had very poor social skills and zero self-esteem. Looking back, it seems arrogant, but when you’re young you’re entitled to dream beyond your horizons, otherwise you don’t try.

This blog effectively demonstrates both the extent of my knowledge and the limits of my knowledge, in the half century since. I’ve been most fortunate to work with some very clever people. In fact, I’ve spent my whole working life with people cleverer than me, so I have no delusions.

I consider myself lucky to have lived a mediocre life. What do I mean by mediocre? Well, I’ve never been homeless, and I’ve never gone hungry and I’ve never been unable to pay my bills. I’m not one to take all that for granted; I think there is a good deal of luck involved in avoiding all of those pitfalls. Likewise, I believe I’m lucky not to be famous; I wouldn’t want my life under a microscope, whereby the smallest infraction of society’s rules could have me blamed and shamed on the world stage.

I’ve said previously that the people we admire most are those who seem to be able to live without a facade. I’m not one of those. My facade is that I’m clever: ever since my early childhood, I liked to spruik my knowledge in an effort to impress people, especially adults, and largely succeeded. I haven’t stopped, and this blog is arguably an extension of that impetus. But I will admit to a curiosity which was manifest from a very young age (pre high school), and that’s what keeps me engaged in the world of ideas. The internet has been most efficacious in this endeavour, though I’m also an avid reader of books and magazines, in the sciences, in particular.

But I also have a secret life in the world of fiction. And fiction is the best place to have a secret life. ELVENE is no secret, but it was written almost 2 decades ago. It was unusual in that it was ‘popular’. By popular, I don’t mean it was read by a multitude (it unequivocally wasn’t), but it was universally liked, like a ‘popular’ song. It had a dichotomous world: indigenous and futuristic. This was years before James Cameron’s Avatar, and a completely different storyline. I received accolades like, ‘I enjoyed every page’ and ‘I didn’t want it to end’ and ‘it practically played out like a movie in my head’.

ELVENE was an aberration – a one-off – but I don’t mind, seriously. My fiction has become increasingly dystopian. The advantage of sci-fi (I call mine, science-fantasy) is that you can create what-if worlds. In fact, an Australian literary scholar, Peter Nicholls, created The Encyclopedia of Science Fiction, and a TV doco was made of him called The What If Man.

Anyway, you can imagine isolated worlds, which evolve their own culture and government, not unlike what our world was like before sea and air travel compressed it. So one can imagine something akin to frontier territories where democracy is replaced by autocracy that can be benevolent or oppressive or something in between. So I have an autocracy where the dictator limits travel both on and off his world, where clones are exploited as sex workers, and where the people who live there have become accustomed to this culture. In other words, it’s not that different to cultures in our past (and some might say, present). The dictator is less Adolf Hitler and more Donald Trump, though that wasn’t deliberate. Like all my characters, he takes on a life of his own and evolves in ways I don’t always anticipate. He’s not evil per se, but he knows how to manipulate people and he demands absolute loyalty, which is yet to be tested.

The thing is that you go where the story and the characters take you, and sometimes they take you into dark territory. But in the dark you look for light. “There’s a crack in everything; that’s how the light gets in” (Leonard Cohen). I confess I like moral dilemmas, and I feel I’ve created a cognitive dissonance not only for one of my characters but, possibly, for myself as a writer. (Graham Greene was the master of the moral dilemma, but he’s in another class.)

Last year I saw a play put on by my good friend, Elizabeth Bradley, The Woman in the Window, for Canberra REP. It includes a dystopian future that features sex workers as an integral part of the society. It was a surprise to see someone else addressing a similar scenario. The writer was a Kiwi, Alma De Groen, who juxtaposed history (the dissident poet Anna Akhmatova in Stalin’s Russia) with a dystopian future Australia.

I take a risk by having female protagonists prominent in all my fiction. It’s a risk because there is a lot of controversy about so-called ‘cultural appropriation’. I increase that risk by portraying relationships from my female protagonists’ perspectives. However, there is always a sense that they all exist independently of me, which you can only appreciate if you willingly enter a secret world of fiction.

Friday 27 September 2019

Is the Universe conscious?

This is another question on Quora, and whilst it may seem trivial, even silly, I give it a serious answer.

Because it’s something we take for granted, literally every day of our lives, I find that many discussions on consciousness tend to gloss over its preternatural, epiphenomenal qualities (for want of a better description) and are often seemingly dismissive of its very existence. So let me be blunt: without consciousness, there is no reality. For you. At all.

My views are not orthodox, even heretical, but they are consistent with what I know and with the rest of my philosophy. The question has religious overtones, but I avoid all theological references.

This is the original question:

Is the universe all knowing/conscious?

And this is my answer:

I doubt it very much. If you read books about cosmology (The Book of Universes by John D Barrow, for example) you’ll appreciate how late consciousness arrived in the Universe. According to current estimates, it’s the last 520 million years of 13.8 billion, which is less than 4% of its age.

And as Barrow explains, the Universe needs to be of the mind-boggling scale we observe to allow enough time for complex life (like us) to evolve.

Consciousness is still a mystery, despite advances made in neuroscience. In the latest issue of New Scientist (21 Sep 2019) it’s the cover story: The True Nature of Consciousness, with the attached promise: We’re Finally Cracking the Greatest Mystery of You. But when you read the article, the author (neuroscientist Michael Graziano) seems to put faith in advances in AI achieving consciousness. It’s not the first time I’ve come across this optimism, yet I think it’s misguided: I don’t believe AI will ever become conscious, because that belief isn’t supported by the evidence.

All the examples of consciousness that we know about are dependent on life. In other words, life evolved before consciousness did. With AI, people seem to think that the reverse will happen: a machine intelligence will become conscious and therefore it will be alive. It contradicts everything we have observed to date.

It’s based on the assumption that when a machine achieves a certain level of intelligence, it will automatically become conscious. Yet many animals of so-called lower intelligence (compared to humans) have consciousness and they don’t become more conscious if they become more intelligent. Computers can already beat humans at complex games and they improve all the time, but not one of them exhibits consciousness.

This is slightly off-topic but relevant, because it demonstrates that consciousness is not dependent on simply acquiring more machine intelligence.

I contend that consciousness is different from every other phenomenon we know about, because it has a unique relationship with time. Erwin Schrödinger, in his book What is Life?, made the observation that consciousness exists in a constant present. In other words, for a conscious observer, time is always ‘now’.

What’s more, I argue that it’s the only phenomenon that does – everything else we observe becomes the past as soon as it happens; just take a photo to demonstrate.

This means that, without memory, you wouldn’t know you were conscious at all and there are situations where this has happened. People have been rendered unconscious, yet continue to behave as if they’re conscious, but later have no memory of it. I believe this is because their brain effectively stopped ‘recording’.

Consciousness occupies no space, even though it appears to be the consequence of material activity – specifically, the neurons in our brains. Because it appears to have a unique relationship with time and it can’t be directly measured, I’m not averse to the idea that it exists in another dimension. In mathematics, higher dimensions are not as aberrant as we perceive them to be, and I’ve read somewhere that neuron activity can be ‘modelled’ in a higher mathematical dimension. This idea is very speculative and, I concede, too fringe for most people.

As far as the Universe goes, I like to point out that reality (for us) requires both a physical world and consciousness - without consciousness there might as well be nothing. The Universe requires consciousness to be self-realised. This is a variant on the strong anthropic principle, originally expressed by Brandon Carter.

The weak anthropic principle says that only universes containing observers can be observed, which is a tautology. The strong anthropic principle effectively says that only universes that allow conscious observers to emerge can exist, which is my point about the Universe requiring consciousness to be self-realised. The Universe is not teleological (if you were to rerun the Universe, you’d get a different result), but it has the necessary mathematical parameters to allow sentient life to emerge, which makes it quasi-teleological.

In answer to your question, I don’t think the Universe has been conscious from its inception, but it has built into its long evolutionary development the inherent capacity to produce not only conscious observers, but observers who can grasp the means to comprehend its workings and its origins, through mathematics and science.

Saturday 5 January 2019

What makes humans unique

Now everyone pretty well agrees that there is not one single thing that makes humans unique in the animal kingdom, but most people would agree that our cognitive abilities leave the most intelligent and social of species in our wake. I say ‘most’ because there are some, possibly many, who argue that humans are not as special as we like to think and there is really nothing we can do that other species can’t do. They would point out that other species, if not all advanced species, have language; that many produce art to attract a mate; that some build structures (like ants and beavers); and that some even use tools (like apes and crows).

However, I find it hard to imagine that other species can think and conceptualise in a language the way we do, or even communicate complex thoughts and intentions using oral utterances alone. To give other examples, I know of no other species that tells stories, keeps track of days by inventing a calendar based on heavenly constellations (like the Mayans) or even thinks about thinking. And as far as I know, we are the only species that literally invents a complex language which we teach our children (it’s not inherited) so that we can extend memories across generations. Even cultures without written scripts can do this using songs and dances and art. As someone said (John Hands in Cosmosapiens), we are the only species ‘who know that we know’. Or, as I said above, we are the only species that ‘thinks about thinking’.

Someone once pointed out to me that the only thing that separates us from all other species is the accumulation of knowledge, resulting in what we call civilization. He contended that over hundreds, even thousands of years, this had resulted in a huge gap between us and every other sentient creature on the planet. I pointed out to him that this only happened because we had invented the written word, based on languages, which allowed us to transfer memories across generations. Other species can teach their young certain skills that may not be genetically inherited, but none can accumulate knowledge over hundreds of generations like we can. His very point demonstrated the difference he was trying to deny.

In a not-so-recent post, I delineated my philosophical ruminations into 23 succinct paragraphs, covering everything from science and mathematics to language, morality and religion.  My 16th point said:

Humans have the unique ability to nest concepts within concepts ad infinitum, which mirror the physical world.

In another post from 2012, in answer to a Question of the Month in Philosophy Now (How does language work?), I made the same point. (This is the only submission to Philosophy Now, out of 8 thus far, that didn’t get published.)

I attributed the above ‘philosophical point’ to Douglas Hofstadter, because he says something similar in his Pulitzer Prize-winning book, Gödel, Escher, Bach, but in reality I had reached this conclusion before reading it.

It’s my contention that it is this ability that separates us from other species and that has allowed all the intellectual endeavours we associate with humanity, including stories, music, art, architecture, mathematics, science and engineering.

I will illustrate with an example that we are all familiar with, yet many of us struggle to pursue at an advanced level. I’m talking about mathematics, and I choose it because I believe it also explains why many of us fail to achieve the degree of proficiency we might prefer.

With mathematics we learn modules which we then use as a subroutine in a larger calculation. To give a very esoteric example, Einstein’s general theory of relativity requires at least 4 modules: calculus, vectors, matrices and the Lorentz transformation. These all combine in a metric tensor that becomes the basis of his field equations. The thing is, if you don’t know how to deal with any one of these, you obviously can’t derive his field equations. But the point is that the human brain can turn all these ‘modules’ into black boxes and then the black boxes can be manipulated at another level.
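
To push the analogy a little further, here’s a toy sketch using special relativity (much simpler than the general theory): the Lorentz factor is worked out once, as its own ‘module’, and then treated as a black box inside other calculations, which never need to look inside it. The example is mine, not Duffy’s:

```python
# Modules as black boxes: lorentz_factor is derived once, then
# reused by higher-level calculations without being re-derived.
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v: float) -> float:
    """One 'module': gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def time_dilation(proper_time: float, v: float) -> float:
    """A higher-level calculation treating lorentz_factor as a black box."""
    return proper_time * lorentz_factor(v)

def length_contraction(proper_length: float, v: float) -> float:
    """Another calculation reusing the same module."""
    return proper_length / lorentz_factor(v)
```

At 60% of the speed of light the factor is exactly 1.25, and both higher-level functions simply plug that in – which is precisely what the brain does when it manipulates a learned module without unpacking it.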

It’s not hard to see that we do this with everything, including writing an essay like I’m doing now. I raise a number of ideas and then try to combine them into a coherent thesis. The ‘atoms’ are individual words but no one tries to comprehend it at that level. Instead they think in terms of the ideas that I’ve expressed in words.

We do the same with a story, which becomes like a surrogate life for the time that we are under its spell. I’ve pointed out in other posts that we only learn something new when we integrate it into what we already know. And, with a story, we are continually integrating new information into existing information. Without this unique cognitive skill, stories wouldn’t work.

But more relevant to the current topic, the medium for a story is not words but the reader’s imagination. In a movie, we short-circuit the process, which is why they are so popular.

Because a story works at the level of imagination, it’s like a dream in that it evokes images and emotions that can feel real. One could imagine that a dog or a cat could experience emotions if we gave them a virtual reality experience, but a human story has the same level of complexity that we find in everyday life and which we express in a language. The simple fact that we can use language alone to conjure up a world with characters, along with a plot that can be followed, gives some indication of how powerful language is for the human species.

In a post I wrote on storytelling back in 2012, I referenced a book by Kiwi academic Brian Boyd, who points out that pretend play, which we all do as children (though I suspect it’s now more likely done using a videogame console), gives us cognitive skills and is the precursor to both telling and experiencing stories. The success of streaming services indicates how stories are an essential part of the human experience.

While it’s self-evident that mathematics and storytelling are two human endeavours that no other species can manage (even at a rudimentary level), it’s hard to see how they are related.

People who are involved in computer programming, or writing code, are aware of the value, even necessity, of subroutines. Our own brain does this when we learn to do something without having to think about it, like walking. But we can do the same thing with more complex tasks, like driving a car or playing a musical instrument. The key point here is that these are all ‘motor tasks’, and we call the result ‘muscle memory’, as distinct from cognitive tasks. However, I expect it relates to cognitive tasks as well. For example, every time you say something, it’s as if the sentence has been pre-formed in your brain. We use particular phrases all the time, which are analogous to ‘subroutines.’

I should point out that this doesn’t mean that computers ‘think’, which is a whole other topic. I’m just relating how the brain delegates tasks so it can ‘think’ about more important things. If we had to concentrate every time we took a step, we would lose the train of thought of whatever it was we were engaged in at the time; a conversation being the most obvious example.

The mathematics example I gave is not dissimilar to the idea of a ‘subroutine’. In fact, one can employ mathematical ‘modules’ into software, so it’s more than an analogy. So with mathematics we’ve effectively achieved cognitively what the brain achieves with motor skills at the subconscious level. And look where it has got us: Einstein’s general theory of relativity, which is the basis of all current theories of the Universe.
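To make the ‘modules as black boxes’ point concrete, here’s a toy sketch in Python. The function names and the Lorentz-factor example are purely my own illustration; the point is only that each function can be used at a higher level without re-deriving what’s inside it.

```python
import math

# Each function is a self-contained 'module' -- a black box that can be
# used at a higher level without knowing its internals.
def dot(u, v):
    """Vector module: inner product of two vectors."""
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    """Built on top of dot(): a black box within a black box."""
    return math.sqrt(dot(v, v))

def lorentz_factor(velocity, c=1.0):
    """Special-relativity module, built on norm(): gamma = 1/sqrt(1 - v^2/c^2)."""
    beta = norm(velocity) / c
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# The top-level calculation manipulates the black boxes, not their internals.
print(round(lorentz_factor([0.6, 0.0, 0.0]), 4))   # 1.25 for v = 0.6c
```

The nesting here (dot inside norm inside lorentz_factor) is exactly the ‘module within a module’ structure described above.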

We can also think of a story in terms of modules. They are the individual scenes, which join together to form episodes, which in turn combine to create an overarching narrative that we can follow even when it’s interrupted.

What mathematics and storytelling have in common is that they are both examples where the whole appears to be greater than the sum of its parts. Yet we know that in both cases, the whole is made up of the parts, because we ‘process’ the parts to get the whole. My point is that only humans are capable of this.

In both cases, we mentally build a structure that seems to have no limits. The same cognitive skill that allows us to follow a story in serial form also allows us to develop scientific theories. The brain breaks things down into components and then joins them back together to form a complex cognitive structure. Of course, we do this with physical objects as well, like when we manufacture a car or construct a building, or even a spacecraft. It’s called engineering.

Saturday 22 December 2018

When real life overtakes fiction

I occasionally write science fiction; a genre I chose out of fundamental laziness. I knew I could write in that medium without having to do any research to speak of. I liked the idea of creating the entire edifice - world, story and characters - from my imagination with no constraints except the bounds of logic.

There are many subgenres of sci-fi: extraterrestrial exploration, alien encounters, time travel, robots & cyborgs, inter-galactic warfare, genetically engineered life-forms; but most SF stories, including mine, are a combination of some of these. Most sci-fi can be divided into 2 broad categories – space opera and speculative fiction, sometimes called hardcore SF. Space operas, exemplified by the Star Wars franchise, Star Trek and Dr Who, generally take more liberties with the science part of science fiction.

I would call my own fictional adventures science-fantasy, in the mould of Frank Herbert’s Dune series or Ursula K Le Guin’s fiction; though it has to be said, I don’t compete with them on any level.

I make no attempt to predict the future, even though the medium seems to demand it. Science fiction is a landscape that I use to explore ideas in the guise of a character-oriented story. I discovered, truly by accident, that I write stories about relationships. Not just relationships between lovers, but between mother and daughter, daughter and father(s), protagonist and nemesis, protagonist and machine.

One of the problems with writing science fiction is that the technology available today seems to overtake what one imagines. In my fiction no one uses a mobile phone. I can see a future where people can just talk to someone in the ether, because they can connect in their home or in their car, without a device per se. People can connect via a holographic form of Skype, which means they can have a meeting with someone in another location. We are already doing this, of course, and variations on this theme have been used in Star Wars and other space operas. But most of the interactions I describe are very old fashioned face-to-face, because that's still the best way to tell a story.

If you watch (or read) crime fiction you’ll generally find it’s very suspenseful with violence not too far away. But if you analyze it, you’ll find it’s a long series of conversations, with occasional action and most of the violence occurring off-screen (or off-the-page). In other words, it’s more about personal interactions than you realise, and that’s what generally attracts you, probably without you even knowing it.

This is a longwinded introduction to explain why I am really no better qualified to predict future societies than anyone else. I subscribe to New Scientist and The New Yorker, both of which give insights into the future by examining the present. In particular, I recently read an article in The New Yorker (Dec. 17, 2018) by David Owen about facial recognition, called Here’s Looking At You; the technology is already being used by police forces in America to target arrests without any transparency. Mozilla (in a podcast last year) described how a man had been misidentified twice, was arrested and subsequently lost his job and his career. I also read in last week’s New Scientist (15 Dec. 2018) how databases are being developed to record everything about a person, even what TV shows they watch and their internet use. It’s well known that in China there is a credit-point system that determines what buildings you can access and what jobs you can apply for. China has the most surveillance cameras of anywhere in the world, and they intend to combine them with the latest facial recognition software.

Yuval Harari, in Homo Deus, talks about how algorithms are going to take over our lives, but I think he missed the mark. We are slowly becoming more Orwellian, with social media already determining election results. In the same issue of New Scientist, journalist Chelsea Whyte asks: Is it time to unfriend the social network? with specific reference to Facebook’s recently exposed track record. According to her: “Facebook’s motto was once ‘move fast and break things.’ Now everything is broken.” Quoting from the same article:

Now, the UK parliament has published internal Facebook emails that expose the mindset inside the company. They reveal discussions among staff over whether to collect users’ phone call logs and SMS texts through its Android app. “This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.” (So said Product Manager Michael LeBeau in an email from 2015)

Even without Edward Snowden’s whistle-blowing exposé, we know that governments the world over are collecting our data, because the technological ability to do so is now available. We are approaching a period in our so-called civilised development where we all have an on-line life (if you are reading this) and it can be accessed by governments and corporations alike. I’ve long known that anyone can learn everything they need to know about me from my computer, and increasingly they don’t even need the computer.

In one of my fictional stories, I created a dystopian world where everyone had a ‘chip’ that allowed all conversations to be recorded so there was literally no privacy. We are fast approaching that scenario in some totalitarian societies. In Communist China under Mao, and Communist Soviet Union under Stalin, people found the circle of people they could trust got smaller and smaller. Now with AI capabilities and internet-wide databases, privacy is becoming illusory. With constant surveillance, all subversion can be tracked and subsequently prosecuted. Someone once said that only societies that are open to new ideas progress. If you live in a society where new ideas are censored then you will get stagnation.

In my latest fiction I’ve created another autocratic world, where everyone is tracked because everywhere they go they interact with very realistic androids, who act as servants, butlers and concierges but, in reality, keep track of what everyone’s doing. The only ‘futuristic’ aspects of this are the androids and the fact that I’ve set it on some alien world. (My worlds aren’t terra-formed; people live in bubbles that create a human-friendly environment.)

After reading these very recent articles in New Scientist and TNY, I’ve concluded that our world is closer to the one I’ve created in my imagination than I thought.


Addendum 1: This is a podcast about so-called Surveillance Capitalism, from Mozilla. Obviously, I use Google and I'm also on FaceBook, but I don't use Twitter. Am I part of the problem or part of the solution? The truth is I don't know. I try to make people think and share ideas. I have political leanings, obviously, but they're transparent. Foremost, I believe, that if you can't put your name to something you shouldn't post it.

Friday 17 August 2018

Aretha Franklin - undisputed Queen of Soul

25 March 1942 (Memphis, Tennessee) to 16 August 2018 (Detroit, Michigan)

Quote: Being a singer is a natural gift. It means I'm using to the highest degree possible the gift that God gave me to use. I'm happy with that.


I can't watch this video without tears. She lived through the civil rights years, won 18 Grammy Awards and was the first woman to be inducted into The Rock and Roll Hall of Fame. A gospel singer by origin, she was one of the Truly Greats.

There have been many tributes but Barack Obama probably sums it up best:

Aretha helped define the American experience. In her voice, we could feel our history, all of it and in every shade—our power and our pain, our darkness and our light, our quest for redemption and our hard-won respect. May the Queen of Soul rest in eternal peace. 

Sunday 17 June 2018

In defence of (Australia’s) ABC

I have just finished reading a book by the IPA titled Against Public Broadcasting; Why We Should Privatise the ABC and How to Do It (authors: Chris Berg and Sinclair Davidson).  The IPA is the Institute of Public Affairs, and according to Wikipedia ‘…is a conservative public policy think tank based in Melbourne Australia. It advocates free market economic policies such as privatisation and deregulation of state-owned enterprises, trade liberalisation and deregulated workplaces, climate change scepticism, the abolition of the minimum wage, and the repeal of parts of the Racial Discrimination Act 1975.’ From that description alone, one can see that the ABC represents everything IPA opposes.

There has long been a fractured relationship between the ABC and consecutive Liberal governments, but the situation has deteriorated recently, and I see it as a symptom of the polarisation of politics occurring everywhere in the Western world.

It should be obvious where I stand on this issue, so I am biased in the same way that the IPA is biased, though we are on opposing sides. A friend of mine, and work colleague for nearly 3 decades, recently told me that I had lost ‘objectivity’ on political issues; he was specifically referring to my stance on the ABC and the offshore detention of refugees. I told him that ‘mathematics is the only intellectual endeavour that is truly objective’. Philosophy is not objective; ever since Socrates, it’s been all about argument, and argument axiomatically assumes alternative points of view.

The IPA’s book is generally well argued in as much as they provide economic rationales and counter arguments to the most common reasons given for keeping the ABC. I won’t go into these in depth, as I don’t have time; instead I will focus more on the ideological divide that I believe has led to this becoming a political issue.

One of the authors’ themes is that the ABC is anachronistic, which implies it no longer serves the purpose for which it was originally intended. But they go further in that they effectively argue that the policy of having a public broadcaster in the English BBC mould was flawed from the beginning and that Australia should have adopted the American model of a free market, so no public broadcaster in the first place.

The authors refer to a survey done by the Dix inquiry in 1981 as ‘the most comprehensive investigation into the ABC in the public broadcaster’s history.’ Dix gave emphasis to a number of population surveys, but tellingly the authors say that ‘audience surveys are a thin foundation on which to mount an argument for public broadcasting.’ I’m not sure they’d mount that argument if the audience survey had come out negative.

The fact is that throughout the entire history of Australian broadcasting, we have adopted a combined A and B (public and commercial) approach that seems to be complementary rather than conflicting. The IPA would argue that this view is erroneous. Given Australia’s small population base over an enormous territory (Australia is roughly the area of the US without Alaska, but with 60% of California’s population), this mix has proven effective. Arguably, with the Internet and on-line entertainment services, the world has changed. The point is that the ABC has adapted, but commercial entities claim that the ABC has an unfair advantage because it’s government subsidised. Even if that’s the case, the ABC provides quality services that the other networks don’t (elaborated on below).

According to figures provided in their book, the ABC captures between 19% and 25% market share for both television and radio (the 19% is for prime time viewing, but they get 24-28% overall). Given that there are 3 other TV networks plus subscription services like Netflix and Stan, this seems a reasonable share. It would have been interesting to see what market share the other networks capture, for a valid comparison. But if one of the commercial networks dominates with 30% or more, then the other 2 networks would have less share than the ABC. Despite this, the authors claim, in other parts of the book, that the ABC has a ‘fraction’ of the market. Well, ¼ is a sizeable fraction, all things considered.

One of the points the authors make is that there seem to be conflicting objectives, in practice as well as in theory. Basically, it is argued that the ABC should provide media services that are not provided by the private sector, yet it competes with the private sector in areas like news, current affairs, drama and children’s educational programmes. Many people who oppose the ABC (not just the IPA) argue that the ABC should not compete with commercial entities. But they have the same market from which to draw consumers, so how can they not? Many of these same people will tell you that they never watch the ABC, which effectively negates their argument.

But the argument disguises an unexpressed desire for the ABC to become irrelevant by choice of content. What they are saying, in effect, is that the ABC should only produce programmes that nobody wants to watch or listen to. The implication is that they don’t mind if the ABC exists, as long as it doesn’t compete with the other networks; in other words, as long as it just produces crap.

In some respects they don’t compete with other networks, because, as the authors say themselves, they produce ‘quality’ programmes. In fact, the authors, in an extraordinary piece of legerdemain say: “If private media outlets are producing only ‘commercial trash’, then that could very well be because the ABC has cornered the market for quality.” I thought this argument so hilarious, I call it an ‘own goal’.

It’s such a specious argument. The authors strongly believe that the market sorts everything out, so it’s just a matter of supply and demand. But here’s the thing: if the commercial networks don’t produce ‘quality’ programmes (to use the authors’ own nomenclature) when they have competition from the ABC, why would they bother when the ABC no longer exists?

For the authors this was a throwaway comment, but for me, it’s the raison d’etre of the ABC. The reason I oppose the abandonment of the ABC is because I don’t want mediocrity to rule unopposed.

Paul Barry, who has worked in both public and commercial television, recalls an occasion when he was covering a Federal election for a commercial network. He wanted to produce some analytical data, and his producer quickly squashed it, saying ‘no one wants to watch that crap’ or words to that effect. Barry said he quickly realised that he was dealing with a completely different audience. Many people call this elitist, and I agree. But if elitist means I want intellectual content in my viewing, then I plead guilty.

If the ABC was an abject failure, if it didn’t have reasonable market share, if it wasn’t held in such high regard by the general public (including people who don’t use it, as pointed out by the authors themselves), there would appear to be no need to write a book proposing a rationale and finely tuned argument for its planned obsolescence. In other words, the book has been written principally to explain why a successful enterprise should be abandoned or changed at its roots. Since the ABC has been a continuing, evolving success story in the face of technological changes to media distribution, the argument to radically alter its operations, even abandon it, appears specious and suggests ulterior motives.

In fact, I would argue that it’s only because the ABC is so successful that its most virulent critics want to dismantle it and erase it from the Australian collective consciousness. For some people, including the IPA (I suspect), the reasons are ideological. They simply don’t want such a successful media enterprise that doesn’t follow their particular political and ideological goals to have the coverage and popularity that the ABC benefits from.

This brings us to the core of the issue: the ABC’s perceived political bias. Unlike most supporters of the ABC, I think this bias is real, but the authors themselves make the point that the ABC should not be privatised for ‘retribution’. The authors give specific examples where they believe political bias has been demonstrated but it’s hard to argue that it’s endemic. The ABC goes to lengths that most other services don’t, to acquire an opposing point of view. To give a contemporary example: 4 Corners, which is a leading investigative journalism programme, is currently running a 3 part series on Donald Trump and his Russian connections. The journalist, Sarah Ferguson, has lengthy interviews with the people under scrutiny (who have been implicated) and effectively gives them the right of reply to accusations made against them by media and those critical of their conduct. She seeks the counsel of experts, not all of whom agree, and lets the viewer make their own judgements. It’s a very professional dissection (by an outsider) of a major political controversy.

So-called political bias is subjective, completely dependent on the bias of the person making the judgment. I’m an exception in that I share the ABC’s bias yet acknowledge it. Most people who share the ABC’s bias (or any entity’s bias) will claim it’s not biased, but any entity (media or otherwise) with a different view to theirs will be biased according to them. From this perspective, I expect the IPA to consider the ABC biased, because they have a specific political agenda (as spelt out in the opening paragraph of this post) that is the opposite of the ABC’s own political inclinations. The authors acknowledge that intellectuals are statistically left leaning and journalists are predominantly intellectual.

To illustrate my point, the authors give 2 specific examples that they claim demonstrate the ABC’s lack of impartiality. One is that the ABC doesn’t give air time to climate change sceptics. But from my point of view, it’s an example of the ABC’s integrity in that they won’t give credibility to bogus science. In fact, they had a zealous climate change sceptic on a panel with Brian Cox, who annihilated him with facts from NASA. Not surprisingly, the sceptic argued the data was contaminated. Apparently, this embarrassment of a climate change denier on national television is an example of unacceptable political bias on the part of the ABC. The IPA, as mentioned earlier, is a peddler of climate change scepticism.

The other example mentioned by the authors is that the ABC doesn’t give enough support to the government’s policy of offshore detention. In fact, the ABC (and SBS) are the only mainstream media outlets in Australia that are openly critical of this policy, which is a platform of both major political parties, so political bias for one party over the other is not an issue in this case.

A few years ago, under Prime Minister Tony Abbott, laws were introduced to threaten health workers at Nauru and Manus Island (where asylum seekers are kept in detention) if they reported abuse. This was hypocritical in the extreme when health workers on mainland Australia are obliged by law to report suspected abuse. The ABC interviewed whistleblowers who risked jail, which the government of the day would have seen as a form of betrayal: giving a voice to people they wanted to silence.

As recently as last week there has been another death (of a 26 year old Iranian) but it never made it into any mainstream media report.

One only has to visit the web page of the publishers of the book, Connor Court Publishing, to see that they specialise in disseminating conservative political agendas like climate change scepticism and offshore detention.

To give a flavour of the IPA, there was recently a Royal Commission into the banking and finance sector which uncovered widespread corruption and rorting. On IPA’s website, I saw a comment about the hearings that compared them to a Soviet-style show trial. The ABC, it should be noted, reported the facts without emotive rhetoric but fielded comments from politicians on both sides of politics.

At the end of the book, the authors discuss how the ABC could be privatised. Basically, there are 2 alternatives: tender it to a private conglomerate (which could be overseas based) or put it to public shareholders, similar to what was done with Telstra (a telecommunications company). The IPA’s proposal is to make the employees the shareholders, so that they have full financial responsibility for their own performance. Their argument is that, because the ABC would be forced to appeal to a wider audience, it would have to change its political stripes. In other words, it would need to appeal to the populist movements that are gaining momentum in all Western democracies, though they don’t specifically say this. This seems like an exercise in cynicism, as I’m unaware of any large, complex media enterprise that is ‘owned’ by its employees. It seems to my inexpert eye like a recipe for failure, which I believe is their unstated objective.

Their best argument is that it costs roughly $1B a year that could be better spent elsewhere and that it’s an unnecessary burden on our national debt. This comes to 14c a day per capita, apparently. I see it as part of the government’s investment in arts and culture.

There is a larger context that the book glosses over, which is the role of media in keeping a democracy honest. The ABC is possibly unique in that it’s a taxpayer funded media that holds the government of the day to account (for both sides of politics). I think the authors’ arguments are ideologically motivated. In short, the book is a rational economic argument to undermine, if not destroy, an effective media enterprise that doesn’t reflect the IPA’s own political ambitions.

Thursday 10 May 2018

An explanation of my tattoos

I have 2 tattoos, one on each arm, which I admit represent the height of pretentiousness. On my left arm I have Euler’s famous identity, which links e, π, i, 0 and 1 in a very simple yet unexpected relationship: e^(iπ) + 1 = 0

This has no meaning in the physical world, even though we know it’s true (I’ve demonstrated this in another post). For a start, i is not really a number (even though it’s defined by i = √-1) because you can’t have an i number of things. I prefer to think of it as a dimension that’s perpendicular to all other dimensions, because that’s how it’s represented graphically. In fact, mathematically, it’s not Real by definition. You have Real numbers and imaginary numbers and they are described in complex algebra as z = a + ib, where z has a Real component and an imaginary component. Notice that they don’t get mixed up, yet they do in Euler’s identity. Euler’s identity is so weird, for want of a better word, that it has a special status. Richard Feynman called it 'the most remarkable formula in mathematics'.
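For what it’s worth, the identity can be checked numerically in a couple of lines of Python. (Floating point can’t represent π exactly, so the result is zero only to machine precision, not exactly zero.)

```python
import cmath
import math

# e^(i*pi) + 1 -- Euler's identity, evaluated in floating point
z = cmath.exp(1j * math.pi) + 1
print(abs(z))   # a residue on the order of 1e-16: zero to machine precision
```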

My point is that Euler’s identity only has meaning in an abstract realm or transcendental realm, which is apt, considering that π and e are called transcendental numbers, which means they can never be calculated in full. They can only exist in a transcendental realm – the Universe can’t contain them. Even God doesn’t know the last digit of π (or e, for that matter).

On my right arm I have Schrodinger’s equally famous equation, which I’ve also expounded upon in depth in another post. John Barrow called it 'the most important equation in mathematical physics': iħ(∂/∂t)Ψ = HΨ

This is a poor representation but it’s close enough for my purposes. The tattoo on my arm is a much better rendition. Notice that it also includes the number i, because complex algebra is essential to quantum mechanics and this is a seminal equation in QM. It is the complement or opposite of the equation on my left arm, in as much as it only has meaning in the physical world (the same as E = mc², for example). Outside the Universe it has no meaning at all; whereas Euler’s identity would still be true even if the Universe didn’t exist and there was no one to derive it. To quote John Barrow, quoting Dave Rusin:

Mathematics is the only part of science you could continue to do, if tomorrow the Universe ceased to exist.

Schrodinger derived his equation from a hunch; it’s not derived from anything we know (as Richard Feynman once pointed out). It describes the wave function of a particle that’s not yet 'observed', which makes it truly remarkable, and therefore it can only give us probabilities of finding it. Nevertheless, it’s been found to be very accurate in those probabilities. Schrodinger’s wave function is now incorporated into QED (quantum electrodynamics) which effectively describes everything we can see and touch and is arguably the most successful mathematical theory in physics, comparable only to Einstein’s general theory of relativity. In principle, you could have a Schrodinger equation for the entire universe, but you’d probably need a computer the size of the Universe to calculate it.
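As a minimal numerical sanity check (entirely my own illustration), here is the free-particle case in Python, in natural units (ħ = m = 1, and V = 0 so HΨ is just the kinetic term). A plane wave Ψ = e^(i(kx − ωt)) should satisfy the equation when ω = ħk²/2m, and finite-difference derivatives confirm it:

```python
import cmath

hbar, m, k = 1.0, 1.0, 2.0            # natural units (assumption for illustration)
omega = hbar * k**2 / (2 * m)         # free-particle dispersion relation

def psi(x, t):
    """Plane-wave solution e^(i(kx - wt))."""
    return cmath.exp(1j * (k * x - omega * t))

x, t, h = 0.3, 0.5, 1e-4
# central finite-difference derivatives
dpsi_dt   = (psi(x, t + h) - psi(x, t - h)) / (2 * h)
d2psi_dx2 = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h**2

lhs = 1j * hbar * dpsi_dt                  # ih(d/dt)Psi
rhs = -(hbar**2) / (2 * m) * d2psi_dx2     # H Psi for a free particle (V = 0)
print(abs(lhs - rhs) < 1e-5)               # True: the plane wave satisfies it
```

Note that both sides are complex numbers; the i in the equation is doing real work, which is the point made above about complex algebra being essential to QM.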

So on my left arm I have a mathematical connection to a transcendental (or Platonic) realm, and on my right arm I have a mathematical connection to the physical Universe.

But there is more, because Euler’s identity is the solution of an equation called Euler’s equation: e^(iθ) = cosθ + i sinθ; which becomes Euler’s identity when θ = π. The point is that this equation provides the key ingredient to Schrodinger’s wave function, ψ (psi, pronounced ‘sy’), so these equations are linked. The transcendental world is linked to the physical world, arguably without the need of human consciousness to make that link.
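The link is easy to verify numerically: Euler’s equation holds for any angle, and at θ = π the right-hand side collapses to −1, giving the identity. (The sample angles below are arbitrary, chosen just for illustration.)

```python
import cmath
import math

# Euler's equation e^(i*theta) = cos(theta) + i*sin(theta), checked at a few angles
for theta in (0.3, 1.0, math.pi):
    lhs = cmath.exp(1j * theta)
    rhs = math.cos(theta) + 1j * math.sin(theta)
    assert abs(lhs - rhs) < 1e-15

# At theta = pi: cos(pi) = -1 and sin(pi) ~ 0, hence e^(i*pi) + 1 = 0
print(cmath.exp(1j * math.pi).real)   # -1.0
```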


Footnote: A friend of mine wrote a poem about my tattoos.

Addendum: I came across this description by Clifford A Pickover in his opus, The Mαth βook:

Schrodinger's wave equation - which describes reality and events in terms of wave functions and probabilities - may be thought of as the evanescent substrate on which we all exist.

Sunday 8 April 2018

48hr Flash Fiction Challenge - 2018

 I entered this last year. It's actually called the Sci-Fi London Challenge, and the rules are pretty simple. They give you a title and a piece of dialogue plus an optional clue and you have to write a story in 2,000 words or less (I did it in 1,947). It opens 11am Sat and closes 1pm Mon (hence 48hr flash fiction). That's London time, so in reality it's from 8pm Sat to 10pm Mon Australian Eastern time, but it can easily be written in a day if you've got the bit between your teeth, otherwise you'll probably never do it. What I mean is either something comes to you or it doesn't, and if it doesn't then you're probably wasting your time.

Title: Where the grass still grows
Mandatory dialogue: Did you deliberately set out to make as much mess as possible?
Optional cue: New psychotropic drug creates telepathy/telekinesis

Getting the dialogue in was not a problem, but the title is a bit obscure. I allude to it in a very obtuse sort of way. Don't let a bad title get in the way of a good story, is what I told myself. The optional cue gave me some ideas but I went off in a completely different direction, as I tend to do.

Like last year's entry, this is not true sci-fi, more like Twilight Zone, which is appropriate given when and where I set the story. Personally, I think it's better than my last year's entry, but it's for others to judge.

Now some may think this a bit autobiographical, because I grew up in a country town in this era and I was a science nerd in high school. Also, we did have an eccentric science teacher who was really good with all kids, the bright ones and the ones who struggled. There were never any after-school lab experiments, but he did run extra classes for the lower-level kids, not the high achievers. I still think that was rather remarkable. He failed me in chemistry in my final year to get my head out of my arse, and it worked. But my fictional characters are all pure fiction. In my mind, they don’t resemble anyone I know in real life. Characters come into my head like melodies and lyrics come into the heads of songwriters. That’s my secret. Now you know.

Short stories need a twist in the tale, and this is no exception, except I didn't know what it was until I got there. In other words, I didn't know how it was going to end, and then it surprised me.

The formatting gets messed up, especially for dialogue, but I make the best of a bad situation. The submission manuscript is double-line spaced and it has proper formatting, with paragraph indentation, like you'd find in a novel. Below is my entry.



Davey lived alone with his mum, Irene to her friends; he had no memory of his father and he had no siblings. His mother never remarried. The favourite topic amongst his school friends was who was best: Elvis or the Beatles?

His best friend at school was Kevin; they were in Form 10. He secretly liked Penny, a girl in the year behind him, and on the rare occasions he had spoken to her, she was nice, but deliberately ignored him when her friends were around, so he avoided her.

 His favourite class was science. The teacher, Mr Robotham, always wore a white lab coat that was stained by experiments gone awry or possibly not; no one asked. He was thin and hawk nosed but was friendly and helpful, both to kids who were bright and kids who struggled.

Mr Robotham liked Davey, who was always asking extra-curricular questions, and he even lent him books, providing he told no one else. Mr Robotham sometimes allowed Davey to stay back after school and perform experiments, which he did most weeks, usually on Wednesdays when everyone else was playing sport, and occasionally Kevin would join him.

On this occasion, Davey had assembled a massive apparatus of tubes, beakers and stoppered flasks, with spaghetti-like hoses joining everything together. When he believed he had everything in order, he put one of the flasks, full of a yellowy liquid, on top of a Bunsen burner and started heating it up.
 Kevin looked a bit worried, ‘Do you know what you’re doing?’
‘Of course I do.’
‘So what are you making?’
He looked at Kevin with a wicked grin, ‘Let’s find out.’

Kevin watched the liquid boil and stepped back, while Davey put on a pair of safety glasses and watched to see if the liquid went up the tube as he hoped. Mr Robotham always made them wear safety glasses, no matter what they were doing in the lab, so it had become second nature.
Bang! The stopper in the flask went straight up and hit the ceiling and Davey found himself covered in the liquid.
‘Shit’, Kevin said.
Davey looked at his friend, whose eyes seemed to want to depart their sockets, and then down at his clothes covered in yellow goo. ‘Mum’s not going to be happy.’
Kevin couldn’t believe him. ‘Your Mum? Shit, what about Mr Robotham?’
‘I reckon he won’t be too happy either.’
As if to confirm his second-worst fears, Robotham came running into the lab. He must have heard the noise, Davey thought.
Robotham looked at Davey and put his hands on his shoulders, half-kneeling, ‘Are you alright?’
‘I’m fine. Sorry,’ he said in a small voice; he really wasn’t sure how Mr Robotham was going to react.
Robotham looked around at the aftermath, ‘Did you deliberately set out to make as much mess as possible?’
Davey looked up to him, ‘I’ll clean it up, Sir.’
But Mr Robotham surprised him, ‘No, you go home. Your mother is going to be so angry with me.’
Davey didn’t understand, ‘Why?’
‘Just go home,’ he looked at Kevin, who had been trying his best invisibility impersonation, ‘Both of you, before I change my mind.’

When he got home, his mother was so angry she didn’t say anything at first. But when she found her voice she surprised him, ‘My God, wait till I see Mr Robotham.’
‘It wasn’t his fault.’
‘Wasn’t it now? You go and run a bath. These clothes may be ruined for good.’
They ate their tea in silence and he wasn’t allowed to watch TV, so he went to bed in his room at the back of the house, next to hers. He found it hard to go to sleep.

At some point he woke up and found himself hovering above his bed; his sleeping form, on its back, below him. He could actually see himself breathing, yet he didn’t find it disconcerting; he found he was perfectly calm and he wondered if he had died.

Stranger still, he found he could move simply by will and he could travel through the wall into his mother’s room. He thought, I must be dreaming, so he wondered, in his scientifically minded way, if there was some way he could test that. He lowered himself towards the floor and looked at his mother’s alarm clock; the illuminated hands showed it was 20 past midnight. He thought of trying to wake his mother, but realised it would only scare her, so he went back through the wall to his own body and got very close to his face. He could see everything, all his pimples and the downy moustache that he hadn’t shaved when he’d had his bath. He could see his shoes on the floor, his cupboard; it didn’t feel like a dream, but he didn’t know what to do. Would he be able to return to his body? The idea of entering it by conscious will somehow seemed the wrong thing to do. He felt like he had a ghostly astral body, though he couldn’t see it, so he touched his own hand with the sense of his astral hand. His body shivered and his breathing stuttered and he realised that it was completely the wrong thing to do.

For the first time, he actually felt scared. What if I can’t return? He went back to his mother’s room and noticed that the clock now said 27 past, which seemed to confirm for him that it wasn’t a dream.
He wondered how far he could travel, so he literally went through the roof of his house and looked up to the stars above and down to the tree near their back fence. His mother had a vegetable garden and even some chooks in a yard, and he could see the back veranda and the backyard where the grass still grew. He entered the chook yard and some of them on their roosts seemed to wake as if they knew he was there but otherwise remained inert.

The stars were especially bright and he noticed that he could see everything in muted shades, yet more sharply delineated than he normally would. He noticed that he didn’t feel the cold or the air on his astral body, and it occurred to him that, since he could go through walls, he must exist in another dimension. He would normally be able to smell the dew on the grass, but he couldn’t. He realised that, for some reason, sight was his only sense; he couldn’t even hear anything. Again, his scientific mind came to his aid. He thought, I can interact with radiation but not with matter. He knew from his science classes that matter and light interacted but were quite different: one was made of atoms and the other was made of waves. Perhaps that’s what he was now: some astral waveform.
He travelled around above the town like he was some sort of night bird or a superhero. Some superhero, he thought, I can’t even touch anything.

He couldn’t resist the urge to visit the house of his friend, Kevin. He wondered if this ghostly manifestation was a consequence of his botched experiment and if so, did it affect Kevin? He entered Kevin’s house and observed all the appurtenances that he was familiar with: the kitchen table and chairs, the canisters on the shelf, the old white stove, matching fridge and stainless steel sink under the window with floral themed curtains.
It felt wrong to enter Kevin’s parents’ bedroom, but he had little compunction about visiting his friend’s. And there he was fast asleep, with his mouth open and Davey thought he was probably snoring only he couldn’t hear it.

He felt confident that Kevin wasn’t suffering the same disembodied state that he was, and rose back through the roof to survey the town. He now felt the urge to visit Penny’s house, even though it seemed wrong. On the other hand, he wanted her to be his friend and he told himself that she wouldn’t mind. He asked himself, Would I be able to tell her about it later? And he decided he could.
When he entered her bedroom she was sleeping on her side and he felt she looked so peaceful; he was glad he couldn’t wake her even if he wanted to. But it still felt awkward so he didn’t stay. Because it was a country town there was little movement and virtually no traffic until he saw the baker and the milkman getting ready to work. He knew then that dawn wouldn’t be that far off and he decided he needed to go home.

In the morning he had to watch with increasing anxiety as his mother tried to wake him and then became distraught. She called an ambulance and he followed his body to the hospital, where he was attached to various machines while doctors and nurses came and examined him. All the while his mother cycled between stoic patience, angrily berating the medical staff, and occasionally retreating to a toilet cubicle where she could cry without anyone seeing her.

Davey, in his extra-dimensional state, didn’t know what to do, but wished he could just return to his body and bring everything back to normal. Later in the day his friend Kevin turned up, and so did Mr Robotham, but his mother gave him a verbal barrage whose content Davey could only imagine, although he did lip-read some choice words that she usually reserved for newsreaders on the TV. Robotham thought it best to leave, though he was obviously very upset. Davey wished he could tell them both that it wasn’t their fault. He felt unbelievably guilty for all the anguish he had caused, even though he had no idea how he had done it, and wished, beyond everything else, that he could restore the balance.

Very late in the day, probably after school, he was surprised to see Penny arrive and he was even more surprised to see her cry. She said something to him which he couldn’t make out, but he was deeply moved. She left some flowers behind, with a card. On it, he read: Dear Davey, Please get well. You are a special friend. All my love, Penny.

Davey followed her out of the hospital and wished above everything else he could communicate with her. When he came up behind her, she seemed to turn her head as if she knew he was there, but kept walking, and he didn’t follow.

His mother stayed and refused to go home. The nurses brought her food in the evening, and when she laid down on seats in the waiting area, one of them put a blanket over her. Davey felt so sad and he went into the room where his body was, all hooked up to the machines, and decided it best to stay with it.

In the morning, Davey woke up to find himself in a hospital bed. Nurses and doctors came running when the machines told them he was awake and his mother came in, her face covered in tears.
He looked at his mother, ‘What’s wrong?’
She came up to the bed and hugged him and sobbed like there was no tomorrow. When she released him she said, ‘Oh Davey, you had us all so worried. We didn’t know what happened to you.’
Davey couldn’t remember anything from when he went to bed in his own house, which was, unbeknownst to him, two nights ago.
Back at school everyone treated him differently. He never did extra-curricular lab experiments again. And Penny suddenly became his newest best friend.

Wednesday 31 January 2018

Ursula K Le Guin - 21 October 1929 to 22 January 2018

I need to say something about Ursula Le Guin, as she was an inspiration to a generation of writers of fantasy and science fiction, including nonentities like yours truly and celebrated award-winning masters of their art like Neil Gaiman, who presented her with a Lifetime Achievement Award at the 2014 American National Book Awards.

Ursula Le Guin was something of an oddity in that she was a famously successful author in the fantasy and sci-fi genre when it was dominated by male authors, well before J.K. Rowling came on the scene.

The Left Hand of Darkness and The Dispossessed are possibly her best-known works, along with the Earthsea quartet, which is my personal favourite.

Below is the speech by Neil Gaiman, who describes her influence on his own writing, and Ursula's 'thank you' speech, where she laments the state of publishing and its corrosive effect on artistic freedom, as she sees it.

It is fair to say that she had an influence on my own writing, and perhaps I am lucky to have avoided the corporate publishing machine, if it has the influence over one's creative work that she implies.



I like this quote attributed to her:
It is good to have an end to journey towards; but it is the journey that matters in the end.

Wednesday 24 January 2018

Science, humanities and politics

I was reading recently in New Scientist (20 Jan., 2018) about the divide between humanities and science, which most of us don’t even think about. In an unrelated article in The Weekend Australian Review (6-7 Jan., 2018) there was a review of a biography by Walter Isaacson, Leonardo da Vinci, whose subject is arguably the greatest polymath known to Western civilisation, and who clearly straddled that divide with consummate ease. One suspects that said divide didn’t really exist in Leonardo’s day and one wonders what changed.

Specialisation is one answer, but it's not sufficient, I would suggest. When trying to think of a more modern example, Isaac Asimov came to mind, though, being a sci-fi writer myself, that's not surprising. As well as being a very prolific writer (more than 500 books), he was a professor of biochemistry at Boston University.

I’m no Asimov in either field, yet to some extent I believe I straddle this so-called divide without excelling in either science or arts. I can remember reading A Terrible Beauty by Peter Watson, an extraordinary history of the 20th Century that focused on ideas and the people who produced them, rather than the politics or the many conflicts that we tend to associate with that century. The reason I mention this outstanding and well-written tome is that I was struck by Watson’s ability to discuss art and science with equal erudition and acumen. Watson, from memory, was more of a journalist than a scholar, but this breadth of scholarship, for want of a better phrase, I thought a most unusual trait in the modern world.

As anyone who reads this blog has probably deduced, my primary ambition as a youth was to become a physicist. As someone who can look back over many decades of life, I’m not especially disappointed that I didn’t realise that ambition. My other youthful ambition was to become a writer of fiction and once again I’m not especially disappointed that I didn’t succeed. I can’t complain as I was able to make a decent living in the engineering and construction industry in a non-technical capacity. It allowed me to work on diverse projects and interact with very clever people on a professional level.

But this post is not about me, even though I’m trying to understand why I don’t perceive this divide (in quite the same way others do) that clearly delineates our society. We have technical people who make all the stuff we take for granted and then we have artistic people who make all the stuff that entertains us, which is so ubiquitous we tend to take that for granted as well. Of course, I haven’t mentioned the legions of sportspeople who become our heroes in whatever country we live in. They don’t fit into the categories of humanities and science yet they dominate our consciousness when they take to the field.

The other point that can’t be ignored is the politicisation of both the humanities and science in the modern world. Artists are often, but not always, associated with left-wing politics. People are often unaware that there is a genetic predisposition to our political inclinations. I’m unusual in my family for leaning to the left, but I’m also unusual in having artistic proclivities, which I inherited from my mother’s side. Artists have often been associated with a bohemian lifestyle, but also with being more open and tolerant of difference. One should remember that homosexuals have long been accepted in theatre in a way they weren’t in society at large, even when homosexuality was criminalised.

This is not to say that all artists are left wing, as they clearly aren’t, but it’s interesting that the left side of politics seems to be more generous towards the arts (at least in Australia) than their counterparts on the right. But politics doesn’t explain the divide between the humanities and science. Science has become politicised recently over the issue of climate change. According to the political right, climate change is a conspiracy and fraudulent propaganda by scientists to keep themselves in jobs. This came to a head in 2016 in Australia when, under the Turnbull Liberal government (still in office), a prominent, internationally respected climatologist at CSIRO (John Church) was sacked and his department eviscerated, on the excuse that the Paris Accord had found the answer to climate change and no more research was necessary – we needed solutions, not more research. It should be pointed out that, in the face of international outrage, the sackings were reduced from over 100 to more like 30, but John Church still lost his job. This, in spite of the fact that “CSIRO has long led the world in modelling Southern Hemisphere climate.” (Peter Boyer, Independent Australia, 20 May 2016).

What I like to point out is that the politicisation of climate change comes largely from non-scientists, not scientists. As far as most scientists are concerned, science itself is largely politically neutral. Now, I know many people will dispute this viewpoint, because science is generally seen as a tool to provide technological solutions for the benefit of society at large. One might qualify that by specifying Western society, though Asia is also adopting technologies at an accelerated rate. In other words, science is politically driven to the extent that politicians decide which technologies would benefit us most. And I agree that, as far as most politicians are concerned, science is simply a tool in the service of economics.

But my point is that, contrary to the polemic of right-wing politicians, climatologists are not all left-wing political conspirators. Scientists studying climate change could be of any political persuasion. As far as they are concerned, nature doesn’t have a political agenda; only humans do.

Consider another couple of examples where the politics comes from the opposite side yet is equally anti-science. Genetically engineered crops are demonised by many people on the political left, who conflate science and technology with corporate greed. Likewise, anti-vaccination activists are associated with the political left. What all these anti-science proponents have in common is their collective ignorance of science. They all see science as a conspiratorial propaganda machine, while never considering the role science has played in giving them the historically unprecedented lifestyle they take for granted.

I’ve never talked about my job (what I do for a living) on this blog before, but I’m going to because it’s relevant in an oblique way. I’m a project planner on generally very high-tech, complex manufacturing and infrastructure projects. There are two parts to my job: planning for the future (in the context of the project) and predicting the future. It should be obvious that you can’t do one without the other. I like to think I’m good at my job because, over a period of decades, I’ve become better at predicting the future in that particular context; it’s a combination of science and experience. Of course, my predictions are not always well received, but I’ve found that integrity is more valuable in the long term than acquiescence.

The relevance of this professional vanity to the subject at hand is that science is very good at predicting natural events, and this is the specific nature of the issue of climate change. The process of democracy, which we see as underpinning both our governments and our societies at large, effectively undermines scientific predictions when they are negative. Politicians know that it’s suicide at the polls to say anything negative, which is why they only do so when it’s already happened.

To return to the New Scientist article that initiated this meditation: it’s actually a review by Ben Collyer of Being Ecological by Timothy Morton (which I haven’t read). According to Morton, as related by Collyer, the divide between humanities and science is part of the problem, in that people are ignorant of the science that’s telling us the damage we are doing on a global scale. Collyer also reviews Our Oldest Task: Making Sense of Our Place in Nature by Eric T Freyfogle (not read by me either) and intermingles the two in his discussion. Basically, since the emergence of agriculture and the dominant religions, we have come to see ourselves as separate from nature. This is a point that Jeremy Lent also makes in The Patterning Instinct (which I have read).

We call this the Anthropocene era, and we are increasingly insulating ourselves from the natural world through technology, which I find a paradox. Why a paradox? Because technology is born out of science, and science, by definition and in practice, is the study of the natural world in all its manifestations. We are on an economically driven treadmill that relegates science to technological inventions whose prime purpose is to feed consumerism by promising us lives of unprecedented affluence. This is explored in two recent books: Homo Deus by Yuval Noah Harari (which I recently reviewed) and Utopia for Realists by Rutger Bregman (which I’ve read but not reviewed). I would argue that we are making the wrong types of sacrifices in order to secure our future. A future that ignores the rest of nature, or is premised on the unacknowledged belief that we are independent of nature, cannot be sustained indefinitely. The collapse of civilisations in the past is testament to this folly.