Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Monday, 18 May 2020

An android of the seminal android storyteller

I just read a very interesting true story about an android built in the early 2000s, based on the renowned sci-fi author Philip K Dick in both personality and physical appearance. It was displayed at a few prominent events in 2005, where it interacted with the public, then was lost on a flight between Dallas and Las Vegas in 2006 and has never been seen since. The book is called Lost in Transit: The Strange Story of the Philip K Dick Android by David F Duffy.

You have to read the back cover to know it’s non-fiction, published by Melbourne University Press in 2011; so, surprisingly, a local publication. I bought it from my local bookstore at a 30% discount as they were closing down for good. They had planned to close by Good Friday, but the COVID-19 pandemic forced them to shut a good 2 weeks earlier, and I acquired it at the 11th hour, looking for anything I might find interesting.

To quote the back cover:

David F Duffy was a postdoctoral fellow at the University of Memphis at the time the android was being developed... David completed a psychology degree with honours at the University of Newcastle [Australia] and a PhD in psychology at Macquarie University, before his fellowship at the University of Memphis, Tennessee. He returned to Australia in 2007 and lives in Canberra with his wife and son.

The book is written chronologically and is based on extensive interviews with the team of scientists involved, as well as Duffy’s own personal interaction with the android. He had an insider’s perspective as a cognitive psychologist who had access to members of the team while the project was active. Like everyone else involved, he is a bit of a sci-fi nerd with a particular affinity with, and knowledge of, the works of Philip K Dick.

My specific interest is in the technical development of the android and how its creators attempted to simulate human intelligence. As a cognitive psychologist, with professionally respected access to the team, Duffy is well placed to provide some esoteric knowledge to an interested bystander like myself.

There were effectively 2 people responsible (or 2 team leaders), David Hanson and Andrew Olney, who were brought together by Professor Art Graesser, head of the Institute of Intelligent Systems, a research lab in the psychology building at the University of Memphis (hence the connection with the author).

Hanson is actually an artist, and his specialty was building ‘heads’ with humanlike features and humanlike abilities to express facial emotions. His heads included mini-motors that pulled on a ‘skin’, which could mimic a range of facial movements, including talking.

Olney developed the ‘brains’ of the android, which actually resided on a laptop connected by wires to the back of the android’s head. Hanson’s objective was to make an android head that was so humanlike that people would interact with it on an emotional and intellectual level. For him, the goal was to achieve ‘empathy’. He had made at least 2 heads before the Philip K Dick project.

Even though the project got the ‘blessing’ of Dick’s daughters, Laura and Isa, and access to an inordinate amount of material, including transcripts of extensive interviews, they had mixed feelings about the end result, and, tellingly, they were ‘relieved’ when the head disappeared. It suggests that it’s not the way they wanted him to be remembered.

In a chapter called Life Inside a Laptop, Duffy gives a potted history of AI, specifically in relation to the Turing test, which challenges someone to distinguish an AI from a human. He also explains the 3 levels of processing that were used to create the android’s ‘brain’. The first level was what Olney called ‘canned’ answers, which were pre-recorded answers to obvious questions and interactions, like ‘Hi’, ‘What’s your name?’, ‘What are you?’ and so on. The second level was ‘Latent Semantic Analysis’ (LSA), which was originally developed in a lab in Colorado with close ties to Graesser’s lab in Memphis, and was the basis of Graesser’s pet project, ‘AutoTutor’, with Olney as its ‘chief programmer’. AutoTutor was an AI designed to answer technical questions as a ‘tutor’ for students in subjects like physics.

To create the Philip K Dick database, Olney downloaded Dick’s entire opus, plus a vast collection of transcribed interviews from later in his life. The author conjectures that ‘There is probably more dialogue in print of interviews with Philip K Dick than any other person, alive or dead.’

The third layer ‘broke the input (the interlocutor’s side of the dialogue) into sections and looked for fragments in the dialogue database that seemed relevant’ (to paraphrase Duffy). Duffy gives a cursory explanation of how LSA works – a mathematical matrix using vector algebra – that’s probably a little too esoteric for the content of this post.
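
Out of interest, the matching principle Duffy glosses over can be sketched in a few lines of code. This is a toy illustration of my own (hypothetical corpus and names, not the actual Dick database or AutoTutor code): build a term-document matrix, reduce it with a truncated SVD, then match new input to the nearest stored fragment in the reduced ‘semantic’ space, where co-occurrence patterns rather than exact words drive similarity.

```python
import numpy as np

# Toy stand-in for the dialogue database (my invention, not Duffy's data).
docs = [
    "reality is that which when you stop believing in it does not go away",
    "the android dreams of electric sheep",
    "what is reality and what is illusion",
    "the sheep graze in the field",
]

# Term-document count matrix.
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        A[index[w], j] += 1

# Truncated SVD: keep k latent dimensions (the 'semantic' space).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # each row: a document in latent space

def to_latent(text):
    """Fold a new utterance into the same latent space."""
    q = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            q[index[w]] += 1
    return q @ U[:, :k]

def best_match(text):
    """Index of the stored fragment most similar to the input."""
    q = to_latent(text)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-12)
    return int(np.argmax(sims))

print(best_match("tell me about reality"))  # picks a 'reality' fragment, not a 'sheep' one
```

The android’s third layer worked on this principle at a vastly larger scale, which is also why it could drift: a nearest fragment is always returned, relevant or not.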

In practice, this search-and-synthesise approach could create a self-referencing loop, where the android would endlessly riff on a subject, going off on tangents, sounding cogent but never stopping. To overcome this, Olney developed a ‘kill switch’ that cleared the ‘buffer’ he could see building up on his laptop. At one display at Comic-Con (July 2005), as part of the promotion for A Scanner Darkly (a rotoscope movie by Richard Linklater, starring Keanu Reeves), Hanson had to present the android without Olney, and he couldn’t get the kill switch to work; so Hanson stopped the audio, with the mouth still working, and asked for the next question. The android simply continued with its monolithic monologue, which had no relevance to any question at all. I think it was its last public appearance before it was lost. Dick’s daughters, Laura and Isa, were in the audience and they were not impressed.

It’s a very informative and insightful book, presented like a documentary without video, capturing a very quirky, unique and intellectually curious project. There is a lot of discussion about whether we can produce an AI that can truly mimic human intelligence. For me, the pertinent word in that phrase is ‘mimic’, because I believe that’s the best we can do, as opposed to having an AI that actually ‘thinks’ like a human. 

In many parts of the book, Duffy compares what Graesser’s team is trying to do with LSA with how we learn language as children, where we create a memory store of words, phrases and stock responses, based on our interaction with others and the world at large. It’s a personal prejudice of mine, but I think that words and phrases have a ‘meaning’ to us that an AI can never capture.

I’ve contended before that language for humans is like ‘software’, in that it is ‘downloaded’ from generation to generation. I believe this is unique to the human species and that it goes further than communication, which is its obvious genesis. It’s what we literally think in. The human brain can connect and manipulate concepts in all sorts of contexts that go far beyond the simple need to tell someone what you want them to do in a given situation, or to ask what they did with their time the day before or last year or whenever. We can relate concepts that have a spiritual connection, or are mathematical, or are stories. In other words, we can converse on topics that relate not just to physical objects, but are products of pure imagination.

Any android follows a set of algorithms designed to respond to human-generated dialogue but, despite appearances, the android has no idea what it’s talking about. Some of the sample dialogue that Duffy presented in his book drifted into gibberish as far as I could tell, and that didn’t surprise me.

I’ve explored the idea of a very advanced AI in my own fiction, where ‘he’ became a prominent character in the narrative. But he (yes, I gave him a gender) was often restrained by rules. He can converse on virtually any topic because he has a Google-like database and he makes logical sense of someone’s vocalisations. If they are not logical, he’s quick to point it out. I play cognitive games with him and his main interlocutor because they have a symbiotic relationship. They spend so much time together that they develop a psychological interdependence that’s central to the narrative. It’s fiction, but even in my fiction I see a subtle difference: he thinks and talks so well, he almost passes for human, but he is a piece of software that can make logical deductions based on inputs and past experiences. Of course, we do that as well, and we do it so well it separates us from other species. But we also have empathy, not only with other humans, but other species. Even in my fiction, the AI doesn’t display empathy, though he’s been programmed to be ‘loyal’.

Duffy also talks about the ‘uncanny valley’, which I’ve discussed before. Apparently, Hanson believed it was a ‘myth’ and that there was no scientific data to support it. Duffy appears to agree. But according to a New Scientist article I read in Jan 2013 (by Joe Kloc, a New York correspondent), MRI studies tell another story. Neuroscientists believe the effect is real and is caused by a cognitive dissonance between 3 types of empathy: cognitive, motor and emotional. Apparently, it’s emotional empathy that breaks the spell of suspended disbelief.

Hanson claims that he never saw evidence of the ‘uncanny valley’ with any of his androids. On YouTube you can watch a celebrity android called Sophia, and I didn’t see any evidence of the phenomenon with her either. But I think the reason is that none of these androids appear human enough to evoke the response. The uncanny valley is a sense of unease and disassociation we would feel because it’s unnatural; similar to seeing a ghost – a human in all respects except actually being flesh and blood.

I expect that, as androids like the Philip K Dick simulation and Sophia become more commonplace, the sense of ‘unnaturalness’ would dissipate – a natural consequence of habituation. Androids in movies don’t have this effect, but then a story is a medium of suspended disbelief already.

Sunday, 10 May 2020

Logic, analysis and creativity

I’ve talked before about the apparent divide between arts and humanities, and science and technology. Someone once called me a polymath, but I don’t think I’m expert enough in any field to qualify. However, I will admit that, for most of my life, I’ve had a foot in both camps, to use a well-worn metaphor. At the risk of being self-indulgent, I’m going to discuss this dichotomy in reference to my own experiences.

I’ve worked in the engineering/construction industry most of my adult life, yet I have no technical expertise there either. Mostly, I worked as a planning and cost control engineer, which is a niche activity that I found I was good at. It also meant I got to work with accountants and lawyers as well as engineers of all disciplines, along with architects. 

The reason I bring this up is because planning is all about logic – in fact, that’s really all it is. At its most basic, it’s a series of steps, some of which are sequential and some in parallel. I started doing this before computers did a lot of the work for you. But even with computers, you have to provide the logic; so if you can’t do that, you can’t do professional planning. I make that distinction because it was literally my profession.
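
The sequential-and-parallel logic described above is, at bottom, a dependency graph, and the core calculation any planning tool performs is a forward pass through it. Here is a minimal sketch with hypothetical tasks and durations (my own toy example, not from any real project or scheduling software):

```python
# Hypothetical mini-schedule: task -> (duration in days, prerequisites).
tasks = {
    "design":     (10, []),
    "procure":    (15, ["design"]),            # runs in parallel with civil works
    "civil":      (20, ["design"]),
    "install":    (12, ["procure", "civil"]),  # waits for both parallel paths
    "commission": (5,  ["install"]),
}

def earliest_finish(tasks):
    """Forward pass: earliest finish day of each task, given its prerequisites."""
    finish = {}
    def ef(name):
        if name not in finish:
            duration, preds = tasks[name]
            finish[name] = duration + max((ef(p) for p in preds), default=0)
        return finish[name]
    for t in tasks:
        ef(t)
    return finish

print(earliest_finish(tasks)["commission"])  # 47: total duration along the critical path
```

The ‘civil’ branch (30 days) beats the ‘procure’ branch (25 days) to determine when installation can start; that longest chain is the critical path, and supplying that logic is the part no computer does for you.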

In my leisure time, I write stories and that also requires a certain amount of planning, and I’ve found there are similarities, especially when you have multiple plot lines that interweave throughout the story. For me, plotting is the hardest part of storytelling; it’s a sequential process of solving puzzles. And science is also about solving puzzles, all of which are beyond my abilities, yet I love to try and understand them, especially the ones that defy our intuitive sense of logic. But science is on a different level to both my professional activities and my storytelling. I dabble at the fringes, taking ideas from people much cleverer than me and creating a philosophical pastiche.

Someone on Quora (a student) commented once that studying physics exercised his analytical skills, which he then adapted to other areas of his life. It occurred to me that I have an analytical mind and that is why I took an interest in physics rather than the other way round. Certainly, my work required an analytical approach and I believe I also take an analytical approach to philosophy. In fact, I’ve argued previously that analysis is what separates philosophy from dogma. Anyway, I don’t think it’s unusual for us, as individuals, to take a skill set from one activity and apply it to another apparently unrelated one.

I wrote a post once about the 3 essential writing skills: character development, evoking emotion and creating narrative tension. The key to all of these is character and, if one were to distil out the most essential skill of all, it would be writing believable dialogue, as if it were spontaneous, meaning unpremeditated, yet not boring or irrelevant to the story. I’m not at all sure it can be taught. Don Burrows once said that jazz can’t be taught, because it’s improvisation by its very nature, and I’d argue the same applies to writing dialogue. I’ve always felt that writing fiction has more in common with musical composition than writing non-fiction. In both cases they can come unbidden into one’s mind, sometimes when one is asleep, and they’re both essentially emotive mediums.

But science too has its moments of creativity, indeed sheer genius, being a combination of sometimes arduous analysis and inspired intuition.

Wednesday, 8 April 2020

Secret heroes

A writer can get attached to characters, and it tends to sneak up on one (speaking for myself); they are not necessarily the characters you expect to affect you.

All writers who get past the ego phase will tell you the characters feel like they exist separately from them. By the ego phase, I mean you’ve learned how to keep yourself out of the story, though you may suffer lapses – the best fiction is definitely not about you.

People will tell you to use your own experience as the basis for characters and events, or otherwise to base characters on people you know. I expect some writers might do that, and I’ve even seen advice, if writing a screenplay, to imagine an actor you’ve seen playing the role. If I find myself doing that, then I know I’ve lost the plot, literally rather than figuratively.

I borrow names from people I’ve known, but the characters don’t resemble them at all, except in ethnicity. For example, if I have an Indian character, I will use the name of an Indian I knew. A name is not unique – we all know more than one John, for example – and the people who share it may have nothing in common.

I worked with someone once who had a very unusual name, Essayas Alfa, and I used both his names in the same story. Neither character was anything like the guy I knew, except that the character called Essayas was African, as was my co-worker; but one was a sociopath and the other was a really nice bloke. A lot of names I make up, including all the Kiri names, and even Elvene. I was surprised to learn it was a real name; at least, I got the gender right.

The first female character I ever created, when I was learning my craft, was based on someone I knew, though they had little in common except their age. It was like I was using her as an actor for the role. I’ve never done that since. A lot of my main characters are female, which is a bit of a gamble, I admit. Creating Elvene was liberating and I’ve never looked back.

If you have dreams occupied by strangers, then characters in fiction are no different. One can’t explain it to someone who hasn’t experienced it. So how can you get attached to a character who is a figment of your mind? Well, not necessarily in the way you think – it’s not an infatuation. I can’t imagine falling in love with a character I created, though I can imagine falling in love with an actor playing that character, because she’s no longer mine (assuming the character is female).

And I’ve got attached to male characters as well. These are the characters who have surprised me. They’ve risen above themselves, achieved something I never expected of them. They weren’t meant to be the hero of the story, yet they excel themselves, often by making a sacrifice. They go outside their comfort zone, as we like to say, and become courageous, not by overcoming an adversary but by overcoming a fear. And then I feel like I owe them, as illogical as that sounds, because of what I put them through. They are the secret heroes of my stories.


Tuesday, 31 March 2020

Plato’s 2400 year legacy

I’ve said this before, but it’s worth repeating: no one totally agrees with everything said by someone else. In fact, each of us changes our views as we learn and progress and become exposed to new ideas. It’s okay to cherry-pick; in fact, it’s normal. All the giants in science and mathematics and literature and philosophy borrowed from and built on the giants who went before them.

I’ve been reading about Plato in A.C. Grayling’s extensive chapter on him and his monumental status in Western philosophy (The History of Philosophy). According to Grayling, Plato was critical of his own ideas; his later writings challenged some of the tenets of his earlier writings. Plato is a seminal figure in Western culture; his famous Academy ran for almost 800 years, before the Christian Roman Emperor, Justinian, closed it down in 529 CE, because he considered it pagan. One must remember that it was during the Roman rule of Alexandria, in 415, that Hypatia was killed by a Christian mob, which many believe foreshadowed the so-called ‘Dark Ages’.

Hypatia had good relations with the Roman Prefect of her time, and even corresponded with a Bishop (Synesius of Cyrene), who, as her former student, clearly respected, even adored, her. I’ve read transcripts of some of his letters, courtesy of Michael Deakin’s scholarly biography. Deakin is an Honorary Research Fellow at the School of Mathematical Sciences of Monash University (Melbourne, Australia). Hypatia also taught a Neo-Platonist philosophy, including the works of Euclid, a former Librarian of Alexandria. On the other hand, the Bishop who is historically held responsible for her death (Cyril) was canonised. It’s generally believed that her death was a ‘surrogate’ attack on the Prefect.

Returning to my theme, the Academy of course changed and evolved under various leaders, which led to what’s called Neoplatonism. It’s worth noting that Augustine was influenced by Neoplatonism as well as Aquinas, because Plato’s perfect world of ‘forms’ and his belief in an immaterial soul lend themselves to Christian concepts of Heaven and life after death.

But I would argue that the unique Western tradition that combines science, mathematics and epistemology into a unifying discipline called physics has its origins in Plato’s Academy. It was a prerequisite, specified by Plato, that people entering the Academy have a knowledge of mathematics. The one remnant of Plato’s philosophy that stubbornly resists being relegated to history as an anachronism is mathematical Platonism, though it probably means something different to Plato’s original concept of ‘forms’.

In modern parlance, mathematical Platonism means that mathematics has an existence independent of the human mind, and even of the Universe. To quote Richard Feynman (who wasn’t a Platonist) from his book The Character of Physical Law, in the chapter titled The Relation of Mathematics to Physics:

...what turns out to be true is that the more we investigate, the more laws we find, and the deeper we penetrate nature, the more this disease persists. Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics... Why? I have not the slightest idea. It is only my purpose to tell you about this fact.

The ’disease’ he’s referring to and the ‘fact’ he can’t explain is best expressed in his own words:

The strange thing about physics is that for the fundamental laws we still need mathematics.

To put this into context, he argues that when you describe a physical phenomenon mathematically, like the collision between billiard balls, the fundaments are not numbers or formulae but the actual billiard balls themselves (my mundane example, not his). But when it comes to the fundaments of fundamental laws, like the wave function in Schrödinger’s equation (again, my example), the fundaments remain mathematical and not physical objects per se.

In his conclusion, towards the end of a lengthy chapter, he says:

Physicists cannot make a conversation in any other language. If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in. She offers her information only in one form.

I’m not aware of any physicist who would disagree with that last statement, but there is strong disagreement whether mathematical language is simply the only language to describe nature, or it’s somehow intrinsic to nature. Mathematical Platonism is unequivocally the latter.

Grayling’s account of Plato says almost nothing about the mathematical and science aspect of his legacy. On the other hand, he contends that Plato formulated and attempted to address three pertinent questions:

What is the right kind of life, and the best kind of society? What is knowledge and how do we get it? What is the fundamental nature of reality?

In the next paragraph he puts these questions into perspective for Western culture.

Almost the whole of philosophy consists in approaches to the related set of questions addressed by Plato.

Grayling argues that the questions need to be addressed in reverse order. To some extent, I’ve already addressed the last two. Knowledge of the natural world has become increasingly dependent on a knowledge of mathematics. Grayling doesn’t mention that Plato based his Academy on Pythagoras’s quadrivium of arithmetic, geometry, astronomy and music, after deliberately seeking out Pythagoras’s best student, Archytas of Tarentum. Pythagoras is remembered for contending that ‘all is number’, though his ideas were more religiously motivated than scientific.

But the first question is the one that was taken up by subsequent philosophers, including his most famous student, Aristotle, who arguably had a greater and longer lasting influence on Western thought than his teacher. But Aristotle is a whole other chapter in Grayling’s book, as you’d expect, so I’ll stick to Plato. 

Plato argued for an ‘aristocracy’ government run by a ‘philosopher-king’, but based on a meritocracy rather than hereditary rulership. In fact, if one goes into details, he effectively argued for leadership on a eugenics basis, where prospective leaders were selected from early childhood and educated to rule.

Plato was famously critical of democracy (in his time) because it was instrumental in the execution of his friend and mentor, Socrates. Plato predicted that democracy leads to either anarchy or the rule of the wealthy over the poor. In the case of anarchy, a strongman would logically take over and you’d have ‘tyranny’, which is the worst form of government (according to Plato). The former (anarchy) is what we’ve recently witnessed in the so-called ‘Arab spring’ uprisings.

The latter (rule by the wealthy) is what has arguably occurred in America, where lobbying by corporate interests increasingly shapes policies. This is happening in other ‘democracies’, including Australia. To give an example, our so-called ‘water policy’ is driven by prioritising the sale of ‘water rights’ to overseas investors over ecological and community needs; despite Australia being the driest continent in the world (after Antarctica). Keeping people employed is the mantra of all parties. In other words, as long as the populace is gainfully employed, earning money and servicing the economy, policy deliberations don’t need to take them into account.

As Clive James once pointed out, democracy is the exception, not the norm. Democracies in the modern world have evolved from a feudalistic model, predominant in Europe up to the industrial revolution, when social engineering ideologies like fascism and communism took over from monarchism. It arguably took 2 world wars before we gave up traditional colonial exploitation, and now we have exploitation of a different kind, which is run by corporations rather than nations. 

I acknowledge that democracy is the best model for government that we have, but those of us lucky enough to live in one tend to take it for granted. In Athens, the original democracy (in Plato’s time) was only open to males and excluded slaves, and there was a broad separation between the aristocracy and the people who provided all the goods and services, including the army. One can see parallels in today’s world, where the aristocracy has been replaced by corporate leaders, and the interdependence and political friction between these broad categories remain. In the Athenian Assembly (according to historian Philip Matyszak), if you weren’t an expert in the field you pontificated on, like shipbuilding (his example), you were generally given short shrift.

I sometimes think that this is the missing link in today’s governance, which has been further eroded by social media. There are experts in today’s world on topics like climate change and species extinction and water conservation (to provide a parochial example) but they are often ignored or sidelined or censored. As recently as a couple of decades ago, scientists at CSIRO (Australia’s internationally renowned, scientific research organisation) were censored from talking about climate change, because they were bound by their conditions of employment not to publicly comment on political issues. And climate change was deemed a political issue, not a scientific one, by the then Cabinet, who were predominantly climate change deniers (including the incumbent PM).

In contrast, the recent bush fire crisis and the current COVID-19 crisis have seen government bodies, at both the Federal and State level, defer to expertise in their relevant fields. To return to my opening paragraph, I think we can cherry-pick some of Plato’s ideas in the context of a modern democracy. I would like to see governments focus more on expertise and long-term planning beyond a government’s term in office. We can’t have ‘philosopher kings’, but we do have ‘elite’ research institutions that can work with private industries in creating more eco-friendly policies that aren’t necessarily governed by the sole criterion of increasing GDP in the short term. I would like to see more bipartisanship rather than a reflex opposition to every idea that is proposed, irrespective of its merits.

Wednesday, 4 March 2020

Freeman Dyson: 15 December 1923 – 28 February 2020

I only learned of Dyson's passing yesterday, quite by accident. I didn't hear about it through any news service.

In this video, Dyson describes the moment on a Greyhound bus in 1948 when he was struck by lightning (to use a suitably vivid metaphor), an insight that eventually gave rise to a Nobel prize in physics for Feynman, Schwinger and Tomonaga, but not himself.

It was the unification of quantum mechanics (QM) with Einstein's special theory of relativity. Unification with the general theory of relativity (GR) still eludes us, and Dyson heretically argues that it may never happen (in another video). Dyson's other significant contribution to physics was to prove (along with Andrew Lenard, in 1967) how the Pauli Exclusion Principle stops you from sinking into everything you touch.

I learned only a year or so ago that Dyson believes that QM is distinct from classical physics, contrary to accepted wisdom – a viewpoint I've long held myself. What's more, Dyson argues that QM can only describe the future, while classical physics describes the past – another view I thought I held alone. In his own words:

What really happens is that the quantum-mechanical description of an event ceases to be meaningful as the observer changes the point of reference from before the event to after it. We do not need a human observer to make quantum mechanics work. All we need is a point of reference, to separate past from future, to separate what has happened from what may happen, to separate facts from probabilities.

Addendum: I came across this excellent obituary in the New York Times.

Monday, 24 February 2020

Is Kant relevant to the modern world?

I recently wrote a comment on Quora that addresses this very question, but I need to backtrack a couple of decades. When I studied philosophy, I wrote an essay on Kant, around the same time I wrote my essay on Christianity and Buddhism.

Not so long ago (over Christmas) I read AC Grayling’s The History of Philosophy, which, at 580+ pages, is pretty extensive and even includes brief discussions of Hindu, Chinese, Islamic and sub-Saharan African philosophy. Any treatise you read on the history of Western philosophy will include Kant as one of the giants of the discipline. Grayling’s book, in particular, provides both historical and contextual perspectives. According to Grayling, Kant brought together the two ‘opposing’ branches of the philosophy of his time: empiricism and idealism.

I’ve read the Critique of Pure Reason (in English, obviously) and it’s as obscure in places as Kant’s reputation suggests. Someone once claimed that Kant’s lectures were very popular and a lot less intimidating than his texts. If that is true, then one regrets that he didn’t live in the age of YouTube. But his texts, and subsequent commentaries on them, are all we have, including the one you’re about to read. I will include the original bibliography, as I did with my other ‘academic’ essay.

The essay was titled: What is transcendental idealism?


Kant, I believe, made two major contributions to philosophy: that there is a limit to what we can know; and that there is a difference between what we perceive and ‘things-in-themselves’. These two ideas are naturally related, but they are not synonymous. Transcendental idealism arose out of Kant’s attempt to incorporate these ideas into an overall philosophy of knowledge, or epistemology. Kant is extremely difficult to follow, and this is not helped by the fact that many of the essays written on Kant are just as abstruse and difficult to understand as Kant himself. However, there are parts of Kant’s Critique that are relatively plain and easy to follow. It is my intention to start with these aspects and work towards an exposition on transcendental idealism.

I think it is important to note that our understanding in science and psychology has increased considerably since Kant’s time, and this must influence any modern analysis of his epistemology. For example, in Kant’s time it was Newton’s physics that provided the paradigm for empirical knowledge, and therefore a deterministic universe seemed inevitable. With the discovery of quantum mechanics and chaos theory, this is no longer the case, and Kant’s third ‘antinomy’, on ‘freedom’, does not have the same relevance as it did in his time. A contemporary analogy might be materialism as the current paradigm for consciousness, because current theories are based on our knowledge of genetics, biochemistry and neuroscience, and the limitations of that knowledge. It is quite possible that future developments may overturn materialism as a paradigm, because our knowledge of consciousness today is arguably no greater than our knowledge of physics was in Newton’s time.

In view of what we’ve learnt since Kant’s time, it seems to me that he had a remarkable, indeed almost prophetic, insight, yet I cannot help but also believe that his philosophy contains a fundamental flaw. The flaw is his insistence that space and time are purely psychological phenomena, or, in Kant’s own terms, that space and time are a priori ‘forms’ of the mind: ‘But this space and this time, and with them all appearances, are not in themselves things; they are nothing but representations and cannot exist outside our minds.’ One of my objectives, therefore, is to reconcile this flaw with the aspects of his philosophy that I find sound. Ironically, I believe that time and space give us the best insight into understanding Kant’s transcendental idealism, though not in a manner that he could have foreseen.

A philosophy of knowledge naturally includes knowledge acquisition, and for Kant this required an analysis of human cognitive abilities. I believe this is a good place to start in understanding Kant. Kant realised that there are two aspects to knowledge acquisition in humans: what we gain directly through our senses, or ‘sensibilities’, and what we ‘synthesise’ into concepts through ‘pure understanding’. Kant realised that this synthesis is, in effect, consciousness. Kant explains how concepts can go beyond experience, which is what he calls pure understanding. This, in effect, is transcendental idealism, which is speculative, as opposed to empirical realism, which is based on experience. Another perspective on this is that most animals, we assume, can synthesise knowledge at the level of the sensibilities, otherwise they could not interact with their environment, whereas humans can synthesise knowledge at another level altogether, which I believe is Kant’s transcendental level. Note that in his reference to the transcendental, Kant is not talking about metaphysical knowledge but about knowledge of the object-in-itself, a concept I will return to later.

Whether Kant realised it or not, this synthesis of concepts is also the way we remember things in the long term - that is, through the association of concepts. I’m talking about knowledge-type (declarative) memory rather than physiological-type (procedural) memory, which allows us to remember how to do tasks, like driving a car or playing a musical instrument. These are different types of memory, dependent on different physiological mechanisms within the brain. The point is that this synthesising of concepts is a memory function as well as a means of understanding: it is virtually impossible to remember new knowledge unless we synthesise it into existing knowledge.

Both in the Study Guide and in Allison’s essay on The Thing in Itself, the perception of colour is used as an example of knowledge gained through the senses, and in the Study Guide it is contrasted with space and time, which according to Kant are a priori knowledge, and therefore independent of experience. This leads to the problem I have with Kant, because space and time are also sensed by us, despite Kant’s objection that space and time are not entities. It should be pointed out that colour is a purely psychological phenomenon. In other words, colour, unlike space and time, does not exist outside the mind. In fact, colour is probably the best example for explaining the difference between what we perceive (our ‘representations’) and ‘things-in-themselves’. Colour as it is-in-itself is electromagnetic radiation of a particular wavelength, as are radar, radio waves and cosmic rays. It is believed that some animals can see ultraviolet light, so that for them ultraviolet is a colour. Colour best explains Kant’s philosophical point that appearances, or representations, are not the same as the phenomenon as it exists-in-itself.

So colour only exists in the mind, as the result of sensory perception, as Kant himself explained. It is not that appearances or representations of objects as perceived are different entities from what exists in the real world, but that we are only aware of, or can only sense, specific attributes of those objects. This is an important point that is not often made explicit.

So in what respect are space and time different? They are different because they are the manifold in which the universe exists - without space and time there would be no universe, no physical universe anyway; no universe that we could perceive in an empirical sense, and therefore no empirical realism. According to Kant, however, space and time are a priori ‘forms’ that we impose on the universe. There are many aspects to this issue, so let’s start with sensory perception. In regard to space, we have a sense in addition to the familiar five, called proprioception, described by Sherrington in the 1890s. This is the sense that tells us where every part of our body is in space. Oliver Sacks, in his book The Man Who Mistook His Wife for a Hat, describes a case he called ‘The Disembodied Lady’: a woman who lost this sense completely overnight. She was literally like a rag doll and had to relearn even the simplest motor tasks, like sitting. But of course we also sense space with our eyes, and all animals that depend on their dynamic abilities - from insects to birds to mammals - have this ability. Bats and dolphins, of course, sense space by echolocation.

As for time, we have two means of sensing it. The most obvious is memory. Again, Sacks describes a case, which in his book he called ‘The Lost Mariner’, of a man suffering from anterograde amnesia. Sacks met the man in 1975; although he displayed above-average intelligence, the man could form no new memories. In fact, he was permanently stuck in 1945, when he had left the US Navy after the War. This is like being colour-blind, or deaf beyond a certain frequency. The other sense of time is through our eyes, which capture images at a specific rate; without this ability we would not be able to detect motion. All photographs, to use an analogy, need time, no matter how small an increment, in order to be realised at all. Different animals capture these images at different rates, so they quite literally live at different speeds: birds and many insects see the world in slow motion compared to us, whereas other animals, like snails and sloths, see it much faster. Sometimes, in the event of trauma, like a car accident or an explosion, our internal clock momentarily changes its rate and we see things as if we were watching a slow-motion film.

We sense space and time in the same way we sense colours, sounds and smells. In fact, our ability to sense space and time is a matter of life and death - just take a drive in traffic. The idea that we impose space and time on the universe is absurd, unless one believes in solipsism, which apparently Kant did not. For Kant, time and space are a priori knowledge that is ‘given’. Our mind has an inbuilt sense of time and space, yes, but it is a necessary sense, no different from our other senses, that allows us to interact with a world that exists in time and space. This is the distinction I make with Kant. The reason we have a sense of space and time is so that the world inside our heads can match the world outside our heads; otherwise we could not do anything - we could not even walk out our front doors. To argue otherwise, in my opinion, is disingenuous.

This contention on my part has consequences for Kant’s philosophy. According to the Encyclopaedia Britannica, Kant’s ‘Copernican revolution of philosophy’ is: ‘...the assumption not that man’s knowledge must conform to objects but that objects must conform to man’s apparatus of knowing.’ I would turn this argument on its head, because it is my belief that the human mind is a mirror of the physical world, and not the other way round. Michio Kaku and Jennifer Thompson, in their book Beyond Einstein, describe the hypothetical experience of meeting someone from a higher-dimensional universe. They explain that if we lived in a 2-dimensional space, 3 dimensions would be incomprehensible to us; likewise, if we lived in a higher-dimensional universe, we would think in those higher dimensions. This is why we cannot create a higher-dimensional universe in our imaginations, though we can express one mathematically. This, I believe, also gives us an insight into transcendental idealism, but I will return to this point later.
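The point about mathematical expressibility can be made concrete with a standard illustration (my own example, not taken from Kaku and Thompson): Pythagoras’ distance formula generalises to any number of dimensions, even though we cannot visualise the spaces it then describes.

```latex
% Distance between two points x and y in n-dimensional Euclidean space.
% For n = 2 this is the familiar Pythagorean theorem; for n > 3 the
% formula remains perfectly well defined, though unvisualisable.
d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}
```

Nothing in the formula privileges n = 3; the restriction to three spatial dimensions comes from our perception, not from the mathematics.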

In terms of our sensibilities, Kant is correct: our ability to perceive is limited by the cognitive powers of the human mind. We cannot see colours outside a certain range of wavelengths of light, or hear sounds outside a specific range of frequencies. But Kant goes further than this: he realised that our cognitive ability to understand the things-in-themselves is also limited. Kant quite correctly realised that there is a trap, or an illusion: we often take concepts that we synthesise through our reasoning ability to be derived from experience when they are not. We have these ideas in our heads which we believe match reality, but in truth we only think we understand reality, and the thing-in-itself escapes us. This is the kernel at the heart of Kant’s philosophy which is worth preserving. Our knowledge acquisition is in fact an interaction between experience (the empirical) and theory (the transcendental). Kant himself showed an insight into this interaction in A95, when he refers to the synthesis of ‘sense, imagination and apperception’:

‘All these faculties have a transcendental (as well as an empirical) employment which concerns the form alone, and is possible a priori.’ By ‘a priori’ and ‘form’, Kant is of course referring to space and time, but he is also referring to mathematical forms, as he explains on the next page, in B128. There is, then, this relationship between transcendental idealism and empirical realism; a relationship that is mediated principally through mathematics.

But there is another aspect of our knowledge acquisition that Kant never touched on, and it relates to the thing-in-itself. We have discovered that nature takes on completely different realities at different levels, which means that the thing-in-itself is almost indefinable as a single entity. To describe something, we have to conceptually isolate it in our minds. For example, the human body is a single entity made up of trillions of other entities called cells. It is virtually impossible to conceptualise these two levels of entities simultaneously. But the human mind has a unique ability: we can create concepts within concepts, like words within sentences, or formulas within mathematical equations, or notes within music, and realise that all these things take on different meanings at different levels. So the human mind is uniquely placed to understand the universe in which we live, because the universe also takes on different meanings at different levels.

This is even true of the number of dimensions of the universe. Michio Kaku, whom I referred to earlier, informs us that according to M-theory the universe may well exist at one level in 11 dimensions, yet at our level of everyday existence we can only perceive the 3 dimensions of space and the 1 dimension of time. This, for me, is the irony of Kant’s philosophy. Relativity theory and quantum mechanics suggest that space and time are not how we perceive them to be, which makes Kant’s concept of the thing-in-itself quite a prophetic insight. However, Kant would never have conceived that space and time could exist as things-in-themselves at all, because for Kant space and time are not entities. He is right in that they are not entities in the way objects are, but they are the absolutely essential components for the universe to exist at all.

Some people would argue that space and time are no more than mathematical entities, because that is the only way we can express space and time, as opposed to how we experience them. From this it could be argued that, by using mathematics, we are imposing our sense of space and time on the universe, irrespective of all the arguments I have already made concerning how we are able to sense them. But what I find significant is that mathematical laws are not man-made, and that nature would obey them even if we weren’t here to express them. So I would argue that transcendental idealism is mathematics, even though I’m not at all sure Kant would concur. I think Pythagoras showed remarkable insight when he claimed that all things are numbers, even though he was speaking from a religious perspective. But metaphysics aside, Pythagoras was one of the first philosophers to understand that mathematics gives us a rare and unique insight into the natural world. What would he think today? What’s more, I think Pythagoras would be quite agreeable to the idea that Kant’s transcendental idealism was indeed the world of mathematics.


Bibliography

Kaku M., Hyperspace, Oxford University Press, 1994.
Kaku M. & Thompson J., Beyond Einstein, Oxford University Press, 1997.
Kant I., Kemp Smith N. (trans.), Critique of Pure Reason, Macmillan, London, 1929.
‘The History of Western Philosophy’, Encyclopaedia Britannica, Vol. 25, 15th Edition, 1989, pp. 742-69.
Reason And Experience, Theories of Knowledge B, Reader, Deakin University, Geelong, Australia, 1989.
Reason And Experience, Theories of Knowledge B, Study Guide, Deakin University, Geelong, Australia, 1989.
Ross K., Immanuel Kant, web page http://www.friesian.com/kant.htm
Sacks, O., The Man Who Mistook His Wife for a Hat, Picador, London, 1986.
Sternberg R., In Search of the Human Mind, Yale University, Harcourt Brace College Publishers, 1995.