Philosophy, at its best, challenges our long-held views, prompting us to examine them more deeply than we otherwise would.
Paul P. Mealing
Sunday 26 May 2019
Evolution of culture; a uniquely human adaptation
In Sapiens, Yuval Noah Harari makes the point, which I’ve long known, that what separates us from all other species is that we have undertaken a cultural evolution that has long overtaken our biological evolution. This was accelerated by the invention of script, which allowed memories to be recorded and maintained over generations, some of which have lasted millennia. Of course, we already had this advantage even before we invented script, but script allowed an accumulation of knowledge that eventually led to the scientific revolution, which we’ve all benefited from since the Enlightenment, and which has accelerated particularly in the last two centuries.
One of Harari’s recurring themes is that much of our lives is dependent on fictions and myths, and these have changed as part of our cultural evolution in a way that we don’t appreciate. Jeremy Lent makes similar observations in his excellent book, The Patterning Instinct, though he has a subtly different emphasis to Harari. Harari gives the impression that we are trapped in our social norms and gives examples to make his case. He points out that past societies were very hierarchical and everyone literally knew their place and lived within that paradigm. In fact, the consequences of trying to live outside one’s social constraints could be dire, even fatal. The current paradigm, at least in Western societies, is one of ‘individualism’, which he also explored in his follow-up book, Homo Deus, with the warning that it could be eroded, if not eliminated, by the rise of AI, but I won’t discuss that here.
He effectively argues that these ‘fictions’ that we live by rule out the commonly held belief that we can change our circumstances or that there is an objective morality we can live by. In other words, he claims our lives are ruled by myths that we accept without question, and the only things that change are the myths themselves.
I take his point, but throughout history - at least from around 500BC - there have been iconoclasts who have challenged the reigning paradigm of their time. I will mention four: Socrates, Jesus, Buddha and Confucius. The curious point is that none of these wrote anything down (we only have their ‘sayings’) yet they are still iconic figures more than 2,000 years after their time. What they have in common is that they all challenged the prevailing ‘myth’ (to use Harari’s term) that there was a ‘natural order’ whereby those who ruled were ordained by gods, compared to those who served.
They all suffered for their subversions: Jesus and Socrates were executed, Confucius was exiled into poverty and the Buddha was threatened but not killed. Jesus challenged the church of his day, and that was the logical cause of his execution, not the blasphemy that he claimed to be ‘the son of god’. A lot of words were put in Jesus’ mouth, especially in the Bible. Jesus stood up for the disenfranchised and was critical of the church and the way it exploited the poor. He wouldn’t have been the only rabble-rouser of his time in Roman-occupied Palestine but he was one of the most charismatic.
Buddha challenged the caste system in India as unjust, which made him logically critical of the religious-based norms of his time. He challenged the ‘myth’ that Harari claims everyone would have accepted without question.
Confucius was critical of appointments based on birth rather than merit and argued that good rulers truly served their people, rather than the other way round. Not surprisingly, his views didn’t go down very well with the autocracy of his time. He allegedly proposed the dictum of reciprocity: ‘Don’t do to others what you wouldn’t want done to yourself’. It’s an aphorism also attributed to Jesus, which has more pertinence if one considers that it crosses class boundaries.
As for Socrates, I think he was the original existentialist in that he made a special plea for authenticity: ‘To live with honour in this world, actually be what you try to appear to be.’ Socrates got into trouble for supposedly poisoning the minds of the young, but what he really did was to make people challenge the pervading paradigm of his time, including the dominion of gods. He challenged people to think for themselves through argument, which is the essence of philosophy to this day.
To be fair to Harari, he gives specific attention to the feminist paradigm (my term, not his, as I don’t see it as a fiction or a myth). But I do agree that money, which determines so much in our societies, is based on a very convenient fiction and a great deal of trust. Actually, some level of trust is fundamental to a functioning society. In fact, I’ve argued elsewhere that, without trust, truth, justice and freedom all become forfeit.
The feminist paradigm is very recent, yet essential to our future. I recently saw an interview with Melinda Gates (currently in Australia) who made the salient point that it’s contraception that allows women to follow a destiny independent of men. Not surprisingly, it’s the ‘independent of men’ bit that has created, and continues to create, the greatest obstacle to their emancipation.
One of the more interesting discussions, I found, was Harari’s argument that political ideologies are really religions. I guess it depends on how you define religion. This is how Harari defines it, simultaneously giving a rationale to his thesis:
If religion is a system of human norms and values that is founded on belief in a superhuman order, then Soviet Communism was no less a religion than Islam.
I’m not convinced that political ideologies are dependent on a belief in a ‘superhuman order’, but they are premised on abstract ideas of uncontested ‘truth’, and, in that sense, they are like religions.
Contrary to what many people think, political leanings to the ‘right’ or ‘left’ are largely determined by one’s genes, although environment also plays a role. Basically, personality traits like conscientiousness and a preference for goal-oriented over people-based leadership are considered right-leaning traits; agreeableness and openness (to new ideas) are considered left-leaning traits. Neuroticism would probably also be considered a left-leaning trait. Notice that all the left-leaning traits are predominant in artistic or creative people and this is generally reflected in their politics.
Curiously, twin studies have shown that a belief in God is also, at least partly, a genetically inherited trait. But I don’t believe there is any correlation between these two belief systems: God and politics. I know of people on the political right who are atheists and I know people on the political left who are theists.
I know that in America there seems to be a correlation between the political right and Christian fundamentalism, but I think that’s an Americanism. In Australia, it has little impact. We’ve very recently elected a Pentecostal as PM (Prime Minister) but I don’t believe that had any bearing on his election. We’ve had two atheist PMs in my lifetime (one of whom was very popular indeed), which would be unthinkable in America. The truth is that in some cultures religion is bound irretrievably with politics, and it can be hard for anyone who’s lived their entire life in that culture to imagine there are political regimes where religion is a non-issue.
And this brings me to Harari’s next contentious point:
Even though liberal humanism sanctifies humans, it does not deny the existence of God, and is, in fact, founded on monotheistic beliefs.
Again, I think this is a particular American perspective. I would argue that liberal humanism has arisen from an existentialist philosophy, even though most people who advocate and follow it have probably never studied existentialist philosophy. There was a cultural revolution in Western societies in the generation following World War 2, and I was a part of it. Basically, we rejected the Christian institutions we were raised in, and embraced the existentialist paradigm that the individual was responsible for their own morality and their own destiny. Nowhere was this more evident than in the rise of feminism, aided ineluctably by on-demand contraception.
So contrary to Harari’s argument, I think the humanist individualism that defines our age (in the West) was inextricably linked to the rejection of the Church. None of us knew what existentialism was, but, when I encountered it academically later in life, I recognised it as the symptomatic paradigm of my generation. We had become existentialists without being ideologically indoctrinated.
I feel Harari is on firmer ground when he discusses the relationship between the scientific revolution and European colonial expansion. I’ve argued previously, when discussing Jeremy Lent’s The Patterning Instinct, that Western European philosophy begat the scientific revolution because, through Galileo, Kepler and Newton, it discovered the relationship between mathematics and the movements of stellar objects – the music of the spheres, to paraphrase the ancient Greeks. The Platonic world of mathematics held the key to understanding the heavens. Subsequent centuries progressed this mathematical paradigm even further with the discovery of electromagnetic waves, then quantum mechanics and general relativity, leading to current theories of elementary nuclear particles and QED (quantum electrodynamics).
But Harari makes the case that exploration of foreign lands and peoples went hand-in-hand with scientific exploration of flora, fauna and archaeological digs. He argues that only Europeans acknowledged that we were ignorant of the wider world, which led to a desire for knowledge, rather than an acceptance that what our myths didn’t tell us was not worth knowing or exploring. Science had the same philosophy: that our ignorance would lead us to always search for new theories and new explanations, rather than accept the religious dogma that knowledge outside the Bible was not worthy of consideration.
So I would agree there was a synergy here, that was both destructive and empowering, depending on whether you were the European conqueror or the people being subjugated and ruthlessly exploited for the expansion of empire.
Probably the best part of the book is Harari’s description of capitalism and how it has shaped history in the last 400 years. He explains how and why it works, and why it’s been so successful. He also points out its flaws and its dark side. The book is worth reading for this section alone. He also explains how the free market, if left to its own devices, would lead to slavery. Instead, we have the exploitation of labour in third world countries, which is the next best thing, or the next worst thing, depending on your point of view.
This logically leads to a discussion on the consumerism paradigm that drives almost everything we do in modern society. Economic growth is totally dependent on it, but, ecologically, it’s a catastrophe in progress.
One of his more thought-provoking insights is in regard to how communal care-taking (in law enforcement, health, education, even family dynamics) has been taken over by state bureaucracies. If one reads the neo-Confucian text, the I Ching, one finds constant analogies between family relationships and relationships in the Court (which means government officialdom). It should be pointed out that the I Ching predates Confucius, but contemporary texts (Richard Wilhelm’s translation) have a strong Confucian flavour.
I can’t help but wonder if this facilitated China’s adoption of Communism almost as a state religion. Family relationships and loyalties still hold considerable sway in Asian politics and businesses. Nepotism is much more prevalent in Asian countries than in the West, I would suggest.
One of my bones of contention with Harari in Homo Deus was his ideas on happiness and how it’s basically a consequence of biochemistry. As someone who has lived for more than half a century in the modern post-war world, I feel I’m in a position to challenge his simplistic view that people’s ‘happiness setting’ doesn’t change as a consequence of external factors. To quote from Sapiens:
Buying cars and writing novels do not change our biochemistry. They can change it for a fleeting moment, but it is soon back to its set point.
Well, it works for me. Nothing has given me greater long term happiness than writing a novel and getting it into the public arena – the fact that it’s been a total financial failure is, quite frankly, irrelevant. I really can’t explain that, but it’s probably been the single most important, self-satisfying event of my life. I can die happy. Also I enjoy driving possibly more than any other activity, so owning a car means more to me than just having personal transport. I used to ride motorcycles, so maybe that explains it.
I grew up in a volatile household, which I’ve delineated elsewhere, and when I left home, the first 6 years were very depressing indeed. Over decades I turned all that around, so I think Harari’s ‘happiness setting’ is total bullshit.
But my biggest disagreement with Harari, which I alluded to before, is my advocacy for existentialist philosophy which he replaces with ‘the religion of liberal individualism’. Even though I can see similarities with Buddhism, I wouldn’t call existentialism a religion. Harari pre-empts this objection by claiming all ideologies, be they political or cultural, are no different to any religion. However, I have another objection of my own, which is that when Harari talks about religion, he is really talking about dogma.
In an issue of Philosophy Now (Issue 127, Aug/Sep 2018), Sandy Grant, who is a philosopher at the University of Cambridge, defines dogma as an ‘appeal to authority without critical thinking’. I’ve previously defined philosophy as ‘argument augmented by analysis’, which is the antithesis of dogma. In fact, I would go so far as to say that philosophy has historically been an antidote to religion, going all the way back to Socrates.
Existentialism is a humanist philosophy (paraphrasing Sartre) but it requires self-examination and a fundamental honesty to oneself, which is the opposite of the narcissism implied in Harari’s religion of self-obsession, which he euphemistically calls ‘liberal individualism’.
Harari is cynical, if not dismissive, about the need for purpose in life, yet I would argue that it’s fundamental. I would recommend Viktor Frankl’s Man’s Search for Meaning. Frankl was a Holocaust survivor and psychiatrist, who argued that we find meaning in relationships, projects and adversity. In fact, I would contend that the whole meaning of life is about dealing with adversity, which is why it is the theme of every work of fiction ever recorded.
If I go back to the title of this post, which I think is what Harari’s book is all about, there is a hierarchy of ‘needs’ (not Maslow’s) that a society must provide to ensure what Harari calls ‘happiness’, which is not so much economical as psychological. Back in July 2015, I wrote one of my 400 word mini-essays in response to a Question of the Month in Philosophy Now. The only relevant part is my conclusion, which effectively says that a functioning society is based on trust.
You can’t have truth without trust; you can’t have justice without truth; you can’t have freedom without justice; and you can’t have happiness without freedom.
I think that succinctly answers Harari’s thesis on happiness. Biochemistry may play a role, but people won’t find happiness if all those prerequisites aren’t met, unless, of course, said people are part of a dictatorship’s oligarchy.
A utopian society would allow everyone to achieve their potential – that’s the ideal. The most important consequence of an existentialist approach is that you don’t forfeit your aspirations for the sake of family or nation or church or some other abstract ideal that Harari calls religion.
While on this subject, I will quote from another contributor to Philosophy Now (Issue 110, Oct/Nov 2015), Simon Clarke, who is talking about John Stuart Mill, but who expresses my point of view better than I can.
An objectively good life, on Mill’s (Aristotelian) view, is one where a person has reached her potential, realizing the powers and abilities she possesses. According to Mill, the chief essential requirement for personal well-being is the development of individuality. By this he meant the development of a person’s unique powers, abilities, and talents, to their fullest potential.
Thursday 9 May 2019
The Universe's natural units
We can do quantum field theory just fine on the curved spacetime background of general relativity.
Then he adds this caveat:
What we have so far been unable to do in a convincing manner is turn gravity itself into a quantum field theory.
These carefully selected quotes are from a recent post by Toth on Quora where he is a regular contributor. His area of expertise is in cosmology, including the study of black holes. On another post he explains how the 2 theories are mathematically ‘incompatible’ (my term, not his):
The equation is Einstein’s field equation for gravitation, the equation that is, in many ways, the embodiment of general relativity:
Rμν − ½R gμν = 8πG Tμν
The left-hand side of this equation represents a quantity formed from the spacetime metric, which determines the “deformation of spacetime”. The right-hand side of this equation is a quantity that is formed from the energy, momentum, angular momentum and internal stresses and pressure of matter.
He then goes on to explain that, while the RHS of the equation can be reformulated in QM nomenclature, the LHS can’t. There is a way out of this, which is to ‘average’ the QM side of the equation to get it into units compatible with the classical side, and this is called ‘semi-classical gravity’. But, again, in his own words:
…it is hideously inelegant, essentially an ad-hoc averaging of the equation that is really, really ugly and is not derived from any basic principle that we know.
Anyway, the point of this mini-exposition is that there is a mathematical conflict, if not an incompatibility, inherent in Einstein’s equation itself. One side of the equation can be expressed quantum mechanically and the other side can’t. What’s more, the resolution is to ‘bastardise’ the QM side to make it compatible with the classical side.
You may be wondering what all this has to do with the title of this post. The fundamental constant at the heart of general relativity is, of course, G, the same constant that Newton used in his famous formula:
F = Gm₁m₂/r²
On the other hand, the fundamental constant used in QM is Planck’s constant, h, most famously used by Einstein to explain the photo-electric effect. It was this paper (not his paper on relativity) that garnered Einstein his Nobel prize. It’s best known by Planck’s equation:
E = hf
Where E is energy and f is the frequency of the photon. You may or may not know that Planck determined h empirically by studying hot body radiation, where he used it to resolve a particularly difficult thermodynamics problem. From Planck’s perspective, h was a mathematical invention and had no bearing on reality.
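Just to give a feel for the scale of h, here’s a quick back-of-envelope sketch of my own (the frequency is merely an illustrative choice), computing the energy of a single photon of visible light in Python:

# Planck's equation: E = hf
h = 6.62607015e-34   # Planck's constant in joule-seconds
f = 5.0e14           # frequency of visible (orange) light in hertz - an assumed, illustrative value
E = h * f
print(f"E = {E:.2e} J")   # about 3.3e-19 joules, or roughly 2 electron-volts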
G was also determined empirically, by Cavendish in 1798 (well after Newton) and, of course, is used to mathematically track the course of the planets and the stars. There is no obvious or logical connection between these 2 constants based on their empirical origins.
There is a third constant I will bring into this discussion, which is c, the constant speed of light, which also involves Einstein, via his famous equation:
E = mc²
Now, having set the stage, I will invoke the subject of this post. If one uses Planck units, also known as ‘natural units’, one can see how these 3 constants are interrelated.
I will introduce another Quora contributor, Jeremiah Johnson (a self-described ‘physics theorist’) to explain:
The way we can arrive at these units of Planck Length and Planck Time is through the mathematical application of non-dimensionalization. What this does is take known constants and find what value each fundamental unit should be set to so they all equal one. (See below.)
Toth (whom I referenced earlier) makes the salient point that many people believe that the Planck units represent the physical smallest component of spacetime, and are therefore evidence, if not proof, that the Universe is inherently granular. But, as Toth points out, spacetime could still be continuous (or non-discrete) and the Planck units represent the limits of what we can know rather than the limits of what exists. I’ve written about the issue of ‘discreteness’ of the Universe before and concluded that it’s not (which, of course, doesn’t mean I’m right).
Planck units in ‘free space’ are the Universe’s ‘natural units’. They are literally the smallest units we can theoretically measure, therefore lending themselves to being the metrics of the Universe.
The Planck length is
1 ℓP = 1.61622837 × 10⁻³⁵ m
And Planck time is
1 tP = 5.3911613 × 10⁻⁴⁴ s
If you divide one by the other you get:
1 ℓP / 1 tP = 299,792,458 m/s
Which of course, is the speed of light. As Johnson quips: “Isn’t that cool?”
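You can check this for yourself. Here’s a minimal Python sketch of my own, dividing the two values quoted above:

# Dividing the Planck length by the Planck time recovers the speed of light
planck_length = 1.61622837e-35   # metres
planck_time = 5.3911613e-44      # seconds
print(f"{planck_length / planck_time:.4e}")   # 2.9979e+08 m/s, i.e. c, to within the rounding of the quoted figures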
Now, Max Planck derived these ‘natural units’ himself by looking at 5 equations and adjusting the scale of the units so that they would not only be consistent across the equations, but would also non-dimensionalise the constants so they all equal 1 (as Johnson described above).
In fact, the definition of the Planck units (except charge) includes both G and h.
The point is that I was able to derive G from h using Planck units. The Universe lends itself to portraying a consistency across metrics and natural phenomena based on units derived from constants that represent the extremes of scale, h and G. The constant, c, is also part of the derivation, and is essential to the dimension of time. It’s not such a mystery when one realises that the ‘units’ are derived from empirically determined constants.
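Here is what that derivation looks like numerically; a sketch of my own, using the standard definition of the Planck length, ℓP = √(ħG/c³), inverted to give G = ℓP²c³/ħ. (Note that it’s the reduced constant, ħ = h/2π, that appears in the Planck units.)

import math

h = 6.62607015e-34               # Planck's constant, J·s
hbar = h / (2 * math.pi)         # reduced Planck constant
c = 299792458.0                  # speed of light, m/s
planck_length = 1.61622837e-35   # Planck length in metres (the value quoted above)

# Invert planck_length = sqrt(hbar*G/c**3) to recover Newton's constant
G = planck_length**2 * c**3 / hbar
print(f"G = {G:.4e} m^3 kg^-1 s^-2")   # ~6.674e-11, matching the empirically measured value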
Addendum: For a comprehensive, yet easy-to-read, historical account, I’d recommend John D. Barrow’s book, The Constants of Nature; From Alpha to Omega.
Addendum (13 April 2023): According to Toth, the Planck units don't provide a limit on anything:
...the Planck scale is not an inherent limit of anything. It is simply the set of “natural” units that characterize Nature.
So the Planck scale is not a physical limit or a limit on what can be observed; rather, it’s a limitation of the theory that we use to describe the quantum world.
Read his erudite exposition on the subject here:
https://www.quora.com/Why-is-the-Planck-length-the-smallest-measurable-length-Why-cant-it-be-smaller/answer/Viktor-T-Toth-1
Friday 3 May 2019
What is the third way?
The ‘third way’ referenced in the question is basically a reference to an alternative societal paradigm to capitalism and communism. I expect that most, if not all, responses will be variations on a ‘middle way’. But if there is a completely out-of-the-box answer, I’ll be curious to read it. So, maybe the way the question is addressed will be just as important as, if not more important than, the proposed resolution.
I think this is the most difficult question Philosophy Now has thrown at us in the decade or two I’ve been reading it. I think there definitely will be a third way by the end of this century, but I’m not entirely sure what it will be. Is that a copout? No, I’m going to attempt to forecast the future by looking at the past.
If one goes back before the industrial revolution, no one would have predicted that feudalism would not continue forever. But the industrial revolution unintentionally spawned two social experiments, communism and capitalism, that spanned the 20th Century. I think one can fairly say that capitalism ultimately prevailed, because all communist-inspired revolutions became State-run oligarchies that led to the worst excesses of totalitarianism.
What’s more, we saw more societal and technological change in the 20th Century than in all previous history. There is no reason to believe that the 21st Century won’t be even more transformative. We are currently going through a technological revolution in every way analogous to the industrial revolution of the 19th Century, and it will be just as socially disruptive and economically challenging.
Capitalism has become so successful globally, especially in the high-tech industries, that corporations are starting to eclipse governments in their influence and power, and, to some extent, now embody the feudal system we thought we’d left behind. I’m referring to third world countries providing exploited labour and resources for the affluent elite, which includes me.
There is an increasing need to stop the wasteful production of goods on the altar of economic growth. It’s not only damaging the environment, it increases the gap between those who consume and those who produce. So a global economy would give the wealth to those who produce and not just those who are their puppet masters. This would require equitable wealth distribution on a global scale, not just nationally.
Future technologies will become more advanced to the point that there will be a symbiosis between humans and machines, and this will have a dramatic impact on economic drivers. A universal basic income, which is unthinkable now, will become a necessity because so many jobs will be executed by AI.
People and their ideas are only considered progressive in hindsight. But what was radical in the past often becomes the status quo in the present; and voila: no one can imagine it any other way.
Addendum: I changed the last sentence of the third-last paragraph before I sent it off.
Friday 26 April 2019
What use is philosophy?
Leafing through the pages of the latest issue of Philosophy Now, I came across the Letters section and saw my name. I had written a letter that I had forgotten about. It was in response to an article (referenced below), in the previous issue, about whether philosophy had lost its relevance in the modern world. Did it still have a role in the 21st Century of economic paradigms and technological miracles?
There are many aspects to Daniel Kaufman’s discussion on The Decline & Rebirth of Philosophy (Issue 130, Feb/Mar 2019, pp. 34-7), but mine is the perspective of an ‘outsider’, in as much as I’m not an academic in any field and I’m not a professional philosopher.
I think the major problem with philosophy, as it’s practiced today as an academic activity, is that it doesn’t fit into the current economic paradigm which specifically or tacitly governs all value judgements of a profession or an activity. In other words, it has no perceived economic value to either corporations or governments.
On the other hand, everyone can see the benefits of science in the form of the technological marvels they use every day, along with all the infrastructure that they quite literally couldn’t live without. Yet I would argue that science and philosophy are joined at the hip. Plato’s Academy was based on Pythagoras’s quadrivium: arithmetic, geometry, astronomy and music. In Western culture, science, mathematics and philosophy have a common origin.
The same people who benefit from the ‘magic’ of modern technology are mostly unaware of the long road from the Enlightenment to the industrial revolution, the formulation of the laws of thermodynamics, followed closely by the laws of electromagnetism, followed by the laws of quantum mechanics, upon which every electronic device depends.
John Wheeler, best known for coining the term ‘black hole’ (in cosmology), said:
We live on an island of knowledge surrounded by a sea of ignorance. As our island of knowledge grows, so does the shore of our ignorance.
I contend that the ‘island of knowledge’ is science and the ‘shore of ignorance’ is philosophy. Philosophy is at the frontier of knowledge and because the ‘sea of ignorance’ is infinite, there will always be a role for it. Philosophy is not divorced from science and mathematics; it’s just not obviously in the guise it once was.
The marriage between science and philosophy in the 21st Century is about how we are going to live on a planet with limited resources. We need a philosophy to guide us into a collaborative global society that realises we need Earth more than it needs us.
Thursday 4 April 2019
Is time a psychological illusion or a parameter of the Universe?
I’ve recently read Paul Davies’ latest book, The Demon in the Machine (released in Feb) and I would highly recommend it.
We have reached a stage, in politics and media generally, where you are either for or against a person, an idea or an ideology. Anyone who studies philosophy in any depth realises that there are many points of view on a single topic. There are many voices that I admire, yet there is not one that I completely agree with on everything they announce or proclaim or theorise about.
Paul Davies’ new book is a case in point. This book is very intellectually stimulating, even provocative, which is what I expect and is what makes it worth reading. Within its 200-plus pages, there was one short, well-written and erudite passage where I found myself in serious disagreement. It was his discussion on time and its relation to our perceptions.
He starts with Einstein’s well-known quote: ‘The distinction between past, present and future is only a stubbornly persistent illusion.’ It’s important to put this into its proper context. Einstein wrote this in a letter to the family of a friend who had recently died. It was written, of course, not only to console them, but to reveal his own conclusions arising from his theories of relativity and their inherent effect on time.
A consequence of Einstein’s theory was that simultaneity was dependent on the observer, so it was possible that 2 observers could disagree on the sequence of events occurring (depending on their respective frames of reference). Note that this is only true if there is no causal relationship between these events.
Also, Einstein believed in what’s now called a ‘block universe’ whereby the future is as fixed as the past. Some physicists still argue this, in the same way that some (if not many) argue that we live in a computer simulation (Davies, it should be pointed out, definitely does not).
I’m getting off the track, because what Davies argues is that the so-called ‘arrow of time’ is an ‘illusion’, as is the ‘flow of time’. He goes so far as to contentiously claim that time can’t be measured. His argument is simple: if time were to ‘slow down’ or ‘speed up’, everything, from your heart rate to atomic clocks, would do so as well, so there is no way to perceive it or measure it. He argues that you can’t measure time against time: “It has to be ‘One second per second’ – a tautology!” However, as Davies well knows, Einstein’s theory of relativity tells us that you can measure the ‘rate of time’ of one clock against another, and this is done and allowed for in GPS calculations. See my post on the special theory of relativity where I describe this very phenomenon.
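To put rough numbers on the GPS example (a back-of-envelope sketch of my own, using textbook orbital parameters, not anything from Davies’ book):

import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 299792458.0       # speed of light, m/s
R_earth = 6.371e6     # mean radius of the Earth, m
r_gps = 2.656e7       # approximate GPS orbital radius, m
day = 86400           # seconds in a day

# General relativity: the satellite sits higher in Earth's gravity well, so its clock runs fast
grav_gain = GM / c**2 * (1/R_earth - 1/r_gps) * day
# Special relativity: the satellite's orbital speed makes its clock run slow
v = math.sqrt(GM / r_gps)
vel_loss = -(v**2) / (2 * c**2) * day

print(f"gravitational gain: {grav_gain*1e6:+.1f} microseconds per day")            # roughly +46
print(f"velocity loss:      {vel_loss*1e6:+.1f} microseconds per day")             # roughly -7
print(f"net difference:     {(grav_gain + vel_loss)*1e6:+.1f} microseconds per day")  # roughly +38

That net difference of a few tens of microseconds a day is routinely corrected for in the GPS system, which is exactly the sense in which the ‘rate of time’ of one clock is measured against another.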
Davies argues that there is no ‘backwards or forwards in time’ and the arrow of time is a ‘misnomer’, a metaphor we use to describe a psychological phenomenon. According to him, it’s our persistent belief in a continuity of self that creates the illusion of ‘time passing’. But I think he has it back-to-front. (I’ll return to this later.)
So, if there is no direction of time and no flow of time, how do we describe it? Well, one way is to talk about whether phenomena are symmetrical or asymmetrical in time. In other words, if you were to reverse a sequence of events would you get back to where you started, or is that even possible? Davies argues that entropy or the second law of thermodynamics accounts for this perception. But here’s the thing: without time, motion would not exist and causation would not exist; both of which we witness all the time. And if time does not ‘pass’ or ‘flow’, then what does it do?
Mathematically, time is a dimension, which even has a smallest unit, called ‘Planck time’. Davies says it’s not measurable, but we do measure it, even to the extent that we derive an age of the Universe. John Barrow, in his The Book of Universes, even provides an estimate in ‘Planck units’. Mathematically, we provide 4 co-ordinates for any event in the Universe – 3 of space and 1 of time. And, obviously, they can all change, but time is unique in that it appears to change continuously.*
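For instance (my own arithmetic, not Barrow’s figures), converting the measured age of the Universe into Planck times:

# Age of the Universe expressed in Planck times
age_in_years = 13.8e9                    # approximate measured age
age_in_seconds = age_in_years * 365.25 * 86400
planck_time = 5.3911613e-44              # seconds
print(f"{age_in_seconds / planck_time:.1e} Planck times")   # roughly 8e60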
And time is ‘fluid’ for want of a better word. Its ‘rate’ can change in gravity and relativistically because the speed of light is constant. The speed of light is the only thing that stops everything from happening at once, and for a photon, time is zero. A photon traverses the entire universe in zero time (from the photon’s perspective).
But for the rest of us, time is a constraint created by light. Everything you observe has already happened because it always takes a finite amount of time (from your perspective) for the photon to reach you and nothing can travel faster than light (because it travels in zero time). This is the paradox, but it’s the relationship between light and time that governs our understanding of the Universe. If something speeds up relative to something else (you), then the light it emits increases in frequency if it’s coming towards you and decreases if it’s moving away. Obviously, the very fact that you can measure its frequency means you can measure its velocity (relative to you), which is meaningless without the dimension of time.
So note that all observations (involving light) mean that everything you perceive is in the past – it’s impossible to see into the future. So the ‘arrow of time’, that Davies specifically calls a ‘misnomer’, is a pertinent description of this everyday perception – we can only observe time in one direction, which is the past.
Davies explains our perception of time as a neurological effect:
It is incontestable that we possess a very strong psychological impression that our awareness is being swept along on an unstoppable current of time, and it is perfectly legitimate to seek a scientific explanation for the feeling that time passes. The explanation of this familiar psychological flux is, in my view, to be found in neuroscience, not physics. (emphasis in the original.)
I’ve argued previously that perhaps it is only consciousness that exists in a constant present. It is certainly true that only consciousness can perceive time as a dynamic entity. Everything around us becomes instantly the past like we are standing in a river where we can’t see upstream. It is for this reason that the concepts of past, present and future are uniquely perceived by a conscious mind. Davies effectively argues that this is the sole representation of time: that ‘time passing’ only exists in our minds and not in reality. But if our minds exist in a constant present (relative to everything else) then time does pass us by; and past, present and future is not an illusion, but a consequence of consciousness interacting with reality.
There are causal events that occur around us all the time, but, like a photographic image, they become past events as soon as they happen. I believe there is a universal ‘now’, otherwise the idea of the age of the Universe makes no sense. But, possibly, only conscious entities ride this constant now, which is why everything else is dynamically going past us in a literal, not just a psychological, sense. This is where Davies and I disagree.
Meanwhile, the future exists in light beams yet to be seen. Quantum mechanically, a photon is a wave function (ψ) that’s in the future of whatever it interacts with. A photon is only observed in retrospect, along with its path, and that’s true for all quantum events, including the famous double slit experiment. As Freeman Dyson points out, QM gives us probabilities which are in the future. To paraphrase: ‘quantum mechanics describes the future and classical physics describes the past’. Most physicists (including Davies, I suspect) would disagree. The orthodox view is that classical physics is a special case of quantum mechanics and, in quantum cosmology, time mathematically disappears.
Footnote: I should point out that Paul Davies is someone I’ve greatly admired and respected for many years.
*Paradoxically, at the event horizon of a black hole, time stops and we enter the world of quantum gravity. The evidence for black holes comes from accretion disks, where matter from a companion star forms a ring at the event horizon and emits high-energy radiation as a result, which can be observed. However, from everything I've read, we need new physics to understand what happens beyond the event horizon of a black hole.
Addendum: I've since resolved this paradox to my satisfaction: it's space that crosses the event horizon at c. Then I learned that Kip Thorne effectively provided the same explanation, demonstrated with graphics, in Scientific American in 1967. He cited David Finkelstein who demonstrated it mathematically in 1958.
Friday 15 February 2019
3 rules for humans
Someone asked the question: what would the equivalent 3 laws for humans be, analogous to Asimov’s 3 laws for robotics?
The 3 laws of robotics (without looking them up) are about avoiding harm to humans within certain constraints and then avoiding harm to robots or itself. It’s hierarchical with humans' safety being at the top, or the first law (from memory).
So I submitted an answer, which I can no longer find, so maybe someone took the post down. But it got me thinking, and I found that what I came up with was more like a manifesto than laws per se; so they're nothing like Asimov’s 3 laws for robotics.
In the end, my so-called laws aren't exactly what I submitted but they are succinct and logically consistent, with enough substance to elaborate upon.
1. Don’t try or pretend to be something you’re not
This is a direct appeal to what existentialists call ‘authenticity’, but it’s as plain as one can make it. I originally thought of something Socrates apparently said:
To live with honour in this world, actually be what you try to seem to be.
And my Rule No1 (preferable to law) is really another way of saying the same thing, only it’s more direct, and it has a cultural origin as well. As a child, growing up, ‘having tickets on yourself’, or ‘being up yourself’, to use some local colloquialisms, was considered the greatest sin. So I grew up with a disdain for pretentiousness that became ingrained. But there is more to it than that. I don’t believe in false modesty either.
There is a particular profession where being someone you’re not is an essential skill. I’m talking about acting. Spying also comes to mind, but the secret there, I believe, is to become invisible, which is the opposite of what James Bond does. That’s why John le Carré’s George Smiley seems more like the real thing than 007 does. Going undercover, by the way, is extremely stressful and potentially detrimental to your health – just ask anyone who’s done it.
But actors routinely become someone they’re not. Many years ago, I used to watch a TV programme called The Actor’s Studio, where well known actors were interviewed, and I have to say that many of them struck me with their authenticity, which seems like a contradiction. But an Australian actress, Kerry Armstrong, once pointed out that acting requires leaving your ego behind. It struck me that actors know better than anyone else what the difference is between being yourself and being someone else.
I’m not an actor but I create characters in fiction, and I’ve always believed the process is mentally the same. Someone once said that ‘acting requires you to say something as if you’ve just thought of it, and not everyone can do that.’ So it’s spontaneity that matters. Someone else once said that acting requires you to always be in the moment. Writing fiction, I would contend, requires the same attributes. Writing, at least for me, requires you to inhabit the character, and that’s why the dialogue feels spontaneous, because it is. But paradoxically, it also requires authenticity. The secret is to leave yourself out of it.
The Chinese hold modesty in high regard. The I Ching has a lot to say about modesty, but basically we all like and admire people who are what they appear to be, as Socrates himself said.
We all wear masks, but I think those rare people who seem most comfortable without a mask are those we intrinsically admire the most.
2. Honesty starts with honesty to yourself
It’s not hard to see that this is directly related to Rule 1. The truth is that we can’t be honest to others if we are not honest to ourselves. It should be no surprise that sociopathic narcissists are also serial liars. Narcissists, from my experience, and from what I’ve read, create a ‘reality distortion field’ that is often at odds with everyone else except for their most loyal followers.
There is an argument that this should be Rule 1. They are obviously interdependent. But Rule 1 seems to be the logical starting point for me. Rule 2 is a consequence of Rule 1 rather than the other way round.
Hugh Mackay made the observation in his book, Right & Wrong: How to Decide for Yourself, that ‘The most damaging lies are the ones we tell ourselves’. From this, neurosis is born and many of the ills that beleaguer us. Self-honesty can be much harder than we think. Obviously, if we are deceiving ourselves, then, by definition, we are unaware of it. But the real objective of self-honesty is so we can have social intercourse with others and all that entails.
So you can see there is a hierarchy in my rules. It goes from how we perceive ourselves to how others perceive us, and logically to how we interact with them.
But before leaving Rule 2, I would like to mention a movie I saw a few years back called Ali’s Wedding, which was an Australian Muslim rom-com. Yes, it sounds like an oxymoron but it was a really good film, partly because it was based on real events experienced by the filmmaker. The music by Nigel Westlake was so good, I bought the soundtrack. Its relevance to this discussion is that the movie opens with a quote from the Quran about lying. It effectively says that lies have a habit of snowballing; so you dig yourself deeper the further you go. It’s the premise upon which the entire film is based.
3. Assume all humans have the same rights as you
This is so fundamental, it could be Rule 1, but I would argue that you can’t put this into practice without Rules 1 and 2. It’s the opposite to narcissism, which is what Rules 1 and 2 are attempting to counter.
One can see that a direct consequence is Confucius’s dictum: ‘Don’t do to others what you wouldn’t want done to yourself’; better known in the West as the Golden Rule: ‘Do unto others as you would have others do unto you’; and attributed to Jesus of course.
It’s also the premise behind the United Nations’ Universal Declaration of Human Rights. All these rules are actually hard to live by, and I include myself in that broad statement.
A couple of years back, I wrote a post in response to the question, ‘Is morality objective?’, in which I effectively argued that Rule No3 is the only objective morality.
Friday 8 February 2019
Some people might be offended by this
I was reminded of the cultural difference between America and Australia when it comes to religion; a difference I was very aware of when I lived and worked in America over a combined period of 9 months, in New Jersey, Texas and California.
It’s hard to imagine any mainstream magazine or newspaper having this discussion in Australia, or, if they did, it would be more academic. I was in the US post 9/11 – in fact, I landed in New York the night before. I remember reading an editorial in a newspaper where people were arguing about whether the victims of the attack would go to heaven or not. I thought: how ridiculous. In the end, someone quoted from the Bible, as if that resolved all arguments – even more ridiculous, from my perspective.
I remember reading in an altogether different context someone criticising a doctor for facilitating prayer meetings in a Jewish hospital because the people weren’t praying to Jesus, so their prayers would be ineffective. This was a cultural shock to me. No one discussed these issues or had these arguments in Australian media. At least, not in mainstream media, be it conservative or liberal.
Reading Cunningham’s article reminded me of all this because he talks about how real hell is for many people. To be fair, he also talks about how hell has been sidelined in secular societies. In Australia, people don’t discuss their religious views that much, so one can’t be sure what people really believe. But I was part of a generation that all but rejected institutionalised religion. I’ve met many people from succeeding generations who have no knowledge of biblical stories, whereas for me, it was simply part of one’s education.
One of the best ‘modern’ examples of hell or the underworld I found was in Neil Gaiman’s Sandman graphic novel series. It’s arguably the best graphic novel series written by anyone, though I’m sure aficionados of the medium may beg to differ. Gaiman borrowed freely from a range of mythologies, including Orpheus, the Bible (in particular the story of Cain and Abel) and even Shakespeare. His hero has to go to Hell and gets out by answering a riddle from its caretaker, the details of which I’ve long forgotten, but I remember thinking it to be one of those gems that writers of fiction (like me) envy.
Gaiman also co-wrote a book with Terry Pratchett called Good Omens: The Nice and Accurate Prophecies of Agnes Nutter (1990) which is a great deal of fun. The premise, as described in Wikipedia: ‘The book is a comedy about the birth of the son of Satan, the coming of the End Times.’ Both authors are English, which possibly allows them a sense of irreverence that many Americans would find hard to manage. I might be wrong, but it seems to me that Americans take their religion way more seriously than the rest of us English-speaking nations, and this is reflected in their media.
And this brings me back to Cunningham’s article because it’s written in a cultural context that I simply don’t share. And I feel that’s the crux of this issue. Religion and all its mental constructs are cultural, and hell is nothing if not a mental construct.
My own father, whom I’ve written about before, witnessed hell first hand. He was in the Field Ambulance Corps in WW2, so he retrieved bodies in various states of beyond-repair from both sides of the conflict. He also spent 2.5 years as a POW in Germany. I bring this up because, when I was a teenager, he told me why he didn’t believe in the biblical hell. He said, in effect, he couldn’t believe in a ‘father’ who sent his children to everlasting torment. I immediately saw the sense in his argument and I rejected the biblical god from that day on. This is the same man, I should point out, who believed it was his duty that I should have a Christian education. I thank him for that, otherwise I’d know nothing about it. When I was young I believed everything I was taught, which perversely made it easier to reject when I started questioning things. I know many people who had the same experience. The more they believed, the stronger their rejection.
I recently watched an excellent 3 part series, available on YouTube, called Testing God, which is really a discussion about science and religion. It was made by the UK’s Channel 4 in 2001, and includes some celebrity names in science, like Roger Penrose, Paul Davies and Richard Dawkins, and theologians as well; in particular, theologians who had become, or been, scientists.
In the last episode they interviewed someone who suffered horrendously in the War – he was German, and a victim of the fire-storm bombing. Contrary to many who have had similar experiences he found God, whereas, before, he’d been an atheist. But his idea of God is of someone who is patiently waiting for us.
I’ve long argued that God is subjective not objective. If humans are the only connection between the Universe and God, then, without humans, there is no reason for God to exist. There is no doubt in my mind that God is a projection, otherwise there wouldn’t be so many variants. Xenophanes, who lived in the 5th century BC, famously said:
The Ethiops say that their gods are flat-nosed and black,
While the Thracians say that theirs have blue eyes and red hair.
Yet if cattle or horses or lions had hands and could draw,
And could sculpt like men, then the horses would draw their gods
Like horses, and cattle like cattle; and each they would shape
Bodies of gods in the likeness, each kind, of their own.
At the risk of offending people even further, the idea that the God one finds in oneself is the Creator of the Universe is a non sequitur. My point is that there are two concepts of God which are commonly conflated. God as a Creator and God as a mystic experience, and there is no reason to believe that they are one and the same. In fact, the God as experience is unique to the person who has it, whilst God as Creator is, by definition, outside of space and time. One does not logically follow from the other.
In another YouTube video altogether, I watched an interview with Freeman Dyson on science and religion. He argues that they are quite separate and there is only conflict when people try to adapt religion to science or science to religion. In fact, he is critical of Einstein because Dyson believes that Einstein made science a religion. Einstein was influenced by Spinoza and would have argued, I believe, that the laws of physics are God.
John Barrow, in one of his books (Pi in the Sky), half-seriously suggests that the traditional God could be replaced by mathematics.
This brings me to a joke, which I’ve told elsewhere, but is appropriate, given the context.
What is the difference between a physicist and a mathematician?
A physicist studies the laws that God chose for the Universe to obey.
A mathematician studies the laws that God has to obey.
Einstein, in a letter to a friend, once asked the rhetorical question: Do you think God had a choice in creating the laws of the Universe?
I expect that’s unanswerable, but I would argue that if God created mathematics he had no choice. It’s not difficult to see that God can’t make a prime number non-prime, nor can he change the value of pi. To put it more succinctly, God can’t exist without mathematics, but mathematics can exist without God.
In light of this, I expect Freeman Dyson would accuse me of the same philosophical faux pas as Einstein.
As for hell, it’s a cultural artefact, a mental construct devised to manipulate people on a political scale. An anachronism at best and a perverse psychological contrivance at worst.
Thursday 24 January 2019
Understanding Einstein’s special theory of relativity
Imagine if a flight to the moon was no different to flying halfway round the world in a contemporary airliner. In my scenario, the ‘shuttle’ would use an anti-gravity drive that allows high accelerations without killing its occupants with inertial forces. In other words, it would accelerate at hyper-speeds without anyone feeling it. I even imagined this when I was in high school, believe it or not.
The craft would still not be able to break the speed of light but it would travel fast enough that relativistic effects would be observable, both by the occupants and anyone remaining on the Earth or at its destination, the Moon.
So what are those relativistic effects? There is a very simple equation for velocity, and this is the only equation I will use to supplement my description:
v = s/t
Where v is the velocity, s is the distance travelled and t is the time or duration it takes. You can’t get much simpler than that. Note that v is directly proportional to s but inversely proportional to t: if s gets larger, v increases, but if t gets larger, v decreases.
But it also means that for v to remain constant, if s gets smaller then so must t.
For the occupants of the shuttle, getting to the moon in such a short time means that, for them, the distance has shrunk. It normally takes about 3 days to get to the Moon (using current technology), so let’s say we manage it in 10 hrs instead. I haven’t done the calculations, because it depends on what speeds are attained and I’m trying to provide a qualitative, even intuitive, explanation rather than a technical one. The point is that if the occupants measured the distance using some sort of range finder, they’d find it was measurably less than if they did it using a range finder on Earth or on the Moon. It also means that whatever clocks they were carrying (including their own heartbeats) would show that the duration was less, completely consistent with the equation above.
For the people on the Moon awaiting their arrival, or those on Earth left behind, the duration would be consistent with the distance they would measure independently of the craft, which means the distance would be whatever it was all of the time (allowing for small variances created by any elliptic eccentricity in its orbit). That means they would expect the occupants’ clocks to be the same as theirs. So when they see the discrepancy in the clocks it can only mean that time elapsed slower for the shuttle occupants compared to the moon’s inhabitants.
Now, many of you reading this will see a conundrum if not a flaw in my description. Einstein’s special theory of relativity implies that for the occupants of the shuttle, the clocks of the Moon and Earth occupants should also have slowed down, but when they disembark, they notice that they haven’t. That’s because there is an asymmetry inherent in this scenario. The shuttle occupants had to accelerate and decelerate to make the journey, whereas the so-called stationary observers didn’t. This is the same for the famous twin paradox.
Note that from the shuttle occupants’ perspective, the distance is shorter than the moon and Earth inhabitants’ measurements; therefore so is the time. But from the perspective of the moon and Earth inhabitants, the distance is unchanged but the time duration has shortened for the shuttle occupants compared to their own timekeeping. And that is special relativity theory in a nutshell.
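For anyone who wants numbers, here’s a rough Python sketch of my own (the cruising speeds are arbitrary choices for illustration, not claims about any actual craft), applying the Lorentz factor to the Earth–Moon trip described above:

import math

c = 299792458.0      # speed of light, m/s
d_moon = 3.844e8     # average Earth-Moon distance, m

def one_way_trip(v):
    """Return (Earth-frame duration, shuttle-frame duration, shuttle-frame distance) at speed v."""
    gamma = 1 / math.sqrt(1 - (v / c)**2)
    t_earth = d_moon / v          # duration as measured on Earth or the Moon
    return t_earth, t_earth / gamma, d_moon / gamma

# The 10-hour trip mentioned above (average speed ~10.7 km/s): the effects are tiny
t_e, t_s, d_s = one_way_trip(d_moon / (10 * 3600))
print(f"10-hour trip: shuttle clock lags by about {(t_e - t_s)*1e6:.0f} microseconds")

# A hypothetical trip at half lightspeed: the effects become obvious
t_e, t_s, d_s = one_way_trip(0.5 * c)
print(f"0.5c trip: Earth measures {t_e:.2f} s, the shuttle measures {t_s:.2f} s, "
      f"and the distance contracts to about {d_s/1000:.0f} km")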
Footnote: If you watch videos explaining the twin paradox, they emphasise that it’s not the acceleration that makes the difference (because it’s not part of the Lorentz transformation). But the acceleration and deceleration are what create the asymmetry between the one that ‘moved’ with respect to the other that was ‘stationary’. In the scenario above, the entire solar system doesn’t accelerate and decelerate with respect to the shuttle, which would be absurd. This is my exposition on the twin paradox.
Addendum 1: Here is an attempted explanation of Einstein’s general theory of relativity, which is slightly more esoteric.
Addendum 2: I’ve done a rough calculation and the differences would be negligible, but if I changed the destination to Mars, the difference in distances would be in the order of 70,000 kilometres, but the time difference would be only in the order of 10 seconds. You could, of course, make the journey closer to lightspeed so the effects are more obvious.
Addendum 3: I’ve read the chapter on the twin paradox in Jim Al-Khalili’s book, Paradox: The Nine Greatest Enigmas in Physics. He points out that during the Apollo missions to the moon, the astronauts actually aged more (by nanoseconds) because the time gained by leaving Earth’s gravity was greater than any special relativistic effects experienced over the week-long return trip. Al-Khalili also explains that the twin who makes the journey endures less time because the distance is shorter for them (as I expounded above). But, contrary to the YouTube lectures (that I viewed), he claims that it’s the acceleration and deceleration creating general relativistic effects that creates the asymmetry.
Saturday 12 January 2019
Are natural laws reality?
There is an aspect of this that would seem to support the sceptics’ point of view (in this case, Raymond Tallis’s; see the letter below), and that’s the fact that our discoveries are never complete. We can always find circumstances where the laws don’t apply or new laws are required. The most obvious examples are Einstein’s general theory of relativity replacing Newton’s universal theory of gravity, and quantum mechanics replacing Newtonian mechanics.
I’ve discussed these before, but I’ll repeat myself because it’s important to understand why and how these differences arise. One of the conditions that Einstein set himself when he created his new theory of gravity was that it reduced to Newton’s theory when relativistic effects were negligible. This feat is quite astounding when one considers that the mathematics involved in the two theories appears, on the surface, to have little in common.
With respect to quantum mechanics, I contend that it is distinct from classical physics and that the mathematics reflects this. I should point out that, to my knowledge, no one else agrees with this view except Freeman Dyson.
Newtonian mechanics has other limitations as well. In regard to predicting the orbits of the planets, it quickly becomes apparent that, as one increases the number of bodies, the predictions become increasingly unreliable over longer periods of time, and this has nothing to do with relativity. As Jeremy Lent pointed out in The Patterning Instinct, Newtonian classical physics doesn’t really work for the real world over the long term, and has been largely supplanted by chaos theory. Both classical and quantum physics are largely ‘linear’, whereas nature appears to be persistently non-linear. This means that the Universe is unpredictable, and I’ve discussed this in some detail elsewhere.
Nature obeys different rules at different levels. The curious thing is that we always believe that we’ve just about discovered everything there is to know, then we discover a whole new layer of reality. The Universe is worlds within worlds. Our comprehension of those worlds is largely dependent on our knowledge of mathematics.
Some people (like Gregory Chaitin and Stephen Wolfram) even think that there is something akin to computer code underpinning the entire Universe, but I don’t. Computers can’t deal with chaotic non-linear phenomena because one would need to know the initial conditions to infinite precision to determine the phenomenon’s ultimate fate. That’s why even the locations of the solar system’s planets are not mathematically guaranteed.
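A minimal illustration of that sensitivity, using the logistic map (the textbook example of chaos) rather than an actual n-body integration: two starting values differing only in the tenth decimal place end up on completely different trajectories within a few dozen steps.

```python
def logistic(x, r=4.0):
    # the logistic map at r = 4 is fully chaotic
    return r * x * (1.0 - x)

x1, x2 = 0.2, 0.2 + 1e-10     # two initial conditions, almost identical
for _ in range(60):
    x1, x2 = logistic(x1), logistic(x2)

print(abs(x1 - x2))           # the 1e-10 difference has grown to order one
```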
Below is a draft of the letter I wrote to Philosophy Now in response to Raymond Tallis’s scepticism about natural laws. It’s not the one I sent.
Quantities actually exist in the real world, in nature, and they come in specific ratios and relationships to each other; hence the 'natural laws'. They are not fictions, we did not make them up, they are not products of our imaginations.
Having said that, the wave function in quantum mechanics is a product of Schrodinger's imagination, and some people argue that it is a fiction. Nevertheless, it forms the basis of QED (quantum electrodynamics), which is the most successful empirically verified scientific theory to date, so they may actually be real; it's debatable. Einstein's field equations, based on tensors, are also arguably a product of his imagination, but, according to Einstein's own admission, the mathematics determined his conclusion that space-time is curved, not the other way around. Also his famous equation, E = mc², is mathematically derived from his special theory of relativity and was later confirmed by experimental evidence. So sometimes, in physics, the map is discovered before the terrain.
The last line is a direct reference to Tallis’s own throwaway line that mathematical physicists tend to ‘confuse the map for the terrain’.
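The sense in which E = mc² ‘falls out’ of special relativity can be indicated in one line (a standard textbook expansion, not Einstein’s original argument): the total energy of a moving mass separates into a rest-energy term plus the familiar Newtonian kinetic energy.

\[
E = \gamma m c^2 = \frac{mc^2}{\sqrt{1 - v^2/c^2}} \approx mc^2 + \tfrac{1}{2}mv^2 \quad (v \ll c).
\]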
Saturday 5 January 2019
What makes humans unique
However, I find it hard to imagine that other species can think and conceptualise in a language the way we do or even communicate complex thoughts and intentions using oral utterances alone. To give other examples, I know of no other species that tells stories, keeps track of days by inventing a calendar based on heavenly constellations (like the Mayans) or even thinks about thinking. And as far as I know, we are the only species who literally invents a complex language that we teach our children (it’s not inherited) so that we can extend memories across generations. Even cultures without written scripts can do this using songs and dances and art. As someone said (John Hands in Cosmosapiens) we are the only species ‘who know that we know’. Or, as I said above, we are the only species that ‘thinks about thinking’.
Someone once pointed out to me that the only thing that separates us from all other species is the accumulation of knowledge, resulting in what we call civilization. He contended that over hundreds, even thousands of years, this had resulted in a huge gap between us and every other sentient creature on the planet. I pointed out to him that this only happened because we had invented the written word, based on languages, that allowed us to transfer memories across generations. Other species can teach their young certain skills that may not be genetically inherited, but none can accumulate knowledge over hundreds of generations like we can. His very point demonstrated the difference he was trying to deny.
In a not-so-recent post, I delineated my philosophical ruminations into 23 succinct paragraphs, covering everything from science and mathematics to language, morality and religion. My 16th point said:
Humans have the unique ability to nest concepts within concepts ad-infinitum, which mirror the physical world.
In another post from 2012, in answer to a Question of the Month in Philosophy Now - How does language work? - I made the same point. (This is the only submission to Philosophy Now, out of 8 thus far, that didn’t get published.)
I attributed the above ‘philosophical point’ to Douglas Hofstadter, because he says something similar in his Pulitzer Prize-winning book, Godel Escher Bach, but in reality I had reached this conclusion before reading it.
It’s my contention that it is this ability that separates us from other species and that has allowed all the intellectual endeavours we associate with humanity, including stories, music, art, architecture, mathematics, science and engineering.
I will illustrate with an example that we are all familiar with, yet many of us struggle to pursue at an advanced level. I’m talking about mathematics, and I choose it because I believe it also explains why many of us fail to achieve the degree of proficiency we might prefer.
With mathematics we learn modules which we then use as a subroutine in a larger calculation. To give a very esoteric example, Einstein’s general theory of relativity requires at least 4 modules: calculus, vectors, matrices and the Lorentz transformation. These all combine in a metric tensor that becomes the basis of his field equations. The thing is, if you don’t know how to deal with any one of these, you obviously can’t derive his field equations. But the point is that the human brain can turn all these ‘modules’ into black boxes and then the black boxes can be manipulated at another level.
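To make the ‘black box’ idea concrete in code (a toy sketch, not how relativists actually work, and the function names are mine): a Lorentz boost can be packaged as a module and then used inside a larger calculation by someone who never looks inside it.

```python
import numpy as np

def lorentz_boost(v):
    """Boost matrix along x for speed v, in units where c = 1."""
    gamma = 1.0 / np.sqrt(1.0 - v ** 2)
    return np.array([[gamma, -gamma * v],
                     [-gamma * v, gamma]])

# The higher-level calculation treats the boost as a black box:
event = np.array([1.0, 0.5])            # an event (t, x) in one frame
print(lorentz_boost(0.6) @ event)       # the same event in a frame moving at 0.6c
```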
It’s not hard to see that we do this with everything, including writing an essay like I’m doing now. I raise a number of ideas and then try to combine them into a coherent thesis. The ‘atoms’ are individual words but no one tries to comprehend it at that level. Instead they think in terms of the ideas that I’ve expressed in words.
We do the same with a story, which becomes like a surrogate life for the time that we are under its spell. I’ve pointed out in other posts that we only learn something new when we integrate it into what we already know. And, with a story, we are continually integrating new information into existing information. Without this unique cognitive skill, stories wouldn’t work.
But more relevant to the current topic, the medium for a story is not words but the reader’s imagination. In a movie, we short-circuit the process, which is why they are so popular.
Because a story works at the level of imagination, it’s like a dream in that it evokes images and emotions that can feel real. One could imagine that a dog or a cat could experience emotions if we gave them a virtual reality experience, but a human story has the same level of complexity that we find in everyday life and which we express in a language. The simple fact that we can use language alone to conjure up a world with characters, along with a plot that can be followed, gives some indication of how powerful language is for the human species.
In a post I wrote on storytelling back in 2012, I referenced a book by Kiwi academic, Brian Boyd, who points out that pretend play, which we all do as children (though I suspect it’s now more likely done using a videogame console) gives us cognitive skills and is the precursor to both telling and experiencing stories. The success of streaming services indicates how stories are an essential part of the human experience.
While it’s self-evident that both mathematics and storytelling are two human endeavours that no other species can do (even at a rudimentary level) it’s hard to see how they are related.
People who are involved in computer programming or writing code are aware of the value, even necessity, of subroutines. Our own brain does this when we learn to do something without having to think about it, like walking. But we can do the same thing with more complex tasks, like driving a car or playing a musical instrument. The key point here is that these are all ‘motor tasks’, and we call the result ‘muscle memory’, as distinct from cognitive tasks. However, I expect it relates to cognitive tasks as well. For example, every time you say something, it’s as if the sentence has been pre-formed in your brain. We use particular phrases all the time, which are analogous to ‘subroutines’.
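A trivial sketch of the analogy (the names are mine, purely illustrative): once a phrase is packaged as a ‘subroutine’, the caller reuses it without re-forming it each time, much as we reach for stock phrases without consciously assembling them.

```python
def stock_greeting(name):
    # a pre-formed phrase, produced without 'thinking' about its parts
    return f"Good morning, {name}, how are you?"

def compose_reply(name, topic):
    # the 'conscious' level just strings ready-made pieces together
    return f"{stock_greeting(name)} I wanted to ask you about {topic}."

print(compose_reply("Alice", "the twin paradox"))
```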
I should point out that this doesn’t mean that computers ‘think’, which is a whole other topic. I’m just relating how the brain delegates tasks so it can ‘think’ about more important things. If we had to concentrate every time we took a step, we would lose the train of thought of whatever it was we were engaged in at the time; a conversation being the most obvious example.
The mathematics example I gave is not dissimilar to the idea of a ‘subroutine’. In fact, one can embed mathematical ‘modules’ in software, so it’s more than an analogy. So with mathematics we’ve effectively achieved cognitively what the brain achieves with motor skills at the subconscious level. And look where it has got us: Einstein’s general theory of relativity, which is the basis of all current theories of the Universe.
We can also think of a story in terms of modules. They are the individual scenes, which join together to form an episode, which in turn combine to create an overarching narrative that we can follow even when it’s interrupted.
What mathematics and storytelling have in common is that they are both examples where the whole appears to be greater than the sum of its parts. Yet we know that in both cases, the whole is made up of the parts, because we ‘process’ the parts to get the whole. My point is that only humans are capable of this.
In both cases, we mentally build a structure that seems to have no limits. The same cognitive skill that allows us to follow a story in serial form also allows us to develop scientific theories. The brain breaks things down into components and then joins them back together to form a complex cognitive structure. Of course, we do this with physical objects as well, like when we manufacture a car or construct a building, or even a spacecraft. It’s called engineering.
Saturday 22 December 2018
When real life overtakes fiction
There are many subgenres of sci-fi: extraterrestrial exploration, alien encounters, time travel, robots & cyborgs, inter-galactic warfare, genetically engineered life-forms; but most SF stories, including mine, are a combination of some of these. Most sci-fi can be divided into 2 broad categories – space opera and speculative fiction, sometimes called hardcore SF. Space operas, exemplified by the Star Wars franchise, Star Trek and Dr Who, generally take more liberties with the science part of science fiction.
I would call my own fictional adventures science-fantasy, in the mould of Frank Herbert’s Dune series or Ursula K Le Guin’s fiction; though it has to be said, I don’t compete with them on any level.
I make no attempt to predict the future, even though the medium seems to demand it. Science fiction is a landscape that I use to explore ideas in the guise of a character-oriented story. I discovered, truly by accident, that I write stories about relationships. Not just relationships between lovers, but between mother and daughter, daughter and father(s), protagonist and nemesis, protagonist and machine.
One of the problems with writing science fiction is that the technology available today seems to overtake what one imagines. In my fiction no one uses a mobile phone. I can see a future where people can just talk to someone in the ether, because they can connect in their home or in their car, without a device per se. People can connect via a holographic form of Skype, which means they can have a meeting with someone in another location. We are already doing this, of course, and variations on this theme have been used in Star Wars and other space operas. But most of the interactions I describe are very old fashioned face-to-face, because that's still the best way to tell a story.
If you watch (or read) crime fiction you’ll generally find it’s very suspenseful with violence not too far away. But if you analyze it, you’ll find it’s a long series of conversations, with occasional action and most of the violence occurring off-screen (or off-the-page). In other words, it’s more about personal interactions than you realise, and that’s what generally attracts you, probably without you even knowing it.
This is a longwinded introduction to explain why I am really no better qualified to predict future societies than anyone else. I subscribe to New Scientist and The New Yorker, both of which give insights into the future by examining the present. In particular, I recently read an article in The New Yorker (Dec. 17, 2018) by David Owen, called Here’s Looking At You, about facial recognition, which is already being used by police forces in America to target arrests without any transparency. Mozilla (in a podcast last year) described how a man had been misidentified twice, was arrested and subsequently lost his job and his career. I also read in last week’s New Scientist (15 Dec. 2018) how databases are being developed to know everything about a person, even what TV shows they watch and their internet use. It’s well known that in China there is a credit-point system that determines what buildings you can access and what jobs you can apply for. China has the most surveillance cameras anywhere in the world, and they intend to combine them with the latest facial recognition software.
Yuval Harari, in Homo Deus, talks about how algorithms are going to take over our lives, but I think he missed the mark. We are slowly becoming more Orwellian, with social media already determining election results. In the same issue of New Scientist, journalist Chelsea Whyte asks: Is it time to unfriend the social network? with specific reference to Facebook’s recently exposed track record. According to her: “Facebook’s motto was once ‘move fast and break things.’ Now everything is broken.” Quoting from the same article:
Now, the UK parliament has published internal Facebook emails that expose the mindset inside the company. They reveal discussions among staff over whether to collect users’ phone call logs and SMS texts through its Android app. “This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.” (So said Product Manager Michael LeBeau in an email from 2015)
Even without Edward Snowden’s whistle-blowing exposé, we know that governments the world over are collecting our data because the technological ability to do that is now available. We are approaching a period in our so-called civilised development where we all have an on-line life (if you are reading this) and it can be accessed by governments and corporations alike. I’ve long known that anyone can learn everything they need to know about me from my computer, and increasingly they don’t even need the computer.
In one of my fictional stories, I created a dystopian world where everyone had a ‘chip’ that allowed all conversations to be recorded, so there was literally no privacy. We are fast approaching that scenario in some totalitarian societies. In Communist China under Mao, and in the Communist Soviet Union under Stalin, people found the circle of people they could trust got smaller and smaller. Now, with AI capabilities and internet-wide databases, privacy is becoming illusory. With constant surveillance, all subversion can be tracked and subsequently prosecuted. Someone once said that only societies that are open to new ideas progress. If you live in a society where new ideas are censored, then you will get stagnation.
In my latest fiction I’ve created another autocratic world, where everyone is tracked because everywhere they go they interact with very realistic androids, who act as servants, butlers and concierges but, in reality, keep track of what everyone’s doing. The only ‘futuristic’ aspects of this are the androids and the fact that I’ve set it on some alien world. (My worlds aren’t terra-formed; people live in bubbles that create a human-friendly environment.)
After reading these very recent articles in New Scientist and The New Yorker, I’ve concluded that our world is closer to the one I’ve created in my imagination than I thought.
Addendum 1: This is a podcast about so-called Surveillance Capitalism, from Mozilla. Obviously, I use Google and I'm also on Facebook, but I don't use Twitter. Am I part of the problem or part of the solution? The truth is I don't know. I try to make people think and share ideas. I have political leanings, obviously, but they're transparent. Foremost, I believe that if you can't put your name to something, you shouldn't post it.
Thursday 22 November 2018
The search for ultimate truth is unattainable
Someone lent me a really good philosophy book called Ultimate Questions by Bryan Magee. To quote directly from the back flyleaf: “Bryan Magee has had an unusually multifaceted career as a professor of philosophy, music and theatre critic, BBC broadcaster and member of [British] Parliament.” It so happens I have another of his books, The Story of Philosophy, which is really a series of interviews with philosophers about philosophers, and I expect it’s a transcription of radio podcasts. Magee was over 80 when he wrote Ultimate Questions, which he must have done prior to 2016, when the book was published.
This is a very thought-provoking book, which is what you'd expect from a philosopher. To a large extent, and to my surprise, Magee and I have come to similar positions on fundamental epistemological and ontological issues, albeit by different paths. However, there is also a difference, possibly a divide, which I’ll come to later.
Where to start? I’ll start at the end because it coincides with my beginning. It’s not a lengthy tome (120+ pages) and it’s comprised of 7 chapters or topics, which are really discussions. In the last chapter, Our Predicament Summarized, he emphasises his view of an inner and outer world, both of which elude full comprehension, that he’s spent the best part of the book elaborating on.
As I’ve discussed previously, the inner and outer world is effectively the starting point for my own world view. The major difference between Magee and myself lies in the paths we’ve taken. My path has been a scientific one, in particular the science of physics, encapsulating as it does the extremes of the physical universe, from the cosmos to the infinitesimal.
Magee’s path has been the empirical philosophers from Locke to Hume to Kant to Schopenhauer and eventually arriving at Wittgenstein. His most salient and persistent point is that our belief that we can comprehend everything there is to comprehend about the ‘world’ is a delusion. He tells an anecdotal story of when he was a student of philosophy and he was told that the word ‘World’ comprised not only what we know but everything we can know. He makes the point, that many people fail to grasp, that there could be concepts that are beyond our grasp in the same way that there are concepts we do understand but are nevertheless beyond the comprehension of the most intelligent of chimpanzees or dolphins or any creature other than human. None of these creatures can appreciate the extent of the heavens the way we can or even the way our ancient forebears could. Astronomy has a long history. Even indigenous cultures, without the benefit of script, have learned to navigate long distances with the aid of the stars. We have a comprehension of the world that no other creature has (on this planet) so it’s quite reasonable to assume that there are aspects of our world that we can’t imagine either.
Because my path to philosophy has been through science, I have a subtly different appreciation of this very salient point. I wrote a post based on Noson Yanofsky’s The Outer Limits of Reason, which addresses this very issue: there are limits in logic, mathematics and science, and there always will be. But I’m under the impression that Magee takes this point further. He expounds, better than anyone else I’ve read, that there are actual limits to what our brains can not only perceive but conceptualise, which leads to the possibility, ignored by most of us, that there are things beyond our ken completely and always.
As Magee himself states, this opens the door to religion, which he discusses at length, yet he gives this warning: “Anyone who sets off in honest and serious pursuit of truth needs to know that in doing that he is leaving religion behind.” It’s a bit unfair to provide this quote out of context, as it comes at the end of a lengthy discussion, nevertheless, it’s the word ‘truth’ that gives his statement cogency. My own view is that religion is not an epistemology, it’s an experience. What’s more it’s an experience (including the experience of God) that is unique to the person who has it and can’t be shared with anyone else. This puts individual religious experience at odds with institutionalised religions, and as someone pointed out (Yuval Harari, from memory) this means that the people who have religious experiences are all iconoclasts.
I’m getting off the point, but it’s relevant inasmuch as arguments involving science and religion have no common ground. I find them ridiculous because they usually involve pitting an ancient text (of so-called prophecy) against modern scientific knowledge and all the technology it has propagated, which we all rely upon for our day-to-day existence. If religion ever had an epistemological role, it has long been usurped.
On the other hand, if religion is an experience, it is part of the unfathomable which lies outside our rational senses, and is not captured by words. Magee contends that the best one can say about an afterlife or the existence of a God, is that ‘we don’t know’. He calls himself an agnostic but not just in the narrow sense relating to a Deity, but in the much broader sense of acknowledging our ignorance. He discusses these issues in much more depth than my succinct paraphrasing implies. He gives the example of music as something we experience that can’t be expressed in words. Many people have used music as an analogy for religious experience, but, as Magee points out, music has a material basis in instruments and a score and sound waves, whereas religion does not.
Coincidentally, someone today showed me a text on Socrates, from a much larger volume on classical Greece. Socrates famously proclaimed his ignorance as the foundation of his wisdom. In regard to science, he said: “Each mystery, when solved, reveals a deeper mystery.” This statement is so prophetic; it captures the essence of science as we know it today, some 2500 years after Socrates. It’s also the reason I agree with Magee.
John Wheeler conceived a metaphor, that I envisaged independently of him. (Further evidence that I’ve never had an original idea.)
We live on an island of knowledge surrounded by a sea of ignorance.
As our island of knowledge grows, so does the shore of our ignorance.
I contend that the island is science and the shoreline is philosophy, which implies that philosophy feeds science, but also that they are inseparable. By philosophy, in this context, I mean epistemology.
To give an example that confirms both Socrates and Wheeler, the discovery and extensive research into DNA provides both evidence and a mechanism for biological evolution from the earliest life forms to the most complex; yet the emergence of DNA as providing ‘instructions’ for the teleological development of an organism is no less a mystery looking for a solution than evolution itself.
The salient point of Wheeler's metaphor is that the sea of ignorance is infinite and so the island grows but is never complete. In his last chapter, Magee makes the point that truth (even in science) is something we progress towards without attaining. “So rationality requires us to renounce the pursuit of proof in favour of the pursuit of progress.” (My emphasis.) However, 'the pursuit of proof’ is something we’ve done successfully in mathematics ever since Euclid. It is on this point that I feel Magee and I part company.
Like many philosophers, when discussing epistemology, Magee hardly mentions mathematics. Only once, as far as I can tell, towards the very end (in the context of the quote I referenced above about ‘proof’), does he include it in the same sentence as science, logic and philosophy as inherited from Descartes, and he has this to say: “It is extraordinary to get people, including oneself, to give up this long-established pursuit of the unattainable.” He is right inasmuch as there will always be truths, including mathematical truths, that we can never know (refer to my recent post on Godel, Turing and Chaitin). But there are also innumerable (mathematical) truths that we have discovered and will continue to discover into the future (part of the island of knowledge). As Freeman Dyson points out, whilst discussing the legacy of Srinivasa Ramanujan's genius, 'Mathematics is forever'. In other words, mathematical truths don't become obsolete in the same way that science does.
I don’t know what Magee’s philosophical stance is on mathematics, but not giving it any special consideration tells me something already. I imagine, from his perspective, it serves no special epistemological role, except to give quantitative evidence for the validity of competing scientific theories.
In one of his earlier chapters, Magee talks about the ‘apparatus’ we have in the form of our senses and our brain that provide a limited means to perceive our external world. We have developed technological means to augment our senses; microscopes and telescopes being the most obvious. But we now have particle accelerators and radio telescopes that explore worlds we didn’t even know existed less than a century ago.
Mathematics, I would contend, is part of that extended apparatus. Riemann’s geometry allowed Einstein to perceive a universe that was ‘curved’ and Euler’s equation allowed Schrodinger to conceive a wave function. Both of these mathematically enhanced ‘discoveries’ revolutionised science at opposite ends of the epistemological spectrum: the cosmological and the subatomic.
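The Euler connection can be made explicit (a minimal illustration, not Schrodinger’s derivation): Euler’s formula turns sines and cosines into a single complex exponential, which is exactly the form a free particle’s wave function takes, and the building block of more general solutions.

\[
e^{i\theta} = \cos\theta + i\sin\theta, \qquad
\psi(x,t) = A\, e^{i(kx - \omega t)} .
\]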
Magee rightly points out our almost insignificance in both space and time as far as the Universe is concerned. We are figuratively like the blink of an eye on a grain of sand, yet reality has no meaning without our participation. In reference to the internal and external worlds that formulate this reality, Magee has this to say: “But then the most extraordinary thing is that the world of interaction between these two unintelligibles is rationally intelligible.” Einstein famously made a similar point: "The most incomprehensible thing about the Universe is that it’s comprehensible.”
One can’t contemplate that statement, especially in the context of Einstein’s iconic achievements, without considering the specific and necessary role of mathematics. Raymond Tallis, who writes a regular column in Philosophy Now, and for whom I have great respect, nevertheless downplays the role of mathematics. He once made the comment that mathematical Platonists (like me) 'make the error of confusing the map for the terrain.’ I wrote a response, saying: ‘the use of that metaphor infers the map is human-made, but what if the map preceded the terrain.’ (The response wasn’t published.) The Universe obeys laws that are mathematically in situ, as first intimated by Galileo, given credence by Kepler, Newton, Maxwell; then Einstein, Schrodinger, Heisenberg and Bohr.
I’d like to finish by quoting Paul Davies:
We have a closed circle of consistency here: the laws of physics produce complex systems, and these complex systems lead to consciousness, which then produces mathematics, which can then encode in a succinct and inspiring way the very underlying laws of physics that gave rise to it.
This, of course, is another way of formulating Roger Penrose’s 3 Worlds, and it’s the mathematical world that is, for me, the missing piece in Magee’s otherwise thought-provoking discourse.
Last word: I’ve long argued that mathematics determines the limits of our knowledge of the physical world. Science to date has demonstrated that Socrates was right: the resolution of one mystery invariably leads to another. And I agree with Magee that consciousness is a phenomenon that may elude us forever.
Addendum: I came across this discussion between Magee and Harvard philosopher, Hilary Putnam, from 1977 (so over 40 years ago), where Magee exhibits a more nuanced view on the philosophy of science and mathematics (the subject of their discussion) than I gave him credit for in my post. Both of these men take their philosophy of science from philosophers, like Kant, Descartes and Hume, whereas I take my philosophy of science from scientists: principally, Paul Davies, Roger Penrose and Richard Feynman, and to a lesser extent, John Wheeler and Freeman Dyson; I believe this is the main distinction between their views and mine. They even discuss this 'distinction' at one point, with the conclusion that scientists, and particularly physicists, are stuck in the past - they haven't caught up (my terminology, not theirs). They even talk about the scientific method as if it's obsolete or anachronistic, though again, they don't use those specific terms. But I'd point to the LHC (built decades after this discussion) as evidence that the scientific method is alive and well, and it works. (I intend to make this a subject of a separate post.)
Friday 9 November 2018
Can AI be self-aware?
And it’s pseudo self-awareness in that it’s make-believe, in the same way that pseudo-science is make-believe science - pretend science, like creationism. We have a toy that we pretend exhibits self-awareness. So it is we who do the make-believe and pretending, not the toy.
If you watch the video you’ll see that they have 3 robots and they give them a ‘dumbing pill’ (meaning a switch was pressed) so they can’t talk. But one of them is not dumbed, and they are asked: “Which pill did you receive?” One of them dramatically stands up and says: “I don’t know”. But it then waves its arm and says, “I’m sorry, I know now. I was able to prove I was not given the dumbing pill.”
Obviously, the entire routine could have been programmed, but let’s assume it’s not. It’s a simple TRUE/FALSE logic test. The so-called self-awareness is a consequence of the T/F test being self-referential – whether it can talk or not. The robot verifies that the statement is false because it hears its own voice. Notice the human-laden words like ‘hears’ and ‘voice’ (my anthropomorphic phrasing). Basically, it has a sensor to detect the sound that it makes itself, which logically determines whether the statement that it’s ‘dumb’ is true or false. It says, ‘I was not given a dumbing pill’, which means its sound was not switched off. Very simple logic.
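A sketch of that logic in a few lines (the names are hypothetical; the point is how little is needed): the ‘self-awareness’ is just a sensor reading fed back into a true/false test.

```python
def dumbing_pill_test(speech_enabled):
    spoke = speech_enabled          # the robot attempts to speak
    heard_own_voice = spoke         # a microphone picks up its own sound (or not)
    if heard_own_voice:
        # hearing its own voice falsifies the proposition "I was given the dumbing pill"
        return "Sorry, I know now. I was not given the dumbing pill."
    return ""                       # a dumbed robot cannot report anything

print(dumbing_pill_test(speech_enabled=True))
```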
I found an on-line article by Steven Schkolne (PhD in Computer Science at Caltech), so someone with far more expertise in this area than me, yet I found his arguments for so-called computer self-awareness a bit misleading, to say the least. He talks about 2 different types of self-awareness (specifically for computers) – external and internal. An example of external self-awareness is an iPhone knowing where it is, thanks to GPS. An example of internal self-awareness is a computer responding to someone touching the keyboard. He argues that “machines, unlike humans, have a complete and total self-awareness of their internal state”. For example, a computer can find every file on its hard drive and even tell you its date of origin, which is something no human can do.
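The kind of ‘internal self-awareness’ Schkolne describes amounts to something like the following (a minimal sketch using standard library calls): exhaustive access to internal data, with no experience attached to it.

```python
import os
import datetime

def list_files_with_dates(root="."):
    # enumerate every file under root, with its last-modified date
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            modified = datetime.datetime.fromtimestamp(os.path.getmtime(path))
            print(path, modified.date())

list_files_with_dates()
```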
From my perspective, this is a bit like the argument that a thermostat can ‘think’. ‘It thinks it’s too hot or it thinks it’s too cold, or it thinks the temperature is just right.’ I don’t know who originally said that, but I’ve seen it quoted by Daniel Dennett, and I’m still not sure if he was joking or serious.
Computers use data in a way that humans can’t and never will, which is why their memory recall is superhuman compared to ours. Anyone who can even remotely recall data like a computer is called a savant, like the famous ‘Rain Man’. The point is that machines don’t ‘think’ like humans at all. I’ll elaborate on this point later. Schkolne’s description of self-awareness for a machine has no cognitive relationship to our experience of self-awareness. As Schkolne says himself: “It is a mistake if, in looking for machine self-awareness, we look for direct analogues to human experience.” Which leads me to argue that what he calls self-awareness in a machine is not self-awareness at all.
A machine accesses data, like GPS data, which it can turn into a graphic of a map or just numbers representing co-ordinates. Does the machine actually ‘know’ where it is? You can ask Siri (as Schkolne suggests) and she will tell you, but he acknowledges that it’s not Siri’s technology of voice recognition and voice replication that makes your iPhone self-aware. No, the machine creates a map, so you know where ‘You’ are. Logically, a machine, like an aeroplane or a ship, could navigate over large distances with GPS with no humans aboard, like drones do. That doesn’t make them self-aware; it makes their logic self-referential, like the toy robot in my introductory example. So what Schkolne calls self-awareness, I call self-referential machine logic. Self-awareness in humans is dependent on consciousness: something we experience, not something we deduce.
And this is the nub of the argument. The argument goes that if self-awareness amongst humans and other species is a consequence of consciousness, then machines exhibiting self-awareness must be the first sign of consciousness in machines. However, self-referential logic, coded into software, doesn’t require consciousness; it just requires machine logic suitably programmed. I’m saying that the argument is back-to-front. Consciousness can definitely imbue self-awareness, but a machine coded with self-referential logic does not reverse the process and acquire consciousness.
I can extend this argument more generally to contend that computers will never be conscious for as long as they are based on logic. What I’d call problem-solving logic came late, evolutionarily, in the animal kingdom. Animals are largely driven by emotions and feelings, which I argue came first in evolutionary terms. But as intelligence increased so did social skills, planning and co-operation.
Now, insect colonies seem to put the lie to this. They are arguably closer to how computers work, based on algorithms that are possibly chemically driven (I actually don’t know). The point is that we don’t think of ants and bees as having human-like intelligence. A termite nest is an organic marvel, yet we don’t think the termites actually plan its construction the way a group of humans would. In fact, some would probably argue that insects don’t even have consciousness. Actually, I think they do. But to give another well-known example, I think the dance that bees do is ‘programmed’ into their DNA, whereas humans would have to ‘learn’ it from their predecessors.
There is a way in which humans are like computers, which I think muddies the waters, and leads people into believing that the way we think and the way machines ‘think’ is similar if not synonymous.
Humans are unique within the animal kingdom in that we use language like software; we effectively ‘download’ it from generation to generation and it limits what we can conceive and think about, as Wittgenstein pointed out. In fact, without this facility, culture and civilization would not have evolved. We are the only species (that we are aware of) that develops concepts and then manipulates them mentally because we learn a symbol-based language that gives us that unique facility. But we invented this with our brains; just as we invent symbols for mathematics and symbols for computer software. Computer software is, in effect, a language and it’s more than an analogy.
We may be the only species that uses symbolic language, but so do computers. Note that computers are like us in this respect, rather than us being like computers. With us, consciousness is required first, whereas with AI, people seem to think the process can be reversed: if you create a machine logic language with enough intelligence, then you will achieve consciousness. It’s back-to-front, just as self-referential logic creating self-aware consciousness is back-to-front.
I don't think AI will ever be conscious or sentient. There seems to be an assumption that if you make a computer more intelligent it will eventually become sentient. But I contend that consciousness is not an attribute of intelligence. I don't believe that more intelligent species are more conscious or more sentient. In other words, I don't think the two attributes are concordant, even though there is an obvious dependency between consciousness and intelligence in animals. But it’s a one-way dependency; if consciousness were dependent on intelligence, then computers would already be conscious.
Addendum: The so-called Turing test is really a test for humans, not robots, as this 'interview' with 'Sophia' illustrates.