Addendum: One of the interesting points that is raised in this programme is the
fact that we could feed the world now - it's a case of redistribution and waste
management, not production. There is no clearer example of our economic
paradigms being in conflict with our global needs. The wealth gap simply
forbids it.
Philosophy, at its best, challenges our long-held views, prompting us to examine them more deeply than we otherwise might.
Paul P. Mealing
Saturday, 8 June 2013
Why there should be more science in politics
This programme, aired on
ABC's Catalyst last Thursday, illustrates this very well. Not only are
scientists best equipped to see the future on global terms, they are best
equipped to find solutions. I think there is a complacency amongst both
politicians and the public-at-large that science will automatically rescue us
from the problems inherent in our global species' domination. But it seems to
me that our economic policies and our scientific future-seeing are at odds.
Infinite economic growth dependent on infinite population growth is not
sustainable. As the programme intimates, the 21st Century will be a crunch
point, and whilst everyone just assumes that science and technology will see us
through, it's only the scientists who actually acknowledge the problem.
Monday, 3 June 2013
Sequel to ELVENE
People who have read ELVENE invariably ask:
where’s the next one? Considering Elvene was first published in 2006, it’s been
a long time coming. Firstly, I was aware that I couldn’t possibly live up to
expectations – sequels rarely do – and I also knew that I would probably never
write a book as good as Elvene again.
There are many tensions inherent in
storytelling but none are more challenging than the contradictory goals of
realising readers’ expectations and providing surprises. Both are necessary for
a satisfactory rendition of a story and often have to be achieved
simultaneously.
So the sequel to ELVENE both opens and
closes with surprises, yet the journey’s end is rarely in doubt. I was often
tempted to abandon this exercise and let people imagine their own outcome from
the previous novel. That would have been the safe thing to do. But as I
progressed, especially in the second half, I was motivated by the opposite
desire: to write a sequel so that no one else would write it.
For most writers, the feeling is that the
story already exists, like the statue trapped in the marble, and, as the
writer, I’m simply the first person to read it. For much of the exercise I
wrote it as a serial to myself, not knowing what was going to happen next. This
is an approach many writers take – it provides the spontaneity that makes our
art come alive – even if I already knew how it was going to end (actually I didn't).
Footnote: I should point out that you don't need to have read ELVENE to read the sequel - it works as a standalone story. All the backstory you need is incorporated into the opening scenes.
Sunday, 19 May 2013
Is the universe a computer?
In New
Scientist (9 February 2013, pp.30-31) Ken Wharton presented an abridged
version of his essay, The universe is not
a computer, which won him third prize in the 2012 Foundational Questions
Institute essay contest. Wharton is a quantum physicist at San Jose State University,
California. I found it an interesting and well-written article that not only
put this question into an historical perspective, but addressed a fundamental
metaphysical issue that’s relevant to the way we do science and view the
universe itself. It also made me revisit Paul Davies’ The Goldilocks Enigma, because he addresses the same issue and
more.
Firstly, Wharton argues that Newton changed
fundamentally the way we do science when he used his newly discovered
(invented) differential calculus (which he called fluxions) to describe the
orbits of the planets in the solar system, and simultaneously confirmed, via
mathematics, that the gravity that keeps our feet on the ground is the very
same phenomenon that keeps the Earth in orbit around the sun. This of itself
doesn’t mean the universe is a computer, but Wharton argues that Newton’s use
of mathematics to uncover a natural law of the universe created a precedent in
the way we do physics and subliminally the way we perceive the universe.
Wharton refers to a ‘Newtonian schema’ that
tacitly supports the idea that because we predict future natural phenomena via
calculation, perhaps the universe itself behaves in a similar manner. To quote:
‘But even though we’ve moved well beyond
Newtonian physics, we haven’t moved beyond the new Newtonian schema. The
universe, we almost can’t help but imagine, is some cosmic computer that
generates the future from the past via some master “software” (the laws of
physics) and some initial input (the big bang).’
Wharton is quick to point out that this is
not the same thing as believing that the universe is a computer simulation –
they are entirely different issues – Paul Davies and David Deutsch make the
same point in their respective books (I reviewed Deutsch’s book, The Fabric of Reality, in September
2012, and Davies I discuss below). In fact, Deutsch argues that the universe is
a ‘cosmic computer’ and Davies argues that it isn’t, but I’m getting ahead of
myself.
Wharton’s point is that this belief is a
tacit assumption underlying all of physics: ‘…where
our cosmic computer assumption is so deeply ingrained that we don’t even
realise we are making it.’
A significant part of Wharton’s article
entails an exposition on the “Lagrangian”, which has dominated physics over the
last century, though it was first formulated by Joseph-Louis Lagrange in 1788
and foreseen, in essence, by Pierre de Fermat (in the previous century)
when he proposed the ‘least time’ principle for refracted light. A ray of light
will always take the path of least time when it goes between mediums – like air
and water or air and glass. James Gleick, in his biography of Richard Feynman, GENIUS, gives the example of a lifesaver
having to run at an angle along a beach and then swim through surf to reach a
swimmer in trouble. The point is that there is a path of ‘least time’ for the
lifesaver, amongst an infinite number of paths he could take. The 2 extremes
are that he could run perpendicularly into the surf and swim diagonally to the
swimmer or he could run diagonally to the surf at the point opposite the
swimmer and swim perpendicularly to him or her. Somewhere in between these 2
extremes there is an optimum path that would take least time (Wharton uses the
same analogy in his article). In the case of light, travelling obliquely
through 2 different mediums at different speeds, the light automatically takes
the path of ‘least time’. This is ‘Fermat’s principle’, even though he
couldn’t prove it at the time he formulated it.
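The lifesaver analogy can be checked numerically. The sketch below is a minimal illustration of my own; the speeds and distances are invented for the example, not taken from Gleick or Wharton. It finds the crossing point on the waterline that minimises total time by brute-force search, and at that point the least-time path satisfies Snell’s law, sinθ1/v1 = sinθ2/v2.

```python
import math

# Hypothetical geometry: lifesaver at (0, 0), waterline along y = D1,
# swimmer at (XS, D1 + D2). Running on sand is much faster than swimming.
V1, V2 = 7.0, 1.7            # run and swim speeds (m/s), assumed values
D1, D2, XS = 10.0, 5.0, 20.0  # sand depth, water depth, swimmer's x-offset (m)

def total_time(x):
    """Time to run to point (x, D1) on the waterline, then swim to the swimmer."""
    run = math.hypot(x, D1) / V1
    swim = math.hypot(XS - x, D2) / V2
    return run + swim

def least_time_crossing(steps=200000):
    """Brute-force search over the waterline for the minimum-time crossing point."""
    candidates = (i * XS / steps for i in range(steps + 1))
    return min(candidates, key=total_time)

x = least_time_crossing()
# At the least-time path, Snell's law holds: sin(theta1)/v1 == sin(theta2)/v2,
# where the angles are measured from the normal to the waterline.
sin1 = x / math.hypot(x, D1)
sin2 = (XS - x) / math.hypot(XS - x, D2)
print(f"crossing at x = {x:.3f} m; sin1/V1 = {sin1/V1:.5f}, sin2/V2 = {sin2/V2:.5f}")
```

The optimum lands well toward the swimmer’s side of the beach, because time spent in the slow medium (water) is expensive, just as light bends toward the normal when it enters the slower medium.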
Richard Feynman, in particular, used this
principle of ‘least action’, as it’s called, to formulate his path integral method
of quantum mechanics. In fact, as Brian Cox and Jeff Forshaw point out in The Quantum Universe (reviewed December,
2011) Planck’s constant, h, is expressed
in units of ‘least action’, and Feynman famously derived Schrödinger’s equation
from a paper that Paul Dirac wrote on ‘The Lagrangian in Quantum Mechanics’. Feynman also described the
significance of the principle, as applied to gravity, in Six Not-So-Easy Pieces - in effect, it dictates the path of a body
in a gravitational field. In a nutshell, the ‘least action’ is the difference
between the kinetic and potential energy of the body. Nature contrives that it
will always be a minimum, hence the description, ‘principle of least action’.
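This minimum can be demonstrated with a toy calculation of my own (not from any of the books cited). A ball of mass 1 kg is thrown straight up at t = 0 and caught 2 seconds later under g = 10 m/s²; the trial paths are triangles parametrised by their peak height h, and the action S(h) = ∫(kinetic − potential) dt works out to h² − g·h. Minimising it recovers a peak of 5 m, the same height as the true parabolic trajectory y = 10t − 5t².

```python
# Toy 'principle of least action' calculation (assumed setup, mass = 1 kg):
# a ball leaves y = 0 at t = 0 and returns to y = 0 at t = 2 s under gravity.
# Trial paths are triangles through (0,0), (1,h), (2,0); the path actually
# taken (the parabola y = 10t - 5t^2) peaks at y = 5 m at t = 1 s.
G = 10.0

def action(h, n=200):
    """Action S = integral of (kinetic - potential) energy along the triangular
    trial path with peak height h, integrated numerically over 0 <= t <= 2."""
    dt = 2.0 / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        y = h * t if t < 1.0 else h * (2.0 - t)  # triangle path
        v = h if t < 1.0 else -h                 # constant speed on each leg
        s += (0.5 * v * v - G * y) * dt          # (T - V) dt
    return s

# Grid search over peak heights: the minimum-action triangle peaks at h = 5 m,
# matching the peak of the true parabolic trajectory.
best_h = min((i * 0.01 for i in range(1001)), key=action)
print(f"least-action peak height = {best_h:.2f} m, S = {action(best_h):.2f} J.s")
```

Among these trial paths the action is h² − 10h, so the grid search bottoms out at h = 5, where S = −25 J·s; neighbouring heights give a larger action, which is the ‘least action’ at work.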
A bit of a detour, but
it seems to be a universal principle that appears in every area of physics.
Its relevance to Wharton’s thesis is that ‘…physicists
tend to view it as a mathematical trick rather than an alternative framework
for how the universe might really work.’
However, Wharton
argues that the mathematics of a ‘Lagrangian-friendly formulation of quantum theory
[proposed by him] could be taken literally’. So Wharton is not eschewing
mathematics or natural laws in mathematical guise (which is what a Lagrangian really
is); he’s contending that the Newtonian schema no longer applies to quantum
mechanics because of its inherent uncertainty and the need for a ‘…”collapse”, when all the built-up
uncertainty suddenly emerges into reality.’
David Deutsch, for
those who are familiar with his ideas, overcomes this obstacle by contending
that we live in a quantum multiverse, so there is no ‘collapse’, just a number
of realities, all consequences of the multiverse behaving like a cosmic quantum
computer. I’ve discussed this and my particular objections to it in another post.
Paul Davies discusses
these same issues in the context of the universe’s evolution and all the
diverse philosophical views that such a discussion encompasses. Davies devotes
many pages of print to this topic and to present it in a few paragraphs is a
travesty, but that’s what I’m going to do. In particular, Davies equates
mathematical Platonism with Wharton’s Newtonian schema, though he doesn’t specifically
reference Newton. He provides a compelling argument that a finite universe
can’t possibly do calculus-type calculations requiring infinite elements of
information. And that’s the real schema (or paradigm) that modern physics seems
to embrace: that everything in the universe from quantum phenomena to
thermodynamics to DNA can be understood in terms of information; in ‘bits’,
which makes the computer analogy not only relevant but impossible to ignore.
Personally, I think the computer analogy is apposite only because we live in
the ‘computer age’. It’s not only the universe that is seen as a computer, but
also the human brain (and other species, no doubt). The question I always ask
is: where is the software? But that’s another topic.
DNA, to all intents
and purposes, is a form of natural software where the code is expressed in
amino acids and the hardware is the proteins that are constructed and manipulated
on a daily basis. DNA is a set of instructions to build a functioning
biological organism – it’s as teleological as nature gets. A large part of
Davies’ discussion entails teleology and its effective expulsion from science
after Darwin, but the construction of every living organism on the planet is
teleological even though its evolution is not. Another detour, though not an
irrelevant one.
Davies argues that
he’s not a Platonist, whilst acknowledging that most physicists conduct science
in the Platonist tradition, even if they don’t admit it. Specifically, Davies
challenges the Platonist precept that the laws of nature exist independently of
the universe. Instead, he supports John Wheeler’s philosophy that ‘the laws of
the universe emerged… “higgledy-piggledy”… and gradually congealed over time.’
I disagree with Davies, fundamentally on this point, not because the laws of
the universe couldn’t have evolved over time, but because there is simply more
mathematics than the universe needs to exist.
Davies also discusses
at length the anthropic principle, both the weak and strong versions, and calls
Deutsch’s version the ‘final anthropic principle’. Davies acknowledges that the
strong version is contrary to the scientific precept that the universe is not
teleological, yet, like me, points out the nihilistic conclusion (my term, not
his) of a universe without consciousness. Davies overcomes this by embracing
Wheeler’s philosophical idea that we are part of a cosmological quantum loop –
an intriguing but not physically impossible concept. In fact, Davies’ book is
as much a homage to Wheeler as it is an expression of his own philosophy.
My own view is much closer to Roger Penrose’s: that there are 3 worlds – the mental, the Platonic and the physical;
and that they can be understood in a paradoxical cyclic loop. By Platonic, he
means mathematical, which exists independently of humanity and the universe,
yet we only comprehend as a product of the human mind, which is a product of
the physical universe, which arose from a set of mathematical laws – hence the
loop. In my view this doesn’t make the universe a computer. I agree with
Wharton on this point, but I see quantum mechanics as a substrate of the
physical universe that existed before the universe as we know it evolved. This
is consistent with the Hartle-Hawking cosmological view that the universe had
no beginning in time as well as being consistent with Davies’ exposition that the
‘…vanishing of time for the entire universe becomes very explicit in quantum
cosmology, where the time variable simply drops out of the quantum
description.’
I’ve discussed this cosmological viewpoint
before, but if the quantum substrate exists outside of time, then Wheeler’s
and Davies’ version of the anthropic principle suddenly becomes more tenable.
Addendum: I wrote another post on this in 2018, which I feel is a stronger argument, and, in particular, includes the role of chaos.
Saturday, 11 May 2013
Analogy: the unique cognitive mechanism for learning
Douglas Hofstadter and Emmanuel Sander have recently co-authored a book, Surfaces
and Essences: Analogy as the fuel and fire of thinking (no, I haven’t read
it). Hofstadter famously won a Pulitzer Prize in 1980 for Gödel, Escher, Bach, which I reviewed in February 2009, and is
professor of cognitive and computer science at Indiana University, Bloomington,
while Sander is professor of psychology at the University of Paris.
They’ve summarised their philosophy and
insights in a 4 page article in last week’s New
Scientist (4 May 2013, pp. 30-33) titled The forgotten fuel of our minds. Basically, they claim that analogy
is the fundamental engine behind our supra-natural cognitive abilities
(relative to other species) and their argument resonates with views I’ve
expressed numerous times myself. But they go further and claim that we use
analogies all the time, without thinking, in our everyday social interactions
and activities.
Personally, I think there are 2 aspects to
this, so I will discuss them separately before bringing them together. To take
the last point first, in psychology one learns about ‘schemas’ and ‘scripts’,
and I think they’re very relevant to this topic. To quote from Vaughan and Hogg
(professor of psychology, University of Auckland and professor of psychology,
University of Queensland, respectively) in their Introduction to Social Psychology, a schema is a ‘Cognitive
structure that represents knowledge about a concept or type of stimulus, including
its attributes and the relations among those attributes’ (Fiske and Taylor,
1991) and a script is ‘A schema about an event.’
Effectively, a schema is what we bring to
every new interaction that we experience and, not surprisingly, it is based on
what we’ve experienced before. We even have a schema for the self, which we
continually evaluate and revise dependent on feedback from others and our sense
of purpose, not to mention consequential achievements and failures. A ‘script’
is the schema we have for interactions with others and examples include how we
behave in a restaurant or in a work place or in the home. The relevance to
Hofstadter’s and Sander’s article is that they explain these same psychological
phenomena as analogies, and they also make the point that they are dependent on
past experiences.
I’ve made the point in other posts, that we
only learn new knowledge when we can integrate it into existing knowledge. A
good example is when we look up a word in a dictionary – it will only make
sense to us if it’s explained using words we already know. Mathematics is
another good example because it’s clearly a cumulative epistemological
endeavour. One can’t learn anything about calculus if one doesn’t know algebra.
This is why the gap between what one is expected to know and what one can
acquire becomes increasingly unbridgeable in esoteric subjects if one fails to grasp basic
concepts. This fundamental cognitive ability, which we use every day, is
something that other species don’t seem to possess. To give a more prosaic example,
we all enjoy stories, be it in books or on stage or in movies or TV. A story
requires us to continually integrate new knowledge into existing knowledge and
yet we do it with little conscious effort. We can even drop it and pick it up
later with surprising efficacy.
And this is why analogy is the method of
choice when it comes to explaining something new. We all do it and we all
expect it. When someone is explaining something - not unlike what I’m doing now
- we want examples and analogies, and, when it comes to esoteric topics (like calculus), I do my best to deliver. In other words, analogy allows us to explain
(and understand) something new based on something we already know. And this is
the relationship with schemas and scripts, because we axiomatically use
existing schemas and scripts when we are confronted with a new experience,
modifying them to suit as we proceed and learn.
But there is another aspect to analogy,
which is not discussed explicitly by Hofstadter and Sander in their article, and
that is metaphor (though they use metaphors as examples while still calling
them analogies). Metaphor is undoubtedly a uniquely human cognitive trait. And
metaphor is analogy in compact form. It’s also one of the things that separates
us from AI, thus far. In my own speculative fiction, I’ve played with this idea
by creating an exceptional AI, then tripping ‘him’ up (yes, I gave him a
gender) using metaphor as cliché.
To be fair to Hofstadter and Sander, there
is much more to their discourse than I’ve alluded to above.
Thursday, 2 May 2013
Ashamed to be Australian
This is an eye-opening documentary that the Australian government is doing its best to keep
out-of-sight, out-of-mind. It’s criminal in anyone’s language: the detention of
refugees off-shore with little or no recourse to legal representation.
The story reveals
hidden-camera footage as well as interviews with people who spent time there
and were distressed at what they observed. As one young Salvation Army
volunteer observes, the government has spent millions of dollars to punish and
hide these people from public view – the detainees know this themselves.
The proclaimed
objective, according to the government, is that the detention is a deterrent
to other people seeking asylum, yet, as the programme reveals, there is no
evidence to support this. The more likely objective is purely political, as the
major parties are in a psychological-power struggle to prove who is the most
ruthless and hard-minded (i.e. immoral) in dealing with asylum seekers. It’s
all about winning the xenophobic vote in the next election.
These detention
centres are mental illness factories, as 2010 Australian of the Year, Professor Patrick McGorry, so aptly
described them. That was under a Liberal government but the Labor government
has proven that its policies are just as criminal and arguably less humane.
Addendum: GetUp have a petition to close the Manus Island detention centre. Thanks to Kay Hart for sending it to me.
Tuesday, 23 April 2013
In memory of Chrissy Amphlett: 1959 - 2013
And Chrissy's wicked sense of humour, on the same show (host is the incomparable Julia Zemiro). This may offend some people, but I find it hilarious, and yes, it was broadcast on free-to-air TV on a Saturday night.