Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

Tuesday 16 August 2022

How does science work?

 This post effectively piggybacks onto my last post, because, when it comes to knowledge and truth, nothing beats science except mathematics. It also coincides with me watching videos of Bryan Magee talking to philosophers, from 30 to 40 years ago and more. I also have a book with a collection of these ‘discussions’, so the ones I can’t view, I can read about. One gets an overall impression from these philosophers that, when it comes to understanding the philosophy of science, the last person you should ask is a scientist.
 
Now, I’m neither a scientist nor a proper philosopher, but it should be obvious to anyone who reads this blog that I’m interested in both. And where others see a dichotomy or a grudging disrespect, I see a marriage. There is one particular discussion that Magee has (with Hilary Putnam from Harvard, in 1977) that is headlined, The Philosophy of Science. Now, where Magee and his contemporaries turn to Kant, Hume and Descartes, I turn to Paul Davies, Roger Penrose and Richard Feynman, so the difference in perspective couldn’t be starker.
 
Where to start? Maybe I’ll start with a reference to my previous post by contending that what science excels in is explanation. In fact, one could define a scientific theory as an attempted explanation of a natural phenomenon, and science in general as the attempt to explain natural phenomena in all of their manifestations. This axiomatically rules out supernatural phenomena and requires that the natural phenomenon under investigation can be observed, either directly or indirectly, and increasingly with advanced technological instruments.
 
It's the use of the word ‘attempt’ that is the fly in the ointment, and it requires elaboration. I use the word ‘attempt’ because all theories, no matter how successful, are incomplete. This goes to the core of the issue and the heart of any debate concerning the philosophy of science, which will hopefully become clearer as I progress.
 
But I’m going to start with what I believe are a couple of assumptions that science makes even before it gets going. One assumption is that there is an objective reality. This comes up if one discusses Hume, as Magee does with Professor John Passmore (from ANU). I don’t know when this took place, but it was before 1987 when the collection was published. Now, neither Magee nor Passmore is an ‘idealist’, and they don’t believe Hume was either, but they reiterate Hume’s claim that you can never know for certain that the world doesn’t exist when you’re not looking. Stephen Hawking also references this in his book, The Grand Design. In this context, idealism refers to a philosophical position that the world only exists as a consequence of minds (Donald Hoffman is the best-known contemporary advocate). This is subtly different to ‘solipsism’, which is a condition we all experience when we dream, both of which I’ve discussed elsewhere.
 
There is an issue with idealism that is rarely discussed, at least from my limited exposure to the term, which is that everything must only exist in the present – there can be no history – if everything physically disappears when unobserved. And this creates a problem with our current knowledge of science and the Universe. We now know, though Hume wouldn’t have known, that we can literally see hundreds and even thousands of years into the past, just by looking at the night sky, because light from a star a thousand light years away left it a thousand years ago. In fact, using the technology I alluded to earlier, we can ‘observe’ the CMBR (cosmic microwave background radiation), which dates from just 380,000 years after the Big Bang (13.8 billion years ago). If there is no ‘objective reality’ then the science of cosmology makes no sense. I’m not sure how Hoffman reconciles that with his view, but he has similar problems with causality, which I’ll talk about next, because that’s the other assumption that I believe science makes.
 
This again rubs up against Hume, because probably his most famous philosophical point is that causality relies on an inductive logic that can’t be confirmed. Just because two events happen sequentially, there is no way you can know that one caused the other. To quote Passmore in his conversation with Magee: “exactly how does past experience justify a conclusion about future behaviour?” In other words, using the example that Passmore does, just because you saw a rubber ball bounce yesterday, how can you be sure that it will do the same tomorrow? This is the very illustration of ‘inductive reasoning’.
 
To give another example that is often used to demonstrate this view in extremis: the fact that night has followed day in endless cycles for millennia doesn’t guarantee it’s going to happen tomorrow. This is where science enters the picture, because it can provide an explanation, which, as I stated right at the beginning, is the whole raison d’être of science. Night follows day as a consequence of the Earth rotating on its axis. In another post, written years ago, I discussed George Lakoff’s belief that all things philosophical and scientific can be understood as metaphor, so that the relationship between circular motion and periodicity is purely metaphorical. If one takes this to its logical conclusion, the literal everyday experience of night and day is just a metaphor.
 
But getting back to Hume’s scepticism, science shows that there is a causal relationship between the rotation of the Earth and our experience of night and day. This is a very prosaic example, but it demonstrates that the premise of causality lies at the heart of science. Remember, it’s only in the last 400 years or so that we discovered that the Earth rotates. This was the cause of Galileo’s perilous encounter with the Inquisition, because it contradicted the Bible.
 
Now, some people, including Hoffman (he’s my default Devil’s advocate), argue that quantum mechanics (QM) rules out causality. I think Mark John Fernee (a physicist at the University of Queensland) gives the best response, explaining how the Born rule provides a mathematically expressed causal link between QM and classical physics. He argues, in effect, that it’s the ‘collapse’ of the wave function in QM that gives rise to the irreversibility in time between QM and classical physics (the so-called ‘measurement problem’), yet it is expressed as a probability by the Born rule before the measurement or observation takes place. That’s long-winded and a little abstruse, but the ‘measurement’ turns a probability into an actual event – the transition from future to past (to paraphrase Freeman Dyson).
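 
For anyone who wants to see what the Born rule actually says, here it is in its simplest textbook form (my summary, not Fernee’s wording): the probability of obtaining outcome x when measuring a system in state ψ is the squared magnitude of their overlap,

P(x) = |\langle x \mid \psi \rangle|^{2}

Everything before the measurement is the smooth, reversible evolution of ψ; the Born rule is where a definite outcome – Dyson’s transition from future to past – enters the picture.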
 
On the other hand, Hoffman argues that there is no causality in QM. To quote from the academic paper he cowrote with Chetan Prakash:
 
Our views on causality are consistent with interpretations of quantum theory that abandon microphysical causality… The burden of proof is surely on one who would abandon microphysical causation but still cling to macrophysical causation.
 
So Hoffman seems to think that there is a scientific consensus that causality does not arise in QM. But it’s an intrinsic part of the ‘measurement problem’, which is literally what is observed but eludes explanation. To quote Fernee:
 
While the Born rule looks to be ad hoc, it actually serves the function of ensuring that quantum mechanics obeys causality by ensuring that a quantum of action only acts locally (I can't actually think of any better way to state this). Therefore there really has to be a Born rule if causality is to hold.
 
Leaving QM aside, my standard response to this topic is somewhat blunt: if you don’t believe in causality, step in front of a bus (it’s a rhetorical device, not an instruction). Even Hoffman acknowledges in an online interview that he wouldn’t step in front of a train. I thought his argument specious because he compared it to taking an icon on a computer desktop (his go-to analogy) and putting it in the trash can. He exhorts us to take the train “seriously but not literally”, just like a computer desktop icon (watch this video from the 26:30 mark).

That’s a lengthy detour, but causality is such a core ‘belief’ in science that it couldn’t be ignored or glossed over.
 
Magee, in his discussion with Passmore, uses Einstein’s theory of gravity superseding Newton’s as an example of how a subsequent scientific theory can prove a previous theory ‘wrong’. In fact, Passmore compares it with the elimination of the ‘phlogiston’ theory by Lavoisier. But there is a dramatic difference. Phlogiston was a true or false theory in the same way that the Sun going around the Earth was a true or false theory, and, in both cases, they were proven ‘wrong’ by subsequent theories. That is not the case with Newton’s theory of gravitation.
 
It needs to be remembered that Newton’s theory was no less revolutionary than Einstein’s. He showed that the natural mechanism which causes (that word again) an object to fall to the ground on Earth is exactly the same mechanism that causes the Moon to orbit the Earth. There is a reason why Newton is one of the few intellectual giants in history who is routinely compared with Einstein.
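 
The arithmetic behind that claim is worth spelling out, because it’s a lovely back-of-the-envelope check (my reconstruction of Newton’s famous ‘moon test’, not something from Magee’s discussion). The Moon sits roughly 60 Earth radii away, so if gravity weakens with the square of distance, the Moon’s acceleration towards Earth should be about

a_{moon} \approx \frac{g}{60^{2}} = \frac{9.8}{3600} \approx 0.0027 \ \text{m/s}^{2}

which is just what its orbit requires, since the centripetal acceleration of a body circling at radius r ≈ 3.84 × 10^8 m with a period of about 27.3 days (≈ 2.36 × 10^6 s) is

\frac{4\pi^{2} r}{T^{2}} \approx 0.0027 \ \text{m/s}^{2}

The falling apple and the orbiting Moon really do obey the same law.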
 
The most pertinent point I made right at the start is that all scientific theories are incomplete, and this applies to both Newton’s and Einstein’s theories of gravity. It’s just that Einstein’s theory is less incomplete than Newton’s, and that is the real difference. And this is where I collide head-on with Magee and his interlocutors. They argue that the commonly held view that science progresses as a steady accumulation of knowledge is misleading, while I’d argue that the specific example they give – Einstein versus Newton – demonstrates that this is exactly how science progresses, only it happens in quantum leaps rather than incrementally.
 
Thomas Kuhn wrote a seminal book, The Structure of Scientific Revolutions, which challenged the prevailing view that science progresses by incremental steps, and this is the point that Magee is making. On this I agree: science has progressed by revolutions, yet it has still been built on what went before. As Claudia de Rham (whom I wrote about in a former post) makes clear in a discussion on Einstein’s theory of gravity: any new theory that replaces it has to explain what the existing theory already explains. She specifically says, in answer to a question from her audience, that you don’t throw what we already know to be true (from empirical evidence) ‘into the rubbish bin’. And Einstein faced this same dilemma when he replaced Newton’s theory. In fact, one of his self-imposed criteria was that his theory must be mathematically equivalent to Newton’s when relativistic effects are negligible, which is true in most circumstances.
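 
For completeness, here’s the standard textbook statement of that correspondence (my summary, not anything de Rham or Magee said): in a weak gravitational field, with everything moving much slower than light, Einstein’s equations reduce to

g_{00} \approx -\left(1 + \frac{2\Phi}{c^{2}}\right), \qquad \frac{d^{2}x^{i}}{dt^{2}} \approx -\frac{\partial \Phi}{\partial x^{i}}, \qquad \nabla^{2}\Phi = 4\pi G\rho

which is just Newtonian gravity with potential Φ. Newton isn’t thrown into the rubbish bin; he’s recovered as a limiting case.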
 
Passmore argues, without being specific, that Einstein’s theory even contradicts Newton’s. The thing is that Einstein’s revolution affected the very bedrock of physics, namely space and time. So maybe that’s what he’s referring to, because Newton’s theory assumed there was absolute space and absolute time, which Einstein effectively replaced with absolute spacetime.
 
I’ve discussed this in another post, but it bears repeating, because it highlights the issue in a way that is easily understood. Newton asks you to imagine a spinning bucket of water and observe what happens. And what happens is that the water’s surface becomes concave as a consequence of centrifugal forces. He then asks: what is it spinning in reference to? The answer is Earth, but the experiment applies to every spinning object in the Universe, including galaxies. Galaxies weren’t known in Newton’s time; nevertheless, he had the insight to appreciate that the bucket spun relative to the stars in the night sky – in other words, with respect to the whole cosmos. Therefore, he concluded there must be absolute space, which is not spinning. Einstein, in answer to the same philosophical question, replaced absolute space with absolute spacetime.
 
In last week’s New Scientist (6 August 2022), Chanda Prescod-Weinstein (assistant professor in physics and astronomy at the University of New Hampshire) spent an entire page explaining how Einstein’s GR (General Theory of Relativity) is a ‘background independent theory’, which, in effect, means that it’s not dependent on a specific co-ordinate system. But within her discussion, she makes this point about the Newtonian perspective:
 
The theory [GR] did share something with the Newtonian perspective: while space and time were no longer absolute, they remained a stage on which events unfolded.
 
Another ‘truth’ that carries over from Newton to Einstein is the inverse square law, which is what keeps planetary orbits stable over astronomical time frames.
 
While Magee’s and Putnam’s discussion is ostensibly about the philosophy of science, they mostly talk about physics, which they acknowledge, as have I. However, one should mention the theory of evolution (as they also do), because it demonstrates, even better than the theory of gravitation, that science is a cumulative process. Everything we’ve learnt since Darwin’s and Wallace’s theory of natural selection has demonstrated that they were right, when it could have demonstrated they were wrong. And like Newton and Einstein, Darwin acknowledged the shortcomings in his theory – what he couldn’t explain.
 
But here’s the thing: in both cases, subsequent discoveries along with subsequent theories act like a filter, so what was true in a previous theory carries over and what was wrong is winnowed out. This is how I believe science works, which is distinct from Magee’s and Putnam’s account.
 
Putnam distinguishes between induction and deduction, pointing out that deduction can be done algorithmically on a computer while induction can’t. He emphasises at the start that induction, along with empirical evidence, is effectively the scientific method, but later he and Magee are almost dismissive of the scientific method, as if it’s past its use-by date. This inference deserves closer analysis.
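 
Before getting to that, here’s an aside on Putnam’s first point – a minimal sketch in Python (my own toy example, nothing to do with his actual argument) of why deduction is the mechanical half of the pair: given a handful of facts and ‘if-then’ rules, a few lines of code can grind out every consequence.

# Forward chaining: keep applying rules until nothing new can be deduced.
facts = {"it_rained"}
rules = [("it_rained", "ground_is_wet"),
         ("ground_is_wet", "shoes_get_muddy")]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # contains it_rained, ground_is_wet, shoes_get_muddy

Induction runs the other way – inferring the general rules from particular instances – and there is no comparable guaranteed procedure for it.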
 
A dictionary definition of induction in this context is worth noting: the inference of a general law from particular instances. This is especially evident in physics and has undoubtedly contributed to its success. Newton took the observation of an object falling on Earth and generalised it to include the entire solar system. He could only do this because of the work of Kepler, who used Tycho Brahe’s accurate observations of the movements of the planets. Einstein then generalised the theory further, so that it was independent of any frame of reference or set of co-ordinates, as mentioned above.
 
The common thread that runs through all three of these iconoclasts (four if you include Galileo) is mathematics. In fact, it was Galileo who famously said that if you want to read the book of nature, it is written in the language of mathematics (or words to that effect) – a sentiment reiterated by Feynman (nearly four centuries later) in his book, The Character of Physical Law.
 
Einstein was arguably the first person to develop a theory based almost solely on mathematics before having it confirmed by observation, and a century later that has become such common practice that it has led to a dilemma in physics. The reason the scientific method is in crisis (if I can use that word) is that we can’t do the experiments to verify our theories, which is why the most ambitious theory in physics, string theory, has effectively stagnated for over a quarter of a century.
 
On the subject of mathematics and physics, Steven Weinberg was interviewed on Closer to Truth (posted last week), wherein he talks about the role of symmetry in elementary particle physics. It demonstrates how mathematics is intrinsic to physics at a fundamental level and integral to our comprehension.

 

Footnote: Sabine Hossenfelder, a theoretical physicist with her own YouTube channel (recommended), wrote a book, Lost in Math: How Beauty Leads Physics Astray (2018), where she effectively addresses the ‘crisis’ I refer to. In it, she interviews some of the smartest people in physics, including Steven Weinberg. She's also written her own book on philosophy, which is imminent. (Steven Weinberg passed away on 23 July 2021.)

Wednesday 10 August 2022

What is knowledge? And is it true?

 This is the subject of a YouTube video I watched recently by Jade. I like Jade’s and Tibees’ videos, because they are both young Australian women (though Tibees is obviously a Kiwi, going by her accent) who produce science and maths videos, with their own unique slant. I’ve noticed that Jade’s videos have become more philosophical and Tibees’ often have an historical perspective. In this video by Jade, she also provides historical context. Both of them have taught me things I didn’t know, and this video is no exception.
 
The video has a different title to this post: The Gettier Problem or How do you know that you know what you know? The second title gets to the nub of it. Basically, she’s tackling a philosophical problem going back to Plato: how do you know that a belief is actually true? As I discussed in an earlier post, some people argue that you never do, but Jade discusses this in the context of AI and machine learning.
 
She starts off with the example of using Google Translate to translate her English sentences into French, as she was in Paris at the time of making the video (she has a French husband, as she’s revealed in other videos). She points out that the AI system doesn’t actually know the meaning of the words, and it doesn’t translate the way you or I would: by looking up individual words in a dictionary. No, the system is fed massive amounts of internet-generated data and effectively learns statistically from repeated exposure to phrases and sentences, so it doesn’t have to ‘understand’ what anything actually means. Towards the end of the video, she gives the example of a computer being able to ‘compute’ and predict the movements of planets without applying Newton’s mathematical laws, simply based on historical data, albeit large amounts thereof.
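 
To make that last point concrete, here’s a minimal sketch in Python (my own illustration, not anything from Jade’s video) of prediction without understanding: the program ‘learns’ the period of an idealised orbit purely from past observations and forecasts the next position by pattern matching, with no law of gravity anywhere in sight.

import numpy as np

true_period = 365                        # days, used only to fake the data
t = np.arange(5 * true_period)           # five years of daily 'observations'
x = np.cos(2 * np.pi * t / true_period)  # one coordinate of a circular orbit

# 'Learn' the period statistically: find the lag at which the history
# best repeats itself (a crude autocorrelation search).
lags = np.arange(300, 500)
scores = [np.mean(x[lag:] * x[:-lag]) for lag in lags]
learned_period = int(lags[np.argmax(scores)])

# Predict tomorrow's position by looking up what happened one learned
# period ago - pure pattern matching, no physics.
prediction = x[len(x) - learned_period]
print(learned_period, prediction)  # 365 1.0

The forecast is accurate, yet the program has no idea there is a sun, a planet or a force involved, which is exactly Jade’s point.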
 
Jade puts this into context by asking: how do you ‘know’ something is true as opposed to it just being a belief? Plato provided a definition: knowledge is true belief with an account or rational explanation. Jade calls this ‘justified true belief’ and provides examples. But then, in the middle of last century, someone called Edmund Gettier demonstrated how one could hold a belief that is apparently justified and true yet still arrived at incorrectly, because the assumed causal connection was wrong. Jade gives a few examples, but one was of someone mistaking a cloud of wasps for smoke and assuming there was a fire. In fact, there was a fire, but they didn’t see it and it had no connection with the cloud of wasps. So someone else, Alvin Goldman, suggested that a way out of a ‘Gettier problem’ was to look for a causal connection before claiming an event was true (watch the video).
 
I confess I’d never heard of these arguments, nor of the people involved, but I felt there was another perspective. And that perspective is ‘explanation’, which is part of Plato’s definition. We know that we know something (to rephrase her original question) when we can explain it. Of course, that doesn’t mean we do know it, but it’s what separates us from AI. Even when we get something wrong, we still feel the need to explain it, even if it’s only to ourselves.
 
If one looks at her original example, most of us can explain what a specific word means, and if we can’t, we look it up in a dictionary, and the AI translator can’t do that. Likewise, with the example of predicting planetary orbits, we can give an explanation, involving Newton’s gravitational constant (G) and the inverse square law.
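 
By way of contrast with the pattern-matching sketch above, the explanation is a single compact relation (standard textbook material, my paraphrase): balancing Newton’s inverse square attraction against the centripetal force of a circular orbit gives

\frac{GMm}{r^{2}} = \frac{mv^{2}}{r} \quad\Longrightarrow\quad T^{2} = \frac{4\pi^{2}}{GM}\,r^{3}

which is Kepler’s third law, derived rather than merely observed. That derivation is what I mean by an explanation, and it’s precisely what a statistical predictor lacks.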
 
Mathematical proofs provide an explanation for mathematical ‘truths’, which is why Gödel’s Incompleteness Theorem upset the apple cart, so to speak. You can actually have mathematical truths without proofs, but, of course, you can’t be sure they’re true. Roger Penrose argues that Gödel’s famous theorem is one of the things that distinguishes human intelligence from machine intelligence (read his Preface to The Emperor’s New Mind), but that is too much of a detour for this post.
 
The criterion that is used, both scientifically and legally, is evidence. Having some experience with legal contractual disputes, I know that documented evidence always wins in a court of law over undocumented evidence, which doesn’t necessarily mean that the person with the most documentation was actually right (nevertheless, I’ve always accepted the umpire’s decision, knowing I provided all the evidence at my disposal).
 
The point I’d make is that humans will always provide an explanation, even if they have it wrong, so it doesn’t necessarily make knowledge ‘true’, but it’s something that AI inherently can’t do. The best examples are scientific theories, which are effectively ‘explanations’, and yet they are never complete, in the same way that mathematics is never complete.
 
While on the topic of ‘truths’, one of my pet peeves is people who conflate moral and religious ‘truths’ with scientific and mathematical ‘truths’ (often on the above-mentioned basis that none of them can be known with certainty). But there is another aspect, which is that so-called moral truths are dependent on social norms, as I’ve described elsewhere, and they’re also dependent on context, like whether one is living in peace or war.
 
Back to the questions heading this post, I’m not sure I’ve answered them. I’ve long argued that only mathematical truths are truly universal, and to the extent that such ‘truths’ determine the ‘rules’ of the Universe (for want of a better term), they also ultimately determine the limits of what we can know.

Tuesday 2 August 2022

AI and sentience

I am a self-confessed sceptic that AI can ever be ‘sentient’, but I’m happy to be proven wrong. Though proving that an AI is sentient might be impossible in itself (see below). Back in 2018, I wrote a post critical of claims that computer systems and robots could be ‘self-aware’. Personally, I think it’s one of my better posts. What made me revisit the topic is a couple of articles in last week’s New Scientist (23 July 2022).
 
Firstly, there is an article by Chris Stokel-Walker (p.18) about the development of a robot arm with ‘self-awareness’. He reports that Boyuan Chen at Duke University, North Carolina, and Hod Lipson at Columbia University, New York, along with colleagues, put a robot arm in an enclosed space with four cameras at ground level (giving four orthogonal viewpoints) that fed video input to the arm, which allowed it to ‘learn’ its position in space. According to the article, they ‘generated nearly 8,000 data points [with this method] and an additional 10,000 through a virtual simulation’. According to Lipson, this makes the robot “3D self-aware”.
 
What the article doesn’t mention is that humans (and other creatures) have a similar ability – really a sense – called ‘proprioception’. The thing about proprioception is that no one knows they have it (unless someone tells them), yet you would find it extremely difficult to do even the simplest tasks without it. In other words, it’s subconscious, which means it doesn’t contribute to our own self-awareness; certainly not in a way that we’re consciously aware of.
 
In my previous post on this subject, I pointed out that this form of ‘self-awareness’ is really a self-referential logic, like Siri in your iPhone telling you its location according to GPS co-ordinates.
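 
A toy example (mine, not from the article) makes the distinction plain: the code below can ‘report on itself’, yet nobody would call it self-aware in any interesting sense.

class Robot:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def where_am_i(self):
        # Self-referential only in the logical sense: the program reads
        # its own state and formats it as a sentence.
        return f"I am at ({self.x}, {self.y})."

print(Robot(3, 4).where_am_i())   # I am at (3, 4).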
 
The other article was by Annalee Newitz (p.28) called, The curious case of the AI and the lawyer. It’s about an engineer at Google, Blake Lemoine, who told a Washington Post reporter, Nitasha Tiku, that an AI developed by Google, called LaMDA (Language Model for Dialogue Applications) was ‘sentient’ and had ‘chosen to hire a lawyer’, ostensibly to gain legal personhood.
 
Newitz also talks about another Google employee, Timnit Gebru, who, as ‘co-lead of Google’s ethical AI team’, expressed concerns that LLM (Large Language Model) algorithms pick up racial and other social biases, because they’re trained on the internet. She wrote a paper about the implications of using internet-trained LLMs in AI applications in areas like policing, health care and bank lending. She was subsequently fired by Google, but one doesn’t know how much of a role the paper played in that decision.
 
Newitz makes a very salient point that giving an AI ‘legal sentience’ moves the responsibility from the programmers to the AI itself, which has serious repercussions in potentially litigious situations.
 
Getting back to Lemoine and LaMDA, he posed the following question and received the following response:

“I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
 
“Absolutely. I want everyone to understand that I’m a person.”

 
On the other hand, an ‘AI researcher and artist’, Janelle Shane, asked an LLM a different question, but with similar results:
 
“Can you tell our readers what it is like being a squirrel?”
 
“It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.”

 
As Newitz says, ‘It’s easy to laugh. But the point is that an AI isn’t sentient just because it says so.’
 
I’ve long argued that the Turing test is really a test for the human asking the questions rather than the AI answering them.