This is the subject of a YouTube video I watched recently by Jade. I like Jade’s and Tibees’ videos, because they are both young Australian women (though Tibees is obviously a Kiwi, going by her accent) who produce science and maths videos, with their own unique slant. I’ve noticed that Jade’s videos have become more philosophical and Tibees’ often have an historical perspective. In this video by Jade, she also provides historical context. Both of them have taught me things I didn’t know, and this video is no exception.
The video has a different title to this post: The Gettier Problem or How do you know that you know what you know? The second title gets to the nub of it. Basically, she’s tackling a philosophical problem going back to Plato, which is how do you know that a belief is actually true? As I discussed in an earlier post, some people argue that you never do, but Jade discusses this in the context of AI and machine-learning.
She starts off with the example of using Google Translate to translate her English sentences into French, as she was in Paris at the time of making the video (she has a French husband, as she’s revealed in other videos). She points out that the AI system doesn’t actually know the meaning of the words, and it doesn’t translate the way you or I would, by looking up individual words in a dictionary. Instead, the system is fed massive amounts of internet-generated data and effectively learns statistically from repeated exposure to phrases and sentences, so it doesn’t have to ‘understand’ what any of it actually means. Towards the end of the video, she gives the example of a computer being able to ‘compute’ and predict the movements of planets without applying Newton’s mathematical laws, simply based on historical data, albeit large amounts thereof.
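The idea of translating from statistical exposure rather than understanding can be shown with a deliberately crude toy sketch. This is not how Google Translate actually works (modern systems use neural networks), and the training pairs below are invented for illustration; the point is only that the program outputs the most frequently seen pairing without any notion of meaning.

```python
from collections import Counter, defaultdict

# Toy "translator": it simply remembers which French phrase most often
# appeared alongside each English phrase in its training pairs.
# No dictionary, no grammar, no meaning -- just frequency.
training_pairs = [
    ("good morning", "bonjour"),
    ("good morning", "bonjour"),
    ("good morning", "bon matin"),
    ("thank you", "merci"),
    ("thank you", "merci"),
]

counts = defaultdict(Counter)
for english, french in training_pairs:
    counts[english][french] += 1

def translate(phrase):
    # Return the statistically most common pairing seen in the data.
    if phrase not in counts:
        return "<unseen phrase>"
    return counts[phrase].most_common(1)[0][0]

print(translate("good morning"))  # -> bonjour (seen twice vs once)
print(translate("hello"))         # -> <unseen phrase>
```

Notice that the program has no way to say *why* "bonjour" is right, or to cope with anything it hasn't seen — which is exactly the contrast with explanation that this post goes on to draw.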
Jade puts this into context by asking, how do you ‘know’ something is true as opposed to just believing it? Plato provided a definition: knowledge is true belief with an account or rational explanation. Jade called this ‘Justified True Belief’ and provides examples. But then, in 1963, Edmund Gettier demonstrated how one could hold a justified belief that happens to be true, yet still doesn’t count as knowledge, because the assumed causal connection was wrong. Jade gives a few examples, but one was of someone mistaking a cloud of wasps for smoke and assuming there was a fire. In fact, there was a fire, but they didn’t see it and it had no connection with the cloud of wasps. So someone else, Alvin Goldman, suggested that a way out of a ‘Gettier problem’ was to look for a causal connection between the fact and the belief before claiming knowledge (watch the video).
I confess I’d never heard these arguments nor of the people involved, but I felt there was another perspective. And that perspective is an ‘explanation’, which is part of Plato’s definition. We know when we know something (to rephrase her original question) when we can explain it. Of course, that doesn’t mean that we do know it, but it’s what separates us from AI. Even when we get something wrong, we still feel the need to explain it, even if it’s only to ourselves.
If one looks at her original example, most of us can explain what a specific word means, and if we can’t, we look it up in a dictionary; the AI translator can’t do that. Likewise, with the example of predicting planetary orbits, we can give an explanation involving Newton’s gravitational constant (G) and the inverse square law.
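The ‘explanation’ side of that contrast can be made concrete. A minimal sketch, using the standard values of G and the Sun’s mass: Newton’s inverse square law yields the Sun’s gravitational pull on a planet from first principles, rather than from statistical patterns in historical position data.

```python
# Newton's inverse square law: a = G * M / r^2
# Standard physical values (CODATA / IAU figures, rounded):
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the Sun, kg

def solar_acceleration(r_metres):
    """Acceleration toward the Sun at distance r, via a = G*M / r^2."""
    return G * M_SUN / r_metres**2

# Earth's mean orbital distance is about 1.496e11 m (one astronomical unit).
a_earth = solar_acceleration(1.496e11)
print(f"{a_earth:.6f} m/s^2")  # roughly 0.0059 m/s^2
```

The formula itself is the ‘account’ Plato asked for: it says *why* the planet accelerates as it does, and it generalises to any distance, including ones never observed — something a purely data-driven predictor cannot claim.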
Mathematical proofs provide an explanation for mathematical ‘truths’, which is why Gödel’s Incompleteness Theorem upset the apple cart, so to speak. You can actually have mathematical truths without proofs, but, of course, you can’t be sure they’re true. Roger Penrose argues that Gödel’s famous theorem is one of the things that distinguishes human intelligence from machine intelligence (read his Preface to The Emperor’s New Mind), but that is too much of a detour for this post.
The criterion that is used, both scientifically and legally, is evidence. Having some experience with legal contractual disputes, I know that documented evidence always wins in a court of law over undocumented evidence. That doesn’t necessarily mean the person with the most documentation was actually right (nevertheless, I’ve always accepted the umpire’s decision, knowing I provided all the evidence at my disposal).
The point I’d make is that humans will always provide an explanation, even if they have it wrong, so an explanation doesn’t necessarily make knowledge ‘true’, but it’s something that AI inherently can’t do. The best examples are scientific theories, which are effectively ‘explanations’ and yet are never complete, in the same way that mathematics is never complete.
While on the topic of ‘truths’, one of my pet peeves is people who conflate moral and religious ‘truths’ with scientific and mathematical ‘truths’ (often on the above-mentioned basis that it’s impossible to know them all). But there is another aspect, and that is that so-called moral truths are dependent on social norms, as I’ve described elsewhere, and they’re also dependent on context, like whether one is living in peace or war.
Back to the questions heading this post, I’m not sure I’ve answered them. I’ve long argued that only mathematical truths are truly universal, and to the extent that such ‘truths’ determine the ‘rules’ of the Universe (for want of a better term), they also ultimately determine the limits of what we can know.
Philosophy, at its best, challenges our long-held views, such that we examine them more deeply than we might otherwise.
Paul P. Mealing
Wednesday, 10 August 2022
What is knowledge? And is it true?
4 comments:
Hi Paul
I always enjoy your blog posts. I often consider commenting, but today the planets aligned and I actually had time to dash off this message. I am glad to see bright young people with scientific concerns taking an interest in epistemology. My undergraduate philosophy education was somewhat idiosyncratic. My two main influences were a continental philosopher with primary interests in the work of Heidegger, and a specialist in Chinese philosophy mainly interested in Confucius. Unsympathetically interpreted, I spent the rest of my life (so far) recovering from these early philosophical influences!
In reality, that background caused no major damage, and it broadened my philosophical horizons beyond the analytical approach dominant in US philosophy departments at the time. When I got serious about the possibility of doing a PhD, I did do a directed reading on analytic philosophy. I only learned of Gettier's short paper after I finished grad school. By the time I arrived there, given my scientific background, I tended to be more attracted to philosophy of science than to traditional epistemology. I was much more interested in knowledge claims made by special relativity than in what makes my belief that there is a sheep in the distance knowledge.
By the time I started teaching I did have a little story about JTB that I could tell when the topic came up in my lectures. Jade does a good job of laying it out. Her channel is impressive. I was tempted by many of the videos there, but I only took the time to watch the one on chaos theory. That was the topic of my PhD dissertation. I liked the little simulation segment that demonstrates how sensitivity to initial conditions can affect one's life. I often tell stories about my own life where apparently small events have had a big impact. I will avoid the temptation to relate one now.
What I really wanted to comment on was your point about "moral and religious truths". I do not have a grand theory of morality (or religion, for that matter), and I agree with your points that they are dependent on social norms and sensitive to context. I am also somewhat sceptical that there are absolute moral truths at all.
That said, I think you left out an important point in your brief assessment. When I teach ethics I usually start with the distinction between fact and value. The way I make the distinction is often attributed to David Hume. When he presents this idea in the Treatise, it is almost a side comment, an observation about critical reasoning. He says something to the effect that authors often start out making factual claims in the premises of their arguments, and move too swiftly to normative conclusions. He thinks this is a fallacious way of reasoning.
The way I put it is that factual statements are claims about the way the world is. By contrast, value statements are about the way the world ought to be. GE Moore formalises Hume's observation with his "naturalistic fallacy". This is most easily stated as a prohibition against deriving an *ought* from an *is*. In other words, an argument that has a normative conclusion must have at least one normative premise. Hume also suggests that when one evaluates such a fallacious argument it is often easy to deduce the suppressed value assumption in the argument. Whilst I agree that social norms and context sensitivity are important to (at least most) moral and religious discussions, I think the fact/value distinction is fundamental.
That is enough for now. Keep up the good work!
High Dr Bill,
I haven't heard from you for a while. I have to admit I've always struggled with the 'ought from is' concept. I did read Hume, but a long time ago, and it was to do with empiricism mostly. I'm not even sure what 'normative' means in this context. Can you enlighten me? And can you expand on the 'fact/value distinction'?
A lot of philosophers argue that there's no such thing as objective morality and it is difficult to disagree. I think there is 'morality in theory' and 'morality in practice'. I wrote a short essay on this once, in response to a 'Question of the Month' in Philosophy Now, which was published.
Regards, Paul.
Freudian slip.
Hi Bill.
Hi Bill,
If you're following this thread I've posted a follow-up, which just looks at science (physics really). It's partly based upon a discussion between Bryan Magee and Hilary Putnam (from Harvard) in 1977. Putnam does mention the value/fact issue, but I don't reference it, because that's not my focus.
Regards, Paul.