There are two commonly held myths associated with AI (Artificial Intelligence) that are being propagated through popular science, whether intentionally or not: that computers will inevitably become sentient, and that brains work similarly to computers.
The first of these I dealt with indirectly in a post last month, when I reviewed Colin McGinn’s book, The Mysterious Flame. McGinn points out that there is no correlation between intelligence and sentience, as sentience evolved early. There is a strongly held belief amongst many scientists and philosophers that AI will eventually overtake human intelligence and at some point become sentient. Even if the first part is true (depending on how one defines intelligence), the second has no evidential basis. If computers were going to become sentient simply because they ‘think’, they would already be sentient. (Computers don’t really think, by the way; it’s just a metaphor.) The important point, as McGinn notes, is that there is no evidence in the biological world that sentience increases with intelligence, so there is no reason to believe it will occur in computers if it hasn’t already.
This is not to say that AI or von Neumann machines could not be successful in Darwinian terms, but that still wouldn’t make them sentient. After all, plants are hugely successful in Darwinian terms but are not sentient.
In the last issue of Philosophy Now (Issue 87, November/December 2011), the theme was ‘Brains & Minds’ and it’s probably the best one I’ve read since I subscribed. Namit Arora (based in San Francisco and creator of Shunya) wrote a very good article, titled The Minds of Machines, where he tackles this issue by referencing Heidegger, though I won’t dwell on that aspect of it. Most relevant to this topic, he quotes Hubert L. Dreyfus and Stuart E. Dreyfus from Making a Mind vs Modelling the Brain:
“If [a simulated neural network] is to learn from its own ‘experiences’ to make associations that are human-like rather than be taught to make associations which have been specified by its trainer, it must also share our sense of appropriateness of outputs, and this means it must share our needs, and emotions, and have a human-like body with the same physical movements, abilities and possible injuries.”
In other words, we would need to build a comprehensive model of a human being, complete with its emotional, cognitive and sensory abilities. In various ways this is what we attempt to do: we anthropomorphise the machine’s capabilities and then we interpret them anthropomorphically. Nowhere is this more apparent than with computer-generated art.
Last week’s issue of New Scientist (14 January 2012) discusses in detail the success that computers have had with ‘creating’ art; in particular, The Painting Fool, the brainchild of computer scientist Simon Colton.
If we deliberately build computers and software systems to mimic human activities and abilities, we should not be surprised that they sometimes pass the Turing test with flying colours. According to Catherine de Lange, who wrote the article in New Scientist, the artistic Turing test has well and truly been passed both in visual art and music.
One must remember that visual art began with us copying nature (refer to my post on The dawn of the human mind, Oct. 2011), so now we have computers copying us, quite successfully. The Painting Fool does create its own art, apparently, but it takes its ‘inspiration’ (i.e. cues) from social networks, like Facebook for example.
The most significant point of all this is that computers can create art but they are emotionally blind to its consequences. No one mentioned this point in the New Scientist article.
Below is a letter I wrote to New Scientist. It’s rather succinct, as they have a 250-word limit.
As computers become better at simulating human cognition there is an increasing tendency to believe that brains and computers work in the same way, but they don’t.
Art is one of the things that separates us from other species because we can project our imaginations externally, be it visually, musically or in stories. Imagination is the ability to think about something that’s not in the here and now – what philosophers call intentionality. It can be in the past or the future, in another place, or it can be completely fictional. Computers can’t do this. Computers have something akin to semantic memory but nothing similar to episodic memory, which requires imagination.
Art triggers a response from us because it has an emotional content that we respond to. With computer art we respond to an emotional content that the computer never feels. So any artistic merit is what we put into it, because we anthropomorphise the computer’s creation.
Artistic creation does occur largely in our subconscious, but there is one state where we all experience this directly and that is in dreams. Computers don’t dream so the analogy breaks down.
So computers produce art with no emotional input and we appraise it based on our own emotional response. Computers may be able to create art but they can’t appreciate it, which is why it feels so wrong.
Postscript: People forget that it requires imagination to appreciate art as well as to create it. Computers can do one without the other, which is anomalous, even absurd.