I’ve long had a problem with the term meme, partly because I believe it is over-used and over-interpreted, though I admit it is a handy metaphor. When I first heard the term meme, it was used in the context of cultural or social norms, so I wondered why we don’t simply say ‘social norms’, as we do in social psychology. Yes, they get passed on from generation to generation, they ‘mutate’, and one could even say they compete, but the analogy with genes has a limit, and the limit is that there is no analogue of phenotype and genotype for memes as there is for genes (I made the same point in a post on Human Nature in Nov.07). And Dawkins makes the exact same point himself in his discussion of memes in The God Delusion. Dawkins talks about ‘memeplexes’ arising from a ‘meme-pool’, and in terms of cultural evolution one can see merit in the meme called meme, but I believe it ignores other relevant factors, as I discuss below.
Earlier this year I referenced essays in Hofstadter and Dennett’s The Mind’s I (Subjectivity, Jun.09; and Taoism, May 09). One of the essays included is Dawkins’ Selfish Genes and Selfish Memes. In another New Scientist issue (18 July 2009), Fern Elsdon-Baker, head of the British Council’s Darwin Now project, is critical of what she calls the Dawkins dogma, saying: ‘Metaphors that have done wonders for people’s understanding of evolution are now getting in the way’; and ‘Dawkins’ contribution is indisputable, but his narrow view of evolution is being called into question.’ Effectively, Elsdon-Baker is saying that the ‘selfish gene’ metaphor has limitations as well, which I won’t discuss here, but I certainly think the ‘selfish meme’ metaphor can be taken too literally. People tend to forget that neither genes nor memes have any ‘will’ (Dawkins would be the first to point this out), yet the qualifier ‘selfish’ implies just that. However, it’s a metaphor, remember, so there’s no contradiction. I know everyone knows this, but in the case of memes I think it’s necessary to state it explicitly, especially when Blackmore (and Dawkins) compare memes to biological parasites.
Getting back to Blackmore’s article: the first replicators are biological, being genes; the second replicators are human brains, because we replicate knowledge; and the third replicators will be computers, because they will eventually replicate knowledge or information independently of us. This is an intriguing prediction, and there’s little doubt that it will come to pass in some form or another. Machines will pass on ‘code’ much as we do, since DNA is effectively ‘code’, albeit written in nucleotide bases rather than binary arithmetic. But I think Blackmore means something else: machines will share knowledge and change it independently of us, which is a subtly different interpretation. In effect, she’s saying that computers will develop their own ‘culture’ independently of ours, in the same way that we have created culture independently of our biological genes. (I will return to this point later.)
And this is where the concept of the meme originally came from: the idea that cultural evolution, specifically in the human species, overtook biological evolution. I first came across this idea, long before I’d heard of memes, when I read Arthur Koestler’s The Ghost in the Machine. Koestler gave his own analogy, which I’ve never forgotten. He made the point that the human brain really hasn’t changed much since Homo sapiens first started walking the planet, but what we have managed to do with it has changed irrevocably. The analogy he gave was to imagine someone, say a usurer, living in medieval times, who used an abacus to work out their accounts; then one morning they woke up to find it had been replaced with an IBM mainframe computer. That is what the human brain was like when it first evolved – we really had no idea what it was capable of. But culturally we evolved independently of biological evolution, and from this observation Dawkins coined the term ‘meme’ as an analogy to biological genes and, in his own words, the unit of selection.
But reading Blackmore: ‘In all my previous work in memetics I have used the term “meme” to apply to any information that is copied between people…’. So, by this definition, the word meme covers everything the human mind has ever invented, including stories, language, musical tunes, mathematics, people’s names, you name it. When one idea is used to encompass everything, the term tends to lose its utility. I think there’s another way of looking at this, and it’s to do with examining the root cause of our accelerated accumulation of knowledge.
In response to a comment on a recent post (Storytelling, last month) I pointed out how our ability to create script effectively allows us to extend our long-term memory, even across generations. Without script, as we observe in many indigenous cultures, dance and music allow knowledge to be transmitted orally across generations. But it is this fundamental ability, amplified by the written word, that has really driven the evolution of culture, whether in scientific theories, mathematical ideas, stories, music, even history. Are all these things memes? By Blackmore’s definition (above) the answer is yes, but I think that’s stretching the analogy, if for no other reason than that many of these creations are designed, not selected. But leaving that aside, the ability to record knowledge for future generations has arguably been the real accelerant in the evolution of culture in all its manifestations. We can literally extend our memories across generations – something no other species can do. So where does this leave memes? As I alluded to above, not everything generated by the human mind is memetic, in my opinion, but I’ll address that at the end.
Going back to my original understanding of meme as a cultural or social norm, I can see its metaphorical value: literally, memes are social norms, but they are better known for their metaphorical meaning as analogues of genes. If, on the other hand, memes are all knowledge – in other words, everything that is embedded in human language – then the metaphor has been stretched too far to be meaningful, in my view. A metaphor is an analogy without the word ‘like’, and analogies are the most common means of explaining a new concept or idea to someone else. It is always possible for people to take a metaphor too literally, and I believe memes have suffered that fate.
As for the ‘third replicator’, it’s an intriguing and provocative idea. Will machines create a culture independent of human culture that will evolutionarily outstrip ours? It’s the stuff of science fiction, which, of course, doesn’t make it nonsense. I think there is a real possibility of machines evolving, and I’ve explored it in my own ventures into sci-fi, but how independent they will become of their creators (us) remains to be seen. Certainly, I see the relationship between us and our technology only becoming more symbiotic and interdependent, which means that true independence may never actually occur.
However, the idea that machine-generated ideas will take on a life of their own is not entirely new. What Blackmore is suggesting is that such ideas won’t necessarily need to interact with humanity for selection and propagation. As she points out, we already have viruses and search engines that effectively do this, but so far it’s their interaction with humanity that ultimately determines their value and longevity. One can imagine, however, a virus remaining dormant and then becoming active later, like a recessive gene – so there, the metaphor has just been used. Because computers use code, analogous to DNA, such comparisons are unavoidable, but this is not what Blackmore is referring to.
Picture this purely SF scenario: we populate a planet with drones to ‘seed’ it for future life, so that for generations they have no human contact. Could they develop a culture? This is Asimov territory, and at this stage of technological development the answer depends on the reader’s, or author’s, imagination.
One of Blackmore’s principal contentions is that memes have almost been our undoing as a species in the past, but we have managed to survive all the destructive ones so far. What she means is that some ideas have been so successful, yet so destructive, that they could have killed off the entire human race (any ideology used as a premise for global warfare would have sufficed). Her concern now is that the third replicator (machines) could have the same effect. In other words, AI could create a runaway idea that could ultimately be our undoing. Again, this has been explored in SF, including stories I’ve written myself. But, even in my stories, the ‘source’ of the ‘idea’ was originally human.
However, as far as human constructs go, we’re not out of the woods by a long shot, with the most likely contender being infinite economic growth. I suspect Blackmore would call it a meme, but I would call it a paradigm. The problem is that calling something a meme implies it’s successful because people select it, whereas I think paradigms are successful simply because they work at whatever they predict – like scientific theories and mathematical formulae, all of which are inherently un-memetic. In other words, they are not successful because we select them; we select them because they are successful, which turns the meme idea on its head.
But whatever you want to call it, economic growth is so overwhelmingly successful – socially, productively, politically, on both a micro and a macro scale – that it is absolutely guaranteed to create a catastrophic failure if we continue to assume the Earth has infinite resources. But that’s a subject for another post. Of course, I hope I’m totally wrong, but I think that’s called denial. Which raises the question: is denial a meme?
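To make the arithmetic behind that claim concrete, here is a minimal sketch in Python of compound growth running up against a finite stock. The 3% growth rate and the 100-year resource horizon are purely hypothetical numbers chosen for illustration, not estimates of anything real.

```python
import math

# Hypothetical figures, chosen only to illustrate the shape of the problem
growth_rate = 0.03                      # assumed annual growth in resource consumption
years_of_stock_at_current_rate = 100    # assumed: the stock lasts 100 years at today's rate

# Doubling time under compound growth (the exact form of the 'rule of 70')
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Consumption doubles roughly every {doubling_time:.0f} years")

# Cumulative consumption after t years of compound growth is a geometric series,
# ((1 + g)**t - 1) / g. Setting that equal to the finite stock and solving for t:
t_exhaust = math.log(1 + growth_rate * years_of_stock_at_current_rate) / math.log(1 + growth_rate)
print(f"A stock that would last {years_of_stock_at_current_rate} years at today's rate "
      f"is exhausted in about {t_exhaust:.0f} years at {growth_rate:.0%} annual growth")
```

Whatever numbers you plug in, the shape of the result is the same: consumption that compounds steadily doubles every few decades, so a finite stock runs out far sooner than a constant-rate projection would suggest.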