This is not a singularity you find in black holes or at the origin of the universe – this is a metaphorical singularity entailing the breakthrough of artificial intelligence (AI) to transcend humanity. And prophecy is an apt term, because there are people who believe in this with near-religious conviction.
Wilson da Silva is the editor of COSMOS, an excellent Australian science magazine I’ve subscribed to since its inception. The current April/May 2009 edition has essay-length contributions on this topic from robotics expert Rodney Brooks, economist Robin Hanson, and science journalist John Horgan, along with sound bites from people like Douglas Hofstadter and Steven Pinker (amongst others).
Where to start? I’d like to start with Rodney Brooks, an ex-pat Aussie, who is now Professor of Robotics at Massachusetts Institute of Technology. He’s also been Director of the same institute’s Computer Science and Artificial Intelligence Lab, and founder of Heartland Robotics Inc. and co-founder of iRobot Corp. Brooks brings a healthy tone of reality to this discussion after da Silva’s deliberately provocative introduction of the ‘Singularity’ as ‘Rapture’. (In a footnote, da Silva reassures us that he ‘does not expect to still be around to upload his consciousness to a computer.’)
So maybe I’ll backtrack slightly, and mention Raymond Kurzweil (also the referenced starting point for Brooks) who does want to upload (or download?) his consciousness into a computer before he dies, apparently (refer Addendum 2 below). It reminds me of a television discussion I saw in the 60s or 70s (in the days of black & white TV) of someone seriously considering cryogenically freezing their brain for future resurrection, when technology would catch up with their ambition for immortality. And let’s be honest: that’s what this is all about, at least as far as Kurzweil and his fellow proponents are concerned.
Steven Pinker makes the point that many of the science fiction fantasies of his childhood, like ‘jet-pack commuting’ or ‘underwater cities’, never came to fruition, and he would put this in the same bag. To quote: ‘Sheer processing power is not a pixie dust that magically solves all your problems.’
Back to Rodney Brooks, who is one of the best qualified to comment on this, and provides a healthy dose of scepticism, as well as perspective. For a start, Brooks points out how robotics hasn’t delivered on its early promises, including his own ambitions. Brooks expounds that current computer technology still can’t deliver the following childlike abilities: ‘object recognition of a 2 year-old; language capabilities of a 4 year-old; manual dexterity of a 6 year-old; and the social understanding of an 8 year-old.’ To quote: ‘[basic machine capability] may take 10 years or it may take 100. I really don’t know.’
Brooks states at the outset that he sees biological organisms, and therefore the brain, as a ‘machine’. But the analogy for interpretation has changed over time, depending on the technology of the age. During the 17th century (Descartes’ time), the model was hydrodynamics, and in the 20th century it went from a telephone exchange, to a logic circuit, to a digital computer, and even to the world wide web (Brooks’ exposition in brief).
Brooks believes the singularity will be an evolutionary process, not a ‘big bang’ event. He sees the singularity as the gradual evolution of machine intelligence until it becomes virtually identical to our own, including consciousness. Hofstadter expresses a similar belief, but he ‘…doubt[s] it will happen in the next couple of centuries.’ I have to admit that this is where I differ, as I don’t see machine intelligence becoming sentient, even though my view is in the minority. I provide an argument in an earlier post (The Ghost in the Machine, April 08) where I discuss Henry Markram’s ‘Blue Brain’ project, with a truckload of scepticism.
Robin Hanson is the author of The Economics of the Singularity, and is Associate Professor of Economics at George Mason University.
For a start, all these disciples of the extreme version of the Singularity seem to forget how the other half live, or, more significantly, simply ignore the fact that the majority of the world’s population doesn’t live in a Western society. In fact, for the entire world to enjoy ‘Our’ standard of living would require 4 planet earths (ref: E.O. Wilson, amongst others). But I won’t go there, not on this post. Except to point out that many of the world’s people struggle to get a healthy water supply, and that is going to get worse before it gets better; just to provide a modicum of perspective for all the ‘rapture geeks’.
I’ve left John Horgan’s contribution to last, just as COSMOS does, because he provides the best realism check you could ask for. I’ve read all of Horgan’s books, but The End of Science is his best read, even though, once again, I disagree with his overall thesis. It’s a treasure because he interviews some of the best minds of the latter 20th Century, some of whom are no longer with us.
I was surprised and impressed by the depth of knowledge Horgan reveals on this subject, in particular the limitations of our understanding of neurobiology and the inherent problems in creating direct neuron-machine interfaces. One of the most pertinent aspects he discusses is the sheer plasticity of the brain in its functionality. Just to give you a snippet: ‘…synaptic connections constantly form, strengthen, weaken and dissolve. Old neurons die and – evidence is overturning decades of dogma – new ones are born.’
There is a sense that the brain makes up neural codes as it goes along - my interpretation, not Horgan's - but he cites Steven Rose, neurobiologist at Britain's Open University, based in Milton Keynes: 'To interpret the neural activity corresponding to any moment ...scientists would need "access to [someone's] entire neural and hormonal life history" as well as to all [their] corresponding experiences.'
It’s really worth reading Horgan’s entire essay – I can’t do it justice in this space – he covers the whole subject and puts it into a perspective the ‘rapture geeks’ have yet to realise.
I happened to be re-reading John Searle’s Mind when I received this magazine, and I have to say that Searle’s book is still the best I’ve read on this subject. He calls it ‘an introduction’, even on the cover, and reiterates that point more than once during his detailed exposition. In effect, he’s trying to tell us how much we still don’t know.
I haven’t read Dennett’s Consciousness Explained, but I probably should. In the same issue of COSMOS, Paul Davies references Dennett’s book, along with Hofstadter’s Godel, Escher, Bach, as 2 of the 4 most influential books he’s read, and that’s high praise indeed. Davies says that while Dennett’s book ‘may not live up to its claim… it definitely set the agenda for how we should think about thinking.’ But he also adds, in parenthesis, that ‘some people say Dennett explained consciousness away’. I think Searle would agree.
Dennett is a formidable philosopher by anyone’s standards, and I’m certainly not qualified, academically or otherwise, to challenge him, but I obviously have a different philosophical perspective on consciousness to him. In a very insightful interview over 2 issues of Philosophy Now, Dennett elaborated on his influences, as well as his ideas. He made the statement that ‘a thermostat thinks’, which is a well known conjecture originally attributed to David Chalmers (according to Searle): it thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.
Searle attacks this proposition thus: ‘Consciousness is not spread out like jam on a piece of bread… If the thermostat is conscious, how about parts of the thermostat? Is there a separate consciousness to each screw? Each molecule? If so, how does their consciousness relate to the consciousness of the whole thermostat?’
The corollary to this interpretation, and Dennett’s, is that consciousness is just a concept with no connection to anything real. If consciousness is an emergent property, an idea that Searle seems to avoid, then it may well be ‘spread out like jam on a piece of bread’.
To be fair to Searle (I don't want to misrepresent him when I know he'll never read this) he does see consciousness being on a different level to neuron activity (like Hofstadter) and he acknowledges that this is one of the factors that makes consciousness so misunderstood by both philosophers and others.
But I’m getting off the track. The most important contribution Searle makes, that is relevant to this whole discussion, is that consciousness has a ‘first person ontology’ yet we attempt to understand it solely as a ‘third person ontology’. Even the Dalai Lama makes this point, albeit in more prosaic language, in his book on science and religion, The Universe in a Single Atom. Personally, I find it hard to imagine that AI will ever make the transition from third person to first person ontology. But I may be wrong. To quote my own favourite saying: 'Only future generations can tell us how ignorant the current generation is'.
There are 2 aspects to the Singularity prophecy: we will become more like machines, and they will become more like us. This is something I’ve explored in my own fiction, and I will probably continue to do so in the future. But I think that machine intelligence will complement human intelligence rather than replace it. As we are already witnessing, computers are brilliant at the things we do badly and vice versa. I do see a convergence, but I also see no reason why the complementary nature of machine intelligence won’t continue, and indeed improve: AI will get better at what it does best, and we will do the same. There is no reason, based on developments to date, to assume that we will become indistinguishable, Turing tests notwithstanding. In other words, I think there will always remain attributes uniquely human, as AI continues to dazzle us with abilities that are already beyond us.
P.S. I review Douglas Hofstadter's brilliant book, Godel, Escher, Bach: an Eternal Golden Braid, in a post I published in Feb 09: Artificial Intelligence & Consciousness.
Addendum: I'm led to believe that at least 2 of the essays cited above were originally published in IEEE Spectrum Magazine prior to COSMOS (ref: the authors themselves). Addendum 2: I watched the VBS.TV Video on Raymond Kurzweil, provided by a contributor below (Rory), and it seems his quest for longevity is via 'nanobots' rather than by 'computer-downloading his mind' as I implied above.
36 comments:
Actually, this blog post is the first I've heard that I have an article in the April COSMOS! It looks like a copy of my IEEE Spectrum article from last summer. I've emailed them to ask WTF.
Hi Robin,
I have to say I'm surprised if COSMOS published without your knowledge or consent.
Overall, it's a very good article. Provocative and stimulating.
Regards, Paul.
That (many) humans have a psychological need to believe that there's some ineffable, numinous quality to human sentience and intelligence that's inherently not susceptible of replication in any sort of machine but one instantiated in neuronal wetware, and born of woman (or perhaps of a test tube)... does not, of course, logically establish that such a belief is probably erroneous, any more than my need to believe that I can walk across the room without flying centrifugally off the surface of the earth refutes the proposition that gravity exists. In contrast with the latter example, though (since nobody seriously fears an attack on his or her belief in the existence of gravity), it does at least lend Bayesian probabilistic substance to the proposition that *more* desperately fierce advocacy for the irreproducibility of what's supposedly quintessentially human exists than is warranted by the evidence, or by dispassionate consideration of the facts.
I think we're all (you and I, and everyone else seriously interested in the question) boringly familiar with the concept of a "virtual machine." The interface presented to my inspection for use in the composition of this essay consists of many, many such machines, each layered upon one more primitive, all the way down to the actual (not the nano-coded) architecture of the AMD cpu. It's no trick whatever to get a virtual MAC running on a PC or vice versa, or to go back and emulate a DEC 20 or the ENIAC, for that example, on computers much more primitive than the microprocessor in your wristwatch. If you're concerned about speed and efficiency and capaciousness of memory, though, you obviously want to emulate the *less* sophisticated (or computationally powerful) machine on one with more teraflops, but the principle holds pretty much across the board that something more powerful and sophisticated can emulate something that's less so without breaking a cybernetic sweat (and perhaps do a thousand other things concurrently).
I agree with you that machines can do some things (ever more things) brilliantly that humans can do, sometimes, scarcely at all, and that their power will continue to grow. Why is it difficult to imagine that an (eventually) arbitrarily powerful machine whose constituent elements did not happen to involve DNA would be able to emulate the relatively more primitive architecture of the human brain (at least, for purposes of functional equivalency, though I would not necessarily confine the assertion by that limitation)? The human brain is a finitistic machine consisting of a finite number of elements, and leaving aside quantum arguments (which I personally tend partly to credit and partly to think are imported as the impossible-to-circumvent magical pixie dust impediment to the replication of human intelligence) -- because there's *no* reason to think we can't incorporate quantum elements into machine intelligence; we've already *built* quantum computers, for heaven's sake... leaving that aside, if I take all those finite elements and reproduce them (and their functional behavior) using something other than dendrites, axons and the happy spice of serotonin and neurepinephrine, then why isn't that human (or *superhuman*) intelligence? Is it that I'll never be able to do it because of the pixie dust magic argument, or is it that it'll just take too long, because the code's too hard to break, and we don't (yet) have the technology to deconstruct a brain, particle by particle, at whatever highest level might be relevant for replicatory purposes?
Now, I'm not saying categorically that we *can* ultimately build something indistinguishable from a sentient human, but I am saying that I have yet to see an irrefragably convincing argument why we *can't*, and there does seem to me to be a disproportionate amount of intellectual and emotional energy (and on the part of spectacularly bright people) invested in the assertion of an eternal, cosmically interdictive negative, based only on what can currently be known or imagined. If I were a philosopher (or a psychologist), I might wonder why that should be the case.
oops...
that's "norepinephrine"
a machine writing my essay would not have experienced the neuronal glitch! :)
Hi PK,
I was beginning to think you would never respond. As I say somewhere in the midst of my discourse: ‘my view is in the minority’. I don’t know where to start. Believe it or not, writing this post has convinced me more than ever that machines won’t achieve sentience. So there: I’m really sticking my neck out. And yes, I may still be proven wrong; certainly, most people in this field think so. Sometimes, when I write an essay on a topic it somehow clarifies my thoughts in a way that I don’t expect, and this seems to be such an occasion. It’s a worry when you become convinced by your own rhetoric.
Perhaps it’s the John Searle influence, even though I don’t agree with him completely either. We have finally invented something that vaguely simulates what we do, which is ‘to think’, but we really don’t understand what ‘thinking’ entails. Computers are simply the latest ‘model’ for attempting to understand how the brain works, but it doesn’t mean that is how brains work. For a start, and so many people overlook this, brains don’t run on software, and it’s software that makes Turing machines Turing machines. Now you may say we will discover the software eventually, but I’m far from convinced. I don’t think the brain runs on software at all, but I admit I don’t really know. But if it does, then it’s unlike any software we know, and certainly nothing like DNA, which I would argue really is software. Maybe one day we will invent a computer that rewires itself in response to stimuli just like human (and other animal) brains do, but what software will drive it? The human brain doesn’t reduce to a whole string of ones and zeros, which makes it fundamentally different to every Turing Machine so far invented, including all the examples you give in your first paragraph (correct me if I'm wrong).
Quantum computers will make computation even more spectacular, stratospheric compared to what we have now, but will they perform ‘basic human capabilities’ any better, as defined by Rodney Brooks? Maybe. I think that machines will get ever better and more compatible with humans – we will design them that way – and we will even design them to ‘learn’ skills from their mistakes, but they won’t be human is my prediction. They won’t even have imagination, or intentionality as philosophers seem to call it (imagination is a dirty word in some circles, I’ve learnt). They will respond to emotions because we will programme them that way, and the simulations will get better and better, but they still won’t have ‘thoughts’ or a subjective ontology, and that’s the fundamental difference. We don’t even know how we do it, so how can we assume that machines, using completely different means, will do it?
And this is my main point: consciousness is the least understood phenomenon in the universe, and yet we assume that if we build a computer powerful enough it will acquire consciousness automatically. This is the fundamental assumption that I simply do not accept, philosophically or epistemically.
This is one of those areas where future knowledge will look back and wonder at our ignorance (yours or mine, but probably both).
Regards, Paul.
Sorry, should have referenced your second para, not first.
Regards, Paul.
Turns out IEEE Spectrum editors are also unaware of the reprinting!
I seem to recall already having linked this stuff somewhere or other, but it bears repeating, I guess:
http://www.ft.com/cms/s/0/f2b97d9a-1f96-11de-a7a5-00144feabdc0.html
http://www.technologyreview.com/computing/22339/?a=f
I'm not quite sure that we won't ever get machine consciousness or even sentience, but will it be like ours? A Wittgenstein quote about language comes to mind: If a lion could talk, we could not understand him.
Here is another question, though: what if we discover that building a human-like consciousness robotically is prohibitively difficult but synthesizing one out of proteins is relatively easy even without using human DNA? It's sort of an iffy scenario to say the least, but I want to get at this (to me) weird dichotomy that seems to exist about the nature of consciousness - namely, that unless we can build consciousness with a computer it isn't physical, even as an emergent property, and therefore our brains are not at all like hardware or software. At first that seemed right to me, too, but the more I think about it the less I understand why that ought to be true.
Hi Larry,
Thanks for your contribution, and the links. I looked them up. The first one sounds like a complex means of evolutionary design (artificial natural selection, if you will excuse the oxymoron). The computer cum robot tries different things until it comes up with something that works - I expect it's an iterative process - there is not enough detail to tell, so I may have it wrong. New Scientist reported on something similar where a group of programmers were building computers (or programming them) to come up with mathematical ideas, and their first success was one that led to the Goldbach conjecture. They were getting the computer to follow 'interesting leads' until it came up with something that wasn't in the original programme. In both cases the programme 'creates' something new by 'learning'. I certainly think that this is the way forward for AI, called 'bottom-up AI' by Roger Penrose. Obviously, much closer to what we (and other animals) do.
The second link is also interesting and another way forward: parallel processing. It's curious that Henry Markram is sceptical of it, when he's attempting something similar. I've expressed my own scepticism of Markram's project elsewhere. I think Markram will build a very good 'model' of a brain, but I'll be genuinely surprised if he produces consciousness. If I'm proven wrong then my whole thesis collapses, I admit.
I think your last point is a good one. This is one area that is wide open to speculation, where we feel we are on the verge of new discoveries. It's why I explore it in fiction, because that's where one can imagine and speculate without consequences, yet still stimulate and provoke thought.
If you've moved on from my position then perhaps you have an insight I don't have. But the more I read on this, the more I realise how much we don't know, and I guess that's the fuel for my scepticism.
Regards, Paul.
Hi Paul,
Sorry if I was a bit dilatory noticing that you'd posted again on the subject dearest to my emergent property :), though I can't have been by *too* much; I do tend to check out your site at least on a daily basis.
I hope I haven't engendered the misapprehension that I think that birthing consciousness is *only* a matter of assembling sufficiently mighty and daunting computational resources. I don't think we could duplicate the Brooklyn Bridge by dumping a sufficiently vast number of girders into the East River and hoping they'd somehow "sort themselves out." And the largest assemblage of associatively-interlinked data in the world, the internet, has yet to leap up from its primordial electronic bog and yell, "howdy!" or even say "goo, goo" to any of its users, notwithstanding the piling on of ever more petabytes of information. No, obviously, we'd need, not just the resources, but also sufficient understanding of what I, too, would acknowledge is a phenomenally ill-understood emergent property to engineer an artificial sentient "human," unless we did the thing by brute force (deploying an as-yet uninvented technology to inventory the whole structure of someone's brain down to the molecular or the atomic or the subatomic level, and then pressing the "copy" button on our handy Star Trek-style replicator). Nor do I think it'll be accomplished *just* by "algorithmizing" in exactly the right way, though I wouldn't *preclude* that solution. I do think we can probably get an input/output-isomorphic result that would pass the Turing Test (at an immeasurably more sophisticated level than did Eliza or Parry, decades ago) in that manner, but I would share your reservations about "subjective ontology." Not being able to get into anybody else's head, let alone anything else's circuitry, we don't even know for sure that there's another conscious being in the universe (and it's odd to write that oxymoronic sentence using a first-person plural pronoun), so at the moment, for all *one of us* can prove, solipsism is the only empirically totally defensible position.
I think we differ less than it might seem in our views on this matter. What bothers you is what you perceive to be the majoritarian assumption among the "artificial intelligentsia" (I'm not at all sure it is) that we *can* do it. Eventually. What bothers me is what I perceive to be a desire on the part of many parties to the debate to make it a default assumption that we *can't possibly* do it. Ever.
As I said, I'm not asseverating categorically that we can, though that's what I do tend to believe, with some reservations. And I don't think you're asserting that you're 100% sure that we can't (though your internal debate, through written exposition, seems to be solidifying your disposition to believe that).
I'd be happy just to dispense with *default* assumptions that predictively guarantee OR predictively interdict success in replicating human consciousness. I'm sure we can get machines to perform every task we now perceive as the exclusive domain of human intelligence in such a way as to pass the Turing Test, but I'm not 100% sure we can engineer "consciousness" (except by means of the aforementioned brute-force methodology... or by the more traditional means of doing it: having babies. We are, after all, self-replicating consciousnesses in at least that one sense. :))
Maybe only 98%. :)
Regards, Peter
P.S. Do you think we need a "Mary" somewhere in this dialogue?
Hi Peter,
Well, I don’t post that often so I can’t expect immediate responses, but, yes, I know your predilection for this subject.
I think in philosophy people do take default positions and then argue from there – it’s human nature. To quote from a long ago post entitled: What is Philosophy? ‘In a nutshell, philosophy is a point of view supported by rational argument. The corollary to this is that it requires argument to practice philosophy.’
The purpose of arguing is to have your views challenged, possibly refined, sometimes changed, but more likely tempered. I like my ideas challenged because it’s the only way to test them, without doing actual research, which is why we depend on others to inform us, and everyone contributing here is obviously well read.
Anyway, read my book and you may find another opinion or a more flexible one.
Why do you want a ‘Mary’? You think this discussion is a bit ‘blokish’, to use the Aussie vernacular?
Curiously, last night I spoke on the phone to a ‘Sue’ who has views on this subject, but she doesn’t live in cyberspace, sometimes visits, but wouldn’t leave a trail – a bit reticent in that regard.
Regards, Paul.
On the subject of solipsism, I have an opinion on that as well. Solipsism occurs in dreams, and how do we tell the difference? Very easily: if I meet you in a dream you will have no memory of it, but if I meet you in the flesh then it's something we share and we both have a conscious memory of it. Remember, that without memory, we would have no self. What's the Bob Dylan lyric: 'You can be in my dream if I can be in yours.' I think that's how it goes. Didn't know Mr. Zimmerman was a philosopher did you?
Regards, Paul.
Hi Paul,
With regard to "Mary," just a silly reference to a popular American singing group of the 60's (Peter, Paul and Mary), and also suggested to me by my transient whimsical reflection on the "other" means by which replication of human consciousness can be achieved.
Regards, Peter
Hi Paul,
Our posts "crossed in the mail."
As regards "solipsism," I was thinking more that, even if we could meet in person, and I knew for a fact that I wasn't asleep, and I could see you and hear your voice and shake your hand, I *still* couldn't preclude the possibility that all those experiences that appeared to be artifacts of my perception of a real and tangible external world, and of a real and conscious other person... were in reality nothing more than projections of my own consciousness, existing somehow alone in a void (or in something like "the matrix"). I'd consider that to be a low-order probability, but I couldn't absolutely prove that it wasn't true.
For the record, though, I do believe that you exist as a separate, conscious and highly intelligent being 12,000 miles away. (I'm sure you'll be relieved to hear it. :))
BTW, I've received your book, and reading it is next on my agenda.
Regards, Peter
Actually, I was a fan of Peter, Paul & Mary - my sister had an album of theirs. Can't remember the title, but I do know there was a song about Samson and Delilah, which I really liked.
Of course the album had my initials on it: PPM.
The point I make about solipsism is that in dreams it's really true, though few people seem to realise it until you point it out.
Searle provides a brief discussion in Mind (no mention of dreams), explaining, logically, how anyone who claims to be a solipsist would not be believed, which is why no one ever does.
He also gave the following anecdote attributed to Russell: 'I once received a letter from an eminent logician, Mrs Christine Ladd-Franklin, saying that she was a solipsist, and was surprised that there were no others.'
You've probably heard it (or read it) before.
Regards, Paul
Regards, Paul.
Developed a stutter, or there's an echo in cyberspace.
Hi Paul,
Amazed you remember Samson and Delilah, which I'd practically forgotten, despite my fondness for the group. I think they're most commonly remembered for "Puff, the Magic Dragon." I dreamed that I had misconstrued your reference to solipsism, but then I can't be sure that I'm not dreaming that I dreamed that, as I write this, or imagine that I'm writing it, or imagine that I'm having a hallucination in which, although I really am writing this, my experience of writing it is entirely manufactured and happens only accidentally to parallel the reality of the situation, although that reality is really just a "subroutine" running in "Deep Thought" on the planet Magrathea and I won't know the truth of the matter for just a bit of a while, now, but I promise I'll get back to you with a definitive answer (or imagine that I'm getting back to you -- pan-dimensionally, of course) in... oh, just about seven-and-a-half million years.
Regards, Peter
BTW, do you happen to speak mouse?
I have to say that my language abilities are very limited, despite learning French for 6 years at school. My father was a natural: though he had bugger all education he learnt German when he was a POW and never forgot it for the rest of his life.
I'm computer illiterate as well.
A bit of personal history.
Regards, Paul.
Hi Journey Man,
Wanted to drop you a note about this week's episode of Motherboard (our tech show). Saw you had previously talked about him on the site so I thought this may be of interest to you.
Ray Kurzweil tells us about his vision of the Singularity—a point around 2045 when computers will acquire full-blown artificial intelligence and technology will infuse itself with biology. His theories have all sorts of supporters, detractors, and critics, but do you even remember what life was like before three-year-olds had cell phones and you actually had to remember facts instead of relying on the internet? That was only 10 years ago. If Kurzweil is right, we'll have supercomputers more powerful than every human brain on the planet combined within a few decades.
WATCH THE SINGULARITY OF RAY KURZWEIL ON VBS.TV
You can also geek out and catch up on the previous Motherboard episodes like our interview with Richard Garriott, or the backyard rocketeer, or even robots with Professor Sankai.
Thanks for watching! See you in 2045!
Rory
Hi Paul,
My comment about speaking "mouse" was a joke -- not a reference to your (or anybody's) knowledge of languages. In Douglas Adams' Hitchhiker's Guide to the Galaxy, the Earth was created on the planet of Magrathea by a race of hyperintelligent, pan-dimensional beings whose 3-dimensional manifestation on our own planet was as mice, conducting diabolically devious experiments on psychologists who were under the misapprehension that *they* were experimenting on the mice. The rest of my post was likewise a reference to Adams' book. In the story, the computer known as "Deep Thought" took 7.5 million years to come up with the answer to "life, the universe and everything." (The answer was 42.) I sincerely apologize if I somehow gave you the impression that I was commenting negatively on your knowledge of languages (about which I know only what you've told me, and couldn't care less; your English is flawless, and I don't see much reason for you to create a website in French or German or Tagalog) or of computers (about which I think your profession of ignorance couldn't possibly be less true, unless you just mean that you're not a C++ hacker).
I really have to watch it with the flights of absurdity, or learn to use :) emoticons a lot more liberally.
Regards, Peter
As you can see I'm slow on the uptake. I am too literal sometimes. Even the Peter, Paul & Mary thing hit me belatedly.
I was nearly going to mention 42 (it crossed my mind), so you must have got through to me subliminally; I had no idea why it popped into my mind.
I never read the book or saw the movie - I did hear the radio serial version once. But everyone knows all the quirks, so I should have picked it up.
Regards, Paul.
I don’t know if you’re still following this Larry, but the Adam and Eve robots that you referenced earlier are covered in last week’s New Scientist, with a bit more detail. Basically the robot, Adam (Eve is yet to be built) uses the scientific method to gain the best results in a research project. This is how I would see AI developing, where a robot or computer does the routine work, while humans, in this case, scientists, do the higher level thinking. In fact, that’s exactly what the article says: ‘Adam, Eve and their ilk could soon automate routine and time-consuming scientific chores, leaving human scientists free to make higher-level, creative leaps.’ But the researchers also add: ‘Ultimately the robots may even be capable of conducting independent research.’
In an aside, the article also refers to work being done at Cornell University in Ithaca, New York, where ‘[researchers] have developed software that can observe physical systems and independently identify the laws that underlie them.’ Using an ‘evolutionary algorithm’ to generate equations and test them against hard data, ‘the computer produced an equation that described conservation of angular momentum.’
So I accept that there are many ways to get ‘intelligent’ machines to mimic humans, even in scientific research.
Refer: Robot Scientists
Regards, Paul.
A post script to the last comment: check out the imbedded video.
Argh - my thoughts on this matter are too scattered. It may take a few iterations to get this right, but here goes...
First of all, what would it look like for the internet to emerge as a consciousness? I for one have no idea, but I don't see why we should expect such a consciousness to be like ours - the consciousness that we have, after all, is not at all like what our neurons have (assuming that consciousness supervenes on facts about neurons). It's this kind of thing that makes me suspicious: we seemingly cannot help but talk about the idea of consciousness as though it necessarily feels like it does to us.
Another problem is that the robots, no matter how smart they get, will in some way always have had a conscious designer. Even if Adam and Eve (or similar bots) get so advanced that they start coming up with radically different methods of generating hypotheses and/or tests, there'll always be some force to the reductionist argument that their intelligence is really just our intelligence with super-hyped-up computing power (and, therefore, not really machine consciousness at all).
Still a third problem is how sensory data comes into this. There seems to be a disconnect between merely accepting data input from the exterior world and sensing, and this too might be relevant to the question of non-human/biological consciousnesses. Even without this we could make a machine capable of diagnosing its software or hardware and reporting when it's "sick" using empathetic language (e.g., "Quick, call the doctor - I'm in really bad shape!"), but I think most people would still say that it's not conscious. But is that really relevant? My intuition says it is, but I couldn't for the life of me say why.
Like I said, this whole thing confuses me.
Hi Larry,
I'm not surprised you're confused. This is an area of 'research' where we feel we are on the cusp, so 'we' (includes academia, experts and ordinary laypeople like you and me) don't like to admit how ignorant we are. But I pretty well agree with everything you say here.
Absolutely no shame in being confused - gets back to my quote from Confucius: 'To realise that you know something when you do, and to realise that you do not when you do not, this is knowing.' Stating the bloody obvious, I know, yet the complete antithesis to fundamentalism which is blind certainty.
Regards, Paul.
Hi Paul,
Not Confucius, but along the same lines, "frogs at the bottom of a well see only a corner of the sky," and I tend to agree. It's not something I believe, but it certainly is what attorneys would call a "colorable argument" that, despite the apparent centrality of recursion to human intelligence, the ultimate recursive act of apprehension of the nature of human intelligence by that intelligence itself is somehow mathematically prohibited. Probably an argument along the lines of Goedel's, though Goedel's Theorem, as I've mentioned previously, only precludes comprehensiveness: not comprehension.
Anyway, why I'm writing is that Larry's comment (Hi Larry), that "robots...will in some way always have had a conscious designer," though intuitively reasonable, both is and is not really true. It's true in the most basic sense in which the initiator of a process that results in something possibly unforeseeable can still be said to have "designed" it.
By way of an example, one not-very-new, but often effective, AI technology called "genetic algorithms" lets you solve problems or even solve the problem of designing an algorithm by means of a simulation of the evolutionary process, wherein you start almost ex nihilo, with the most generic sort of entity, and let it evolve influenced only by heuristics that encode some of the properties that you want to see in the final result. I'm not sure that could be described as "design." (I'm also very unsure that you could produce a sentient being using genetic algorithms, but leave that aside, since I'm equally unsure that you couldn't.) If this were intuitively what most people meant by "design," then you wouldn't have all these religious fundamentalists still attempting to discredit the theory of evolution.
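Just to give the flavour of a genetic algorithm, here's a toy sketch in Python; everything in it (the bit-string "genome", the fitness function, the parameters) is an arbitrary illustration, not any real system:

import random

# Toy genetic algorithm: evolve a bit-string "genome" toward an arbitrary desired
# property (here, simply having as many 1s as possible). The heuristic only encodes
# what we want to see in the result; nothing is designed explicitly.

GENOME_LENGTH = 40
POPULATION = 50
GENERATIONS = 200
MUTATION_RATE = 0.02

def fitness(genome):
    return sum(genome)  # stand-in for "properties we want in the final result"

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_LENGTH)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # rank by the heuristic
    parents = population[:POPULATION // 2]       # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION - len(parents))]
    population = parents + children

print(fitness(max(population, key=fitness)))     # should approach 40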
And, of course, neural nets can "design" algorithms by learning from multiple examples of the behavior you want those algorithms to exhibit, or from multiple examples of other algorithms. (At a simpler level, and more usually, they just learn the i/o pattern, but they can also be made to produce algorithms encoded in network form.) Basically, they do what the brain does in the way of pattern learning, though I wouldn't claim they're sentient. Whether they could produce a network that had sentience as an emergent property, I really don't know, though I tend to doubt it on grounds of the computational intensity exceeding that for NP-complete problems that I think would be required. I'm generally reticent to rule things out on the grounds of computational cost, but you'd have to if, for example, your best projection of the time required for the number of machine cycles running even on a quantum computer tended to exceed the life expectancy of the known universe.
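And the simplest possible illustration of "learning the i/o pattern": a toy perceptron, again in Python and again entirely arbitrary (it learns logical OR from examples, and nobody would call it sentient):

import random

# Toy perceptron: learns the input/output pattern of logical OR from examples.
# Pure pattern learning -- no rule for OR is written anywhere below.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
rate = 0.1

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for _ in range(100):                      # repeated presentation of the examples
    for x, target in examples:
        error = target - predict(x)       # classic perceptron update rule
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # expect [0, 1, 1, 1]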
But then, you're right, Paul. I am (or used to be) an "expert," and though I *think* (even *believe* with a fairly high degree of certainty) that we can create sentient intelligence artificially, I really don't know exactly how, and I have no means of proving the proposition, other than to resort to intuition informed by my experience of designing programs that merely solved the task of emulating the problem-solving behavior of human intelligence. For none of those programs would I ever have claimed sentience.
"By way of an example, one not-very-new, but often effective, AI technology called "genetic algorithms" [description etc.]. I'm not sure that could be described as 'design.'"
For what it's worth I would describe it that way. I've seen people play around with this to make faces and stuff, and from what I've seen this fits what I call "design." But the actual answer is less important to me than the question, because the question applies equally to machines and biological creatures and I'm not sure how much people realize that. (You do, obviously, but...)
"Whether they could produce a network that had sentience as an emergent property, I really don't know, though I tend to doubt it on grounds of the computational intensity exceeding that for NP-complete problems that I think would be required."
Ah! See, this is the sort of thing that I would like to see more of: though I don't know what NP-completeness is (I remember hearing the term a lot from my compsci friends in college but never got the details), it's at least a concrete, well-understood thing that we can track. As for computational power, I thought that had been taken care of, hadn't it? They keep saying that computers will soon be able to match the human brain in computing power - but maybe I'm misremembering that?
I think it's also helpful to recall that the game of life can in theory be used to make a universal Turing machine. (Dennett talks about this in Freedom Evolves, I can find the section if you like.) If we postulate that sentience (not even necessarily human sentience, just sentience) is replicable with a universal Turing machine, then we know for sure that the question comes down to computing power - and so on and so forth. I'd rather try to work up from preexisting notions than try to work from thought experiments, basically.
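For what it's worth, the update rule itself is tiny -- here's a toy Python sketch of a single Game of Life step (the universal Turing machine construction Dennett refers to is, of course, built from vastly larger patterns than this little glider):

from collections import Counter

# Toy sketch of Conway's Game of Life: each cell lives or dies purely by counting
# its live neighbours. Anything above this rule is emergent.

def step(live_cells):
    """live_cells is a set of (x, y) coordinates of currently live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "glider": a pattern that reproduces itself one cell diagonally every four steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))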
Hi Larry,
>Ah! See, this is the sort of thing that I would like to see more of: though I don't know what NP-completeness is (I remember hearing the term a lot from my compsci friends in college but never got the details), it's at least a concrete, well-understood thing that we can track. As for computational power, I thought that had been taken care of, hadn't it? They keep saying that computers will soon be able to match the human brain in computing power - but maybe I'm misremembering that?
There is a class of problems known as "nondeterministic-polynomially-complete" (two of the more common of which are the "knapsack problem" and "the travelling salesman problem") which are thought to be computationally intractable, though this has yet to be proven conclusively. What's interesting is that it *has* been proven that if you could find a polynomial-time solution to *any* of these problems, it would solve all of them, since they've been shown to be isomorphic. (I used to teach a course called "analysis of algorithms" in which most of the focus was on methods of finding the algebraic expressions that describe the amount of time or space required to execute a given program as a function of its size, which can be interesting if, for example, the program in question is recursive in a complicated way.) Anyway, you're spot on to identify this as a tangible, concrete way of determining whether something is doable.
People can argue this, predicated on their assumptions about what the particulate units are in the human brain of memory and of computation, but you're also right in thinking that, if we haven't already grossly exceeded the capacity of the human brain in storage capacity and computational power (and I think we have) on one computer or a network of them, then we're certainly playing in the same ballpark. So that wasn't what I meant by saying that we lacked the computational power to "build" a brain of power equivalent to our own using genetic algorithms or neural networks, which probably wouldn't be the most efficient way to go. To resort to a (somewhat lame) analogy I might have enough resources to run or to build a car (if I knew how to do it beforehand), but I wouldn't have the resources to build a General Motors factory that would in turn build a car, especially not if that factory were staffed by 10,000 monkeys connecting machinery by trial-and-error. :)) The monkeys would have better luck reproducing the first draft of Hamlet. Anyway, the point is that, although it may be theoretically possible to do something in a certain way, it can probably be shown that that method would take effectively forever.
Regards, Peter
I really hope none of this is an original thought...
From reading wiki, it seems that we can't solve NP-complete problems in their generalized formulations - is that right? In other words, this isn't just an academic matter relating to the philosophy of mind but also a real opportunity to add something to the human arsenal, so to speak. If that's the case, I'm not totally sure why it would be the case that NP-completable algorithms (is there a less unwieldy term for this?) are necessary for consciousness or sentience.
And: what about other animals? It seems, on reflection, like kind of an absurdly high bar to set for ourselves that if we cannot simulate human consciousness then we have failed.
And: have we ever tried to connect several genetic algorithms? A major part of our consciousness, it seems, is analogizing and approaching learning in a fundamentally multidisciplinary fashion. I wouldn't be at all surprised, then, if only something very advanced but weirdly consciousness-empty could be made from just one genetic algorithm (that is, just an algorithm or a group thereof designed to address one problem in one way): this would be like a hyperbolic case of autism, in a way. But chaining them together somehow might at least grant the appearance of consciousness, might it not?
The internet, it occurs to me, would be the best possible resource to try something like this. Since so much of our lives now take place online, it seems like we ought (in theory) to be able to code one genetic algorithm that applies for jobs (altering the job postings to which it applies and maybe even its "credentials"), another that looks for "satisfying" ways to spend money once it "has" a job (based, say, on given personality traits), a third that navigates social networks hunting for friends, another for managing memory, and so on. (Let's say, just to make things easy, that a person steps in to provide at least some of the linguistic help for this.)
Has anyone ever tried something like this? Would it be at all feasible (assume for the sake of argument that someone would pay you to try)? If I were a genius programmer this seems like something I might want to experiment with, but unfortunately I'm a really terrible programmer, so I can only throw ideas around to see what sticks.
>From reading wiki, it seems that we can't solve NP-complete problems in their generalized formulations - is that right?
We can't solve NP-complete problems of size n where n is *large* in a reasonable amount of time. As an example, consider the knapsack problem. You are presented with a pile of rocks of varying weights, and asked to fill a knapsack with a selection of rocks whose total weight is exactly x. Obviously, it's a combinatorially explosive problem, since you have to consider all the possible subsets of rocks, and that number grows as 2^n, where n is the number of rocks in the pile (for a cousin like the travelling salesman problem, where you are choosing an ordering, the brute-force count grows even faster, with n!, n factorial). Problems of exponential or factorial complexity are intractable, because, whereas we can solve them for small cases (5 or 10 rocks), 2^100 is already about 1.3 x 10^30, and 100! is 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000, which, even with a very fast robot (or simulated with a very fast processor at 1 case per nanosecond), is just too many seconds. By contrast, an example of a polynomial problem would be one whose computational cost was n-squared. 100^2 is 10,000, which is very doable.
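A toy Python sketch of that brute force, with made-up rock weights, just to show where the explosion comes from:

from itertools import combinations

# Brute-force "knapsack" (subset-sum): check every subset of a small pile of rocks.
# The rock weights and the target are made-up numbers for illustration only.

rocks = [7, 3, 12, 5, 9, 1, 4, 8]        # 8 rocks -> 2**8 = 256 subsets: trivial
target = 20

solutions = [subset
             for r in range(len(rocks) + 1)
             for subset in combinations(rocks, r)
             if sum(subset) == target]
print(len(solutions), solutions[:3])

# With 100 rocks the same loop would have to examine 2**100 subsets:
print(2 ** 100)    # 1267650600228229401496703205376 -- "too many seconds" at one per nanosecond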
>In other words, this isn't just an academic matter relating to the philosophy of mind but also a real opportunity to add something to the human arsenal, so to speak.
Actually, I kind of just arbitrarily chose NP-complete problems as a lower boundary, since I haven't bothered to do the math, but it's obvious to me that the cost of designing a human-like brain of size "n" using genetic algorithms or neural networks would be at least that bad, and probably much worse. But you're right: if you can even come up with a polynomial solution to NP-complete problems -- leave aside replicating sentience artificially -- you ought to be in line for at least two or three Nobel Prizes for what that would be worth in advancing all the sciences (except that they don't give Nobel Prizes in math or cs).
>If that's the case, I'm not totally sure why it would be the case that NP-completable algorithms (is there a less unwieldy term for this?) are necessary for consciousness or sentience.
They might not be. I was saying only that the cost of generating such algorithms using only neural nets or genetic algorithms would *itself* be NP-complete (or worse).
>And: what about other animals? It seems, on reflection, like kind of an absurdly high bar to set for ourselves that if we cannot simulate human consciousness then we have failed.
Well, we've been moving incrementally in that direction, and have made phenomenal strides, but no one claims to have created sentience. I'm not setting any bars, nor would I feel we had wasted our time if we didn't actually produce a sentient machine. I've always thought it was interesting and gratifying just to be able to replicate human problem-solving processes, and write programs that could solve problems that humans sometimes couldn't.
> And: have we ever tried to connect several genetic algorithms?
Not sure exactly how you mean, but practically every combination and variation of the existing, well-established technologies does seem to me to have been tried, if for no other purpose than to churn out another dissertation. :)
"Actually, I kind of just arbitrarily chose NP-complete problems as a lower boundary, since I haven't bothered to do the math, but it's obvious to me that the cost of designing a human-like brain of size "n" using genetic algorithms or neural networks would be at least that bad, and probably much worse."
This is the thing that I'm attracted to. I wonder if we're even capable of doing the math, at this point - it seems like we might not know enough about the brain to come up with anything other than a very rough estimate (although, on the other hand, even a very rough estimate could in theory discern between polynomial time and not).
"Not sure exactly how you mean, but practically every combination and variation of the existing, well-established technologies does seem to me to have been tried, if for no other purpose than to churn out another dissertation. :)"
Fair point! I'm not actually entirely sure what I mean, but it would have to be something like having a genetic algorithm to manage several other ones. In other words, even if you have really really good learning algorithms running in parallel as a part of one (pseudo-)consciousness, that won't even approximate what we experience because its learning will be partitioned. (This would be like learning the formulae of calculus and Newtonian physics but never being able to see the connection.) But the way we break down those partitions seems also to be governed by a self-correcting algorithm or heuristic - I'm just not entirely sure, as I said, how to formalize it.
Anyway, Kurzweil apparently thinks we'll have reached an important juncture in this by 2029, which should be well within my lifetime. I guess I'll just have to be patient!
"Fair point! I'm not actually entirely sure what I mean, but it would have to be something like having a genetic algorithm to manage several other ones. In other words, even if you have really really good learning algorithms running in parallel as a part of one (pseudo-)consciousness, that won't even approximate what we experience because its learning will be partitioned. (This would like learning the formulae of calculus and Newtonian physics but never being able to see the connection.) But the way we break down those partitions seems also to be governed by a self-correcting algorithm or heuristic - I'm just not entirely sure, as I said, how to formalize it."
I see what you're getting at. I could write an intelligent agent program to apply for jobs on-line, say by scanning the entries in monsterjobs.com and careerbuilder.com and some of the other on-line databases, and using an algorithm to generate a bogus resume and an application letter tailored to the apparent requirements of each job. This would only require some rudimentary NL-understanding and templates for boilerplate resume- and letter-generation, or I could make it a bit more sophisticated, depending on how much effort I wanted to devote to the task. If it were possible to confine the whole process to on-line activity (circumventing the necessity of having an NLU-cum-language generation/voice synthesis program with the ability actually to make a telephone call and pass the Turing Test with an interviewer), then I could also move up a level, and use a set of operators to modify the program, whether by algorithm perturbation or changing parameters, to produce a generation of its "children" and see which children were most successful at eliciting responses from prospective employers, and allow those, in turn, to generate progeny in the same fashion, and eventually I'd perhaps get some really super-effective pseudo-applicant agent that might be better than the first one, or the best one I could design a priori by myself without resorting to artificial evolutionary techniques. Then I could do the same thing in a number of other domains of human activity (writing blog entries and responses to comments whose effectiveness could be measured by the number of hits, e.g.), and so I'd have a whole bunch of "evolved" agent programs which, in the aggregate, might be able to perform a significant number of the ordinary on-line tasks a human typically does, but I don't think I'd claim that those programs were, either individually or in the aggregate, sentient in any meaningful way. I actually think you'd have to "seed" a genetic algorithm with some more primitive cognitive constructs (algorithmic or heuristic) capable of being used to effect more generic sorts of learning -- and hope, by a judicious choice of primitives and operators somehow to get extremely lucky with your evolution, so that there'd eventually be an emergent property that could be characterized as sentient, but my instinct is to doubt that you could do it that way.
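(If it helps, the bare skeleton of that perturb-and-select loop looks something like the following Python; the "agent" parameters and the scoring function are hypothetical stand-ins for the responses-elicited measurements I described, not anything I've actually built:)

import random

# Skeleton of the evolutionary agent idea: an "agent" is just a bag of parameters,
# perturb() generates its children, and score_agent() stands in for "responses
# elicited from prospective employers" (here faked with an arbitrary formula).

def score_agent(agent):
    # Placeholder fitness; in the real scheme this would be measured on-line.
    return -abs(agent["tone"] - 0.7) - abs(agent["length"] - 350) / 1000 + random.gauss(0, 0.01)

def perturb(agent):
    child = dict(agent)
    child["tone"] = min(1.0, max(0.0, child["tone"] + random.gauss(0, 0.1)))
    child["length"] = max(50, child["length"] + random.randint(-50, 50))
    return child

parent = {"tone": 0.2, "length": 800}                   # an arbitrary starting agent
for generation in range(30):
    children = [perturb(parent) for _ in range(10)]     # branching factor of 10
    parent = max(children + [parent], key=score_agent)  # keep the best performer

print(parent)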
It's always easier to design an agent program for one particular application "top-down" (algorithmically), or create one with an intelligently-designed neural net that could be given lots of examples of the behavior it was supposed to exhibit, but then nobody would claim you'd managed to "design" sentience, unless we really understood sentience, in which case seeking to harness the power of learning algorithms in the hope of creating it on analogy with getting lightning to strike just the right amino acids in the primordial soup would be a potential alternative approach, but as I remarked earlier, one incredibly difficult to manage. If you *could* manage it, it would probably be because you'd been able to deploy evolutionary operators and heuristics for ranking successful children based on *some* real understanding of what sentience was. And if you had *that*, then resorting to the evolutionary approach would probably be rendered unnecessary. Also, the evolutionary approach would be incredibly costly computationally, and also in terms of "real-time" since we'd be interfacing with the "real" (or, at least cyber-) world to judge the success of each of the progeny in each generation actually capable of outperforming its parent, though most would be "junk DNA" throwaways. But leave aside "real-time" considerations, and I still think the process would be too computation-intensive, though I'd have to make lots of assumptions about the nature of the seed programs and the transformational operators to get a mathematical handle on the cost.
And so on (blather, blather, blather). Sorry about the run-on, stream-of-consciousness explanation, but I hope you got the idea.
Anyway, one thing I would emphasize is that I don't think you get sentience by building it in several hundred pieces, each replicating some human cognitive skill, and then trying to assemble them, though it's certainly tempting to try that approach as a way of passing the Turing Test.
Regards, Peter
"I actually think you'd have to "seed" a genetic algorithm with some more primitive cognitive constructs (algorithmic or heuristic) capable of being used to effect more generic sorts of learning -- and hope, by a judicious choice of primitives and operators somehow to get extremely lucky with your evolution, so that there'd eventually be an emergent property that could be characterized as sentient, but my instinct is to doubt that you could do it that way."
Yes! That's precisely what I meant: any utility-driven learning algorithm, or haphazard collection thereof, will necessarily lack the creativity of association that's indicative of conscious/sentient thought, which means a more generic overseer function would be required. Luck might not be the right word here, but even if it is, it then becomes a question of statistics: how long must we run such a program before we get an interesting result? My intuition says sentience, or at least something eerily sentience-like, would come of this, but that's the point of it being an experiment.
On the other side of the coin, though, I don't think such an experiment would necessarily have to succeed in order to be valuable. In particular, the quantity and quality of its failures could well indicate new directions for research, give rise to new theories, etc.
"...the evolutionary approach would be incredibly costly computationally, and also in terms of "real-time" since we'd be interfacing with the "real" (or, at least cyber-) world to judge the success of each of the progeny in each generation actually capable of outperforming its parent, though most would be "junk DNA" throwaways."
Oh, absolutely. This would probably be the biggest challenge: trying to translate millions of years of actual evolution into a hugely artificial system in such a way that the computations governing that translation terminate any time soon.
"I don't think you get sentience by building it in several hundred pieces, each replicating some human cognitive skill, and then trying to assemble them."
Fair enough - but would you say that if sentience evolved this would have been how? Certainly the "several hundred pieces, each replicating some human (cognitive?) skill" came first, at least historically. The only possible difference I can see is the word "trying": if sentience evolved naturalistically, it clearly didn't do so because of somebody trying to make that happen. That shouldn't make the slightest bit of difference in an experiment, though, because the intentions of an algorithm's designer (or lack thereof, if that algorithm was created through biological evolution) are causally inert with respect to how that algorithm treats input.
"Fair enough - but would you say that if sentience evolved this would have been how?"
I certainly wouldn't preclude it, though I'd tend to think that we've long had a constellation of discrete (though interrelated) cognitive skills, all of which have been evolving concurrently. Where there was a tipping point is, for the moment, a bit imponderable, but I also think it's a matter of what we mean by "sentience." I'd account non-human primates (chimps, et al.) and cetaceans "sentient." Perhaps even most or all other mammals are in some trivial sense, though obviously not capable of human-like internal self-talk and ratiocination.
Our objective, though, is sentience conjoined with human-like intelligence, and I think we're generally on the same page. It's hard to evaluate computational costs and the probability of success within, say, 20 years, without a much more specific experimental design for the seed algorithms and the evolutionary heuristics for "pruning" the children. How many do we want to generate, and how many of those do we want to keep, in each successive generation (i.e., what is the branching factor)? Our costs are going to be at least b^n * k, where b is the branching factor, n is the number of generations required to produce either success or at least an interesting result, and k is the cost of generating and subsequently evaluating a given child.
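(As a back-of-the-envelope illustration of that b^n * k estimate, with purely made-up figures, the cost runs away very quickly:)

# b = branching factor (children per parent), n = generations,
# k = cost of generating and evaluating one child, here taken as 60 seconds.
# The numbers are hypothetical; the point is the exponential growth.
b, k = 10, 60
for n in (5, 10, 15, 20):
    seconds = (b ** n) * k
    years = seconds / (60 * 60 * 24 * 365)
    print(f"n = {n:2d}: {seconds:.1e} seconds (~{years:.1e} years)")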
Chess has a branching factor that averages about 30, though good heuristics, the minimax algorithm and alpha-beta pruning make the search tractable without anything like exhaustive enumeration. If you had to be exhaustive and perfect, though, there are on the order of 10^120 possible games of chess (the Shannon number). By contrast, the number of particles in the universe is estimated by physicists to be approx. 10^78. :) Deep Blue went down, I think, as far as 12 plies on average. Would that be enough generations to produce sentience? Would the branching factor be much smaller? Would the cost of evaluating a child "brain" (an immeasurably more complicated object than a chessboard) be a matter of milliseconds or minutes?
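(For reference, the minimax-with-pruning idea mentioned above in miniature, run on a toy game tree; the tree and its leaf values are invented purely for illustration:)

def minimax(node, alpha, beta, maximizing):
    # Leaves are plain numbers (static evaluations of a position).
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, minimax(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune the remaining siblings
        return value
    value = float("inf")
    for child in node:
        value = min(value, minimax(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 5], [2, [9, 1]], [0, 7]]
print(minimax(tree, float("-inf"), float("inf"), True))  # prints 3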
Regards, Peter
Great post and interesting discussion thread.
Can machines achieve sentience? Of course, they can. All we're talking about is a difference of medium. This all reminds me of Terry Bisson's They're Made Out Of Meat which I am sure Paul has seen before. Here it is:
THEY'RE MADE OUT OF MEAT
by Terry Bisson
"They're made out of meat."
"Meat?"
"Meat. They're made out of meat."
"Meat?"
"There's no doubt about it. We picked up several from different parts of the planet, took them aboard our recon vessels, and probed them all the way through. They're completely meat."
"That's impossible. What about the radio signals? The messages to the stars?"
"They use the radio waves to talk, but the signals don't come from them. The signals come from machines."
"So who made the machines? That's who we want to contact."
"They made the machines. That's what I'm trying to tell you. Meat made the machines."
"That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat."
"I'm not asking you, I'm telling you. These creatures are the only sentient race in that sector and they're made out of meat."
"Maybe they're like the orfolei. You know, a carbon-based intelligence that goes through a meat stage."
"Nope. They're born meat and they die meat. We studied them for several of their life spans, which didn't take long. Do you have any idea what's the life span of meat?"
"Spare me. Okay, maybe they're only part meat. You know, like the weddilei. A meat head with an electron plasma brain inside."
"Nope. We thought of that, since they do have meat heads, like the weddilei. But I told you, we probed them. They're meat all the way through."
"No brain?"
"Oh, there's a brain all right. It's just that the brain is made out of meat! That's what I've been trying to tell you."
"So ... what does the thinking?"
"You're not understanding, are you? You're refusing to deal with what I'm telling you. The brain does the thinking. The meat."
"Thinking meat! You're asking me to believe in thinking meat!"
"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you beginning to get the picture or do I have to start all over?"
"Omigod. You're serious then. They're made out of meat."
"Thank you. Finally. Yes. They are indeed made out of meat. And they've been trying to get in touch with us for almost a hundred of their years."
"Omigod. So what does this meat have in mind?"
"First it wants to talk to us. Then I imagine it wants to explore the Universe, contact other sentiences, swap ideas and information. The usual."
"We're supposed to talk to meat."
"That's the idea. That's the message they're sending out by radio. 'Hello. Anyone out there. Anybody home.' That sort of thing."
"They actually do talk, then. They use words, ideas, concepts?"
"Oh, yes. Except they do it with meat."
"I thought you just told me they used radio."
"They do, but what do you think is on the radio? Meat sounds. You know how when you slap or flap meat, it makes a noise? They talk by flapping their meat at each other. They can even sing by squirting air through their meat."
"Omigod. Singing meat. This is altogether too much. So what do you advise?"
"Officially or unofficially?"
"Both."
"Officially, we are required to contact, welcome and log in any and all sentient races or multibeings in this quadrant of the Universe, without prejudice, fear or favor. Unofficially, I advise that we erase the records and forget the whole thing."
"I was hoping you would say that."
"It seems harsh, but there is a limit. Do we really want to make contact with meat?"
"I agree one hundred percent. What's there to say? 'Hello, meat. How's it going?' But will this work? How many planets are we dealing with here?"
"Just one. They can travel to other planets in special meat containers, but they can't live on them. And being meat, they can only travel through C space. Which limits them to the speed of light and makes the possibility of their ever making contact pretty slim. Infinitesimal, in fact."
"So we just pretend there's no one home in the Universe."
"That's it."
"Cruel. But you said it yourself, who wants to meet meat? And the ones who have been aboard our vessels, the ones you probed? You're sure they won't remember?"
"They'll be considered crackpots if they do. We went into their heads and smoothed out their meat so that we're just a dream to them."
"A dream to meat! How strangely appropriate, that we should be meat's dream."
"And we marked the entire sector unoccupied."
"Good. Agreed, officially and unofficially. Case closed. Any others? Anyone interesting on that side of the galaxy?"
"Yes, a rather shy but sweet hydrogen core cluster intelligence in a class nine star in G445 zone. Was in contact two galactic rotations ago, wants to be friendly again."
"They always come around."
"And why not? Imagine how unbearably, how unutterably cold the Universe would be if one were all alone ..."
Thanks TAM,
Actually, I hadn't seen or heard it before. Wouldn't call it a strong argument for machine sentience but it's satirical and entertaining.
Regards, Paul.