Comments on Journeyman Philosopher: The Singularity Prophecy

Paul P. Mealing (2009-04-26 22:35):
Thanks TAM,

Actually, I hadn't seen or heard it before. Wouldn't call it a strong argument for machine sentience, but it's satirical and entertaining.

Regards, Paul.

The Atheist Missionary (2009-04-26 21:52):
Great post and interesting discussion thread.

Can machines achieve sentience? Of course they can. All we're talking about is a difference of medium. This all reminds me of Terry Bisson's "They're Made Out of Meat", which I am sure Paul has seen before. Here it is:

"THEY'RE MADE OUT OF MEAT" by Terry Bisson

"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

"Meat?"

"There's no doubt about it. We picked up several from different parts of the planet, took them aboard our recon vessels, and probed them all the way through. They're completely meat."

"That's impossible. What about the radio signals? The messages to the stars?"

"They use the radio waves to talk, but the signals don't come from them. The signals come from machines."

"So who made the machines? That's who we want to contact."

"They made the machines. That's what I'm trying to tell you. Meat made the machines."

"That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat."

"I'm not asking you, I'm telling you. These creatures are the only sentient race in that sector and they're made out of meat."

"Maybe they're like the orfolei. You know, a carbon-based intelligence that goes through a meat stage."

"Nope. They're born meat and they die meat. We studied them for several of their life spans, which didn't take long. Do you have any idea what's the life span of meat?"

"Spare me. Okay, maybe they're only part meat. You know, like the weddilei. A meat head with an electron plasma brain inside."

"Nope. We thought of that, since they do have meat heads, like the weddilei. But I told you, we probed them. They're meat all the way through."

"No brain?"

"Oh, there's a brain all right. It's just that the brain is made out of meat! That's what I've been trying to tell you."

"So ... what does the thinking?"

"You're not understanding, are you? You're refusing to deal with what I'm telling you. The brain does the thinking. The meat."

"Thinking meat! You're asking me to believe in thinking meat!"

"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you beginning to get the picture or do I have to start all over?"

"Omigod. You're serious then. They're made out of meat."

"Thank you. Finally. Yes. They are indeed made out of meat. And they've been trying to get in touch with us for almost a hundred of their years."

"Omigod. So what does this meat have in mind?"

"First it wants to talk to us. Then I imagine it wants to explore the Universe, contact other sentiences, swap ideas and information. The usual."

"We're supposed to talk to meat."

"That's the idea. That's the message they're sending out by radio. 'Hello. Anyone out there. Anybody home.' That sort of thing."

"They actually do talk, then. They use words, ideas, concepts?"

"Oh, yes. Except they do it with meat."

"I thought you just told me they used radio."

"They do, but what do you think is on the radio? Meat sounds. You know how when you slap or flap meat, it makes a noise? They talk by flapping their meat at each other. They can even sing by squirting air through their meat."

"Omigod. Singing meat. This is altogether too much. So what do you advise?"

"Officially or unofficially?"

"Both."

"Officially, we are required to contact, welcome and log in any and all sentient races or multibeings in this quadrant of the Universe, without prejudice, fear or favor. Unofficially, I advise that we erase the records and forget the whole thing."

"I was hoping you would say that."

"It seems harsh, but there is a limit. Do we really want to make contact with meat?"

"I agree one hundred percent. What's there to say? 'Hello, meat. How's it going?' But will this work? How many planets are we dealing with here?"

"Just one. They can travel to other planets in special meat containers, but they can't live on them. And being meat, they can only travel through C space. Which limits them to the speed of light and makes the possibility of their ever making contact pretty slim. Infinitesimal, in fact."

"So we just pretend there's no one home in the Universe."

"That's it."

"Cruel. But you said it yourself, who wants to meet meat? And the ones who have been aboard our vessels, the ones you probed? You're sure they won't remember?"

"They'll be considered crackpots if they do. We went into their heads and smoothed out their meat so that we're just a dream to them."

"A dream to meat! How strangely appropriate, that we should be meat's dream."

"And we marked the entire sector unoccupied."

"Good. Agreed, officially and unofficially. Case closed. Any others? Anyone interesting on that side of the galaxy?"

"Yes, a rather shy but sweet hydrogen core cluster intelligence in a class nine star in G445 zone. Was in contact two galactic rotations ago, wants to be friendly again."

"They always come around."

"And why not?
Imagine how unbearably, how unutterably cold the Universe would be if one were all alone ..."

PK (2009-04-24 17:11):

"Fair enough - but would you say that if sentience evolved this would have been how?"

I certainly wouldn't preclude it, though I'd tend to think that we've long had a constellation of discrete (though interrelated) cognitive skills, all of which have been evolving concurrently. Where the tipping point was is, for the moment, a bit imponderable, and I think it's also a matter of what we mean by "sentience." I'd account lower primates (chimps et al.) and cetaceans "sentient." Perhaps even most or all other mammals are, in some trivial sense, though obviously not capable of human-like internal self-talk and ratiocination.

Our objective, though, is sentience conjoined with human-like intelligence, and I think we're generally on the same page. It's hard to evaluate computational costs and the probability of success within, say, 20 years, without a much more specific experimental design for the seed algorithms and the evolutionary heuristics for "pruning" the children. How many do we want to generate, and how many of those do we want to keep, in each successive generation (i.e., what is the branching factor)? Our costs are going to be at least b^n * k, where b is the branching factor, n is the number of generations required to produce either success or at least an interesting result, and k is the cost of generating and subsequently evaluating a given child.

Chess has a branching factor that averages about 30, though good heuristics and the minimax algorithm make a brute-force approach perfectly viable.
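For a sense of scale, the b^n * k bound above can be evaluated directly. In this sketch the branching factor of 30 echoes the chess analogy, while the generation count of 12 and the 1 ms per-child cost are assumptions picked purely for illustration:

```python
# Rough evaluation of the b^n * k cost bound: b = branching factor,
# n = number of generations, k = seconds to generate and evaluate one child.
# b = 30 mirrors the chess analogy; n = 12 and k = 0.001 s are assumptions.

def evolution_cost_years(b: int, n: int, k: float) -> float:
    seconds = (b ** n) * k
    return seconds / (365.25 * 24 * 3600)

# Even these modest numbers give a cost in the tens of millions of years:
print(f"{evolution_cost_years(30, 12, 0.001):,.0f} years")
```

The exponent dominates everything else: adding a single generation multiplies the bill by the branching factor, so shaving k down by orders of magnitude barely helps.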
If you had to be exhaustive and perfect, though, there are something like 10^122 possible chess games that can actually be played out, while the number of particles in the universe is estimated by physicists to be approx. 10^78. :) Deep Blue went down, I think, as far as 12 plies on average. Would that be enough generations to produce sentience? Would the branching factor be much smaller? Would the cost of evaluating a child "brain" (an immeasurably more complicated object than a chessboard) be a matter of milliseconds or minutes?

Regards, Peter

Eli (2009-04-24 14:15):

"I actually think you'd have to "seed" a genetic algorithm with some more primitive cognitive constructs (algorithmic or heuristic) capable of being used to effect more generic sorts of learning -- and hope, by a judicious choice of primitives and operators, somehow to get extremely lucky with your evolution, so that there'd eventually be an emergent property that could be characterized as sentient, but my instinct is to doubt that you could do it that way."

Yes! That's precisely what I meant: any utility-driven learning algorithm, or haphazard collection thereof, will necessarily lack the creativity of association that's indicative of conscious/sentient thought, which means a more generic overseer function would be required. Luck might not be the right word here, but even if it is, it then becomes a question of statistics: how long must we run such a program until we get an interesting result? My intuition says sentience, or at least something eerily sentience-like, would come of this, but that's the point of it being an experiment.

On the other side of the coin, though, I don't think such an experiment would necessarily have to succeed in order to be valuable.
In particular, the quantity and quality of its failures could well indicate new directions for research, give rise to new theories, etc.

"...the evolutionary approach would be incredibly costly computationally, and also in terms of "real-time" since we'd be interfacing with the "real" (or, at least cyber-) world to judge the success of each of the progeny in each generation actually capable of outperforming its parent, though most would be "junk DNA" throwaways."

Oh, absolutely. This would probably be the biggest challenge: trying to translate millions of years of actual evolution into a hugely artificial system in such a way that the computations governing that translation terminate any time soon.

"I don't think you get sentience by building it in several hundred pieces, each replicating some human cognitive skill, and then trying to assemble them."

Fair enough - but would you say that, *if* sentience evolved, this would have been how? Certainly the "several hundred pieces, each replicating some human (cognitive?) skill" came first, at least historically. The only possible difference I can see is the word "trying": if sentience evolved naturalistically, it clearly didn't do so because of somebody trying to make that happen. That shouldn't make the slightest bit of difference in an experiment, though, because the intentions of an algorithm's designer (or lack thereof, if that algorithm was created through biological evolution) are causally inert with respect to how that algorithm treats input.

PK (2009-04-24 13:53):

"Fair point! I'm not actually entirely sure what I mean, but it would have to be something like having a genetic algorithm to manage several other ones.
In other words, even if you have really, really good learning algorithms running in parallel as part of one (pseudo-)consciousness, that won't even approximate what we experience, because its learning will be partitioned. (This would be like learning the formulae of calculus and Newtonian physics but never being able to see the connection.) But the way we break down those partitions seems also to be governed by a self-correcting algorithm or heuristic - I'm just not entirely sure, as I said, how to formalize it."

I see what you're getting at. I could write an intelligent agent program to apply for jobs on-line, say by scanning the entries in monsterjobs.com and careerbuilder.com and some of the other on-line databases, and using an algorithm to generate a bogus resume and an application letter tailored to the apparent requirements of each job. This would only require some rudimentary NL understanding and templates for boilerplate resume- and letter-generation, or I could make it a bit more sophisticated, depending on how much effort I wanted to devote to the task. If it were possible to confine the whole process to on-line activity (circumventing the necessity of having an NLU-cum-language-generation/voice-synthesis program with the ability actually to make a telephone call and pass the Turing Test with an interviewer), then I could also move up a level and use a set of operators to modify the program, whether by algorithm perturbation or by changing parameters, to produce a generation of its "children" and see which children were most successful at eliciting responses from prospective employers; allow those, in turn, to generate progeny in the same fashion; and eventually I'd perhaps get some really super-effective pseudo-applicant agent that might be better than the first one, or better than the best one I could design a priori by myself without resorting to artificial evolutionary techniques.
Then I could do the same thing in a number of other domains of human activity (writing blog entries and responses to comments whose effectiveness could be measured by the number of hits, e.g.), and so I'd have a whole bunch of "evolved" agent programs which, in the aggregate, might be able to perform a significant number of the ordinary on-line tasks a human typically does. But I don't think I'd claim that those programs were, either individually or in the aggregate, sentient in any meaningful way. I actually think you'd have to "seed" a genetic algorithm with some more primitive cognitive constructs (algorithmic or heuristic) capable of being used to effect more generic sorts of learning -- and hope, by a judicious choice of primitives and operators, somehow to get extremely lucky with your evolution, so that there'd eventually be an emergent property that could be characterized as sentient. But my instinct is to doubt that you could do it that way.

It's always easier to design an agent program for one particular application "top-down" (algorithmically), or to create one with an intelligently-designed neural net that could be given lots of examples of the behavior it was supposed to exhibit, but then nobody would claim you'd managed to "design" sentience -- unless we really understood sentience, in which case seeking to harness the power of learning algorithms in the hope of creating it, on analogy with getting lightning to strike just the right amino acids in the primordial soup, would be a potential alternative approach, though, as I remarked earlier, one incredibly difficult to manage. If you *could* manage it, it would probably be because you'd been able to deploy evolutionary operators and heuristics for ranking successful children based on *some* real understanding of what sentience was. And if you had *that*, then resorting to the evolutionary approach would probably be rendered unnecessary.
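The evolve-and-prune loop described above might look like this in miniature. The agent representation and the fitness function are stand-ins (a real run would score each child by the employer responses it elicits), so every name and number here is hypothetical:

```python
import random

# Toy version of the evolutionary loop described above. An "agent" is
# reduced to a vector of numeric parameters; fitness() is a stand-in for
# deploying the agent and counting responses from prospective employers.

def fitness(agent):
    # Hypothetical objective: parameters near 0.7 are "most effective".
    return -sum((p - 0.7) ** 2 for p in agent)

def mutate(agent, scale=0.1):
    # "Algorithm perturbation": jitter each parameter to produce a child.
    return [p + random.gauss(0, scale) for p in agent]

def evolve(generations=50, branching=10, keep=3):
    population = [[random.random() for _ in range(5)] for _ in range(keep)]
    for _ in range(generations):
        children = [mutate(p) for p in population for _ in range(branching)]
        # Most children are "junk DNA" throwaways; keep only the best few.
        population = sorted(children, key=fitness, reverse=True)[:keep]
    return population[0]

best = evolve()
print(round(fitness(best), 4))  # approaches 0 as the population improves
```

Nothing in this loop is remotely sentient, which is the point of the surrounding paragraph: it optimizes whatever the fitness function measures, and nothing more.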
Also, the evolutionary approach would be incredibly costly computationally, and also in terms of "real-time", since we'd be interfacing with the "real" (or at least cyber-) world to judge the success of each of the progeny in each generation actually capable of outperforming its parent, though most would be "junk DNA" throwaways. But leave aside "real-time" considerations, and I still think the process would be too computation-intensive, though I'd have to make lots of assumptions about the nature of the seed programs and the transformational operators to get a mathematical handle on the cost.

And so on (blather, blather, blather). Sorry about the run-on, stream-of-consciousness explanation, but I hope you got the idea.

Anyway, one thing I would emphasize is that I don't think you get sentience by building it in several hundred pieces, each replicating some human cognitive skill, and then trying to assemble them, though it's certainly tempting to try that approach as a way of passing the Turing Test.

Regards, Peter

Eli (2009-04-21 22:46):

"Actually, I kind of just arbitrarily chose NP-complete problems as a lower boundary, since I haven't bothered to do the math, but it's obvious to me that the cost of designing a human-like brain of size "n" using genetic algorithms or neural networks would be at least that bad, and probably much worse."

This is the thing that I'm attracted to.
I wonder if we're even capable of doing the math at this point - it seems like we might not know enough about the brain to come up with anything other than a very rough estimate (although, on the other hand, even a very rough estimate could in theory distinguish between polynomial time and not).

"Not sure exactly how you mean, but practically every combination and variation of the existing, well-established technologies does seem to me to have been tried, if for no other purpose than to churn out another dissertation. :)"

Fair point! I'm not actually entirely sure what I mean, but it would have to be something like having a genetic algorithm to manage several other ones. In other words, even if you have really, really good learning algorithms running in parallel as part of one (pseudo-)consciousness, that won't even approximate what we experience, because its learning will be partitioned. (This would be like learning the formulae of calculus and Newtonian physics but never being able to see the connection.) But the way we break down those partitions seems also to be governed by a self-correcting algorithm or heuristic - I'm just not entirely sure, as I said, how to formalize it.

Anyway, Kurzweil apparently thinks we'll have reached an important juncture in this by 2029, which should be well within my lifetime. I guess I'll just have to be patient!

PK (2009-04-21 17:22):

>From reading wiki, it seems that we can't solve NP-complete problems in their generalized formulations - is that right?

We can't solve NP-complete problems of size n, where n is *large*, in a reasonable amount of time. As an example, consider the knapsack problem.
You are presented with a pile of rocks of varying weights and asked to fill a knapsack with a selection of rocks whose total weight is exactly x. Obviously, it's a combinatorially explosive problem, since you have to consider all the possible subsets of rocks, and that number grows as 2^n, where n is the number of rocks in the pile (and related problems where orderings matter grow as n!, n factorial, which is worse still). Problems with this kind of growth are intractable because, whereas we can solve them for small cases (5 or 10 rocks), the value of, say, 100! is 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000, which, even with a very fast robot (or simulated with a very fast processor at 1 case per nanosecond), is just too many seconds. By contrast, an example of a polynomial problem would be one whose computational cost was n-squared: 100^2 is 10,000, which is very doable.

>In other words, this isn't just an academic matter relating to the philosophy of mind but also a real opportunity to add something to the human arsenal, so to speak.

Actually, I kind of just arbitrarily chose NP-complete problems as a lower boundary, since I haven't bothered to do the math, but it's obvious to me that the cost of designing a human-like brain of size "n" using genetic algorithms or neural networks would be at least that bad, and probably much worse. But you're right: if you can even come up with a polynomial solution to NP-complete problems -- leave aside replicating sentience artificially -- you ought to be in line for at least two or three Nobel Prizes for what that would be worth in advancing all the sciences (except that they don't give Nobel Prizes in math or CS).

>If that's the case, I'm not totally sure why it would be the case that NP-completable algorithms (is there a less unwieldy term for this?) are necessary for consciousness or sentience.

They might not be.
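The knapsack arithmetic above is easy to check directly. A quick sketch (the rock weights and target are arbitrary): brute-force subset search blows up exponentially with the number of rocks, while a polynomial cost like n^2 stays tame and n! is worse than either:

```python
from itertools import combinations
from math import factorial

# Brute-force "fill the knapsack to exactly x": examine every subset of
# rocks, so the work grows as 2^n with n rocks. Weights are arbitrary.

def exact_fill(weights, target):
    for r in range(len(weights) + 1):
        for subset in combinations(weights, r):
            if sum(subset) == target:
                return subset
    return None

rocks = [3, 7, 12, 18, 25, 31, 44]
print(exact_fill(rocks, 50))       # a subset summing to exactly 50
print(100 ** 2)                    # polynomial: 10000 -- very doable
print(2 ** 100)                    # exponential: ~1.27e30 cases
print(factorial(100) > 10 ** 157)  # factorial is worse still: True
```

At one case per nanosecond, 2^100 cases still takes on the order of 4 * 10^13 years, so the exponential version is already hopeless long before the factorial one enters the picture.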
I was saying only that the cost of generating such algorithms using only neural nets or genetic algorithms would *itself* be NP-complete (or worse).

>And: what about other animals? It seems, on reflection, like kind of an absurdly high bar to set for ourselves that if we cannot simulate human consciousness then we have failed.

Well, we've been moving incrementally in that direction, and have made phenomenal strides, but no one claims to have created sentience. I'm not setting any bars, nor would I feel we had wasted our time if we didn't actually produce a sentient machine. I've always thought it was interesting and gratifying just to be able to replicate human problem-solving processes, and to write programs that could solve problems that humans sometimes couldn't.

>And: have we ever tried to connect several genetic algorithms?

Not sure exactly how you mean, but practically every combination and variation of the existing, well-established technologies does seem to me to have been tried, if for no other purpose than to churn out another dissertation. :)

Eli (2009-04-21 13:18):

I really hope none of this is an original thought...

From reading wiki, it seems that we can't solve NP-complete problems in their generalized formulations - is that right? In other words, this isn't just an academic matter relating to the philosophy of mind but also a real opportunity to add something to the human arsenal, so to speak. If that's the case, I'm not totally sure why it would be the case that NP-completable algorithms (is there a less unwieldy term for this?) are necessary for consciousness or sentience.

And: what about other animals?
It seems, on reflection, like kind of an absurdly high bar to set for ourselves that if we cannot simulate human consciousness then we have failed.

And: have we ever tried to connect several genetic algorithms? A major part of our consciousness, it seems, is analogizing and approaching learning in a fundamentally multidisciplinary fashion. I wouldn't be at all surprised, then, if only something very advanced but weirdly consciousness-empty could be made from just one genetic algorithm (that is, just an algorithm or a group thereof designed to address one problem in one way): this would be like a hyperbolic case of autism, in a way. But chaining them together somehow might at least grant the appearance of consciousness, might it not?

The internet, it occurs to me, would be the best possible resource to try something like this. Since so much of our lives now take place online, it seems like we ought (in theory) to be able to code one genetic algorithm that applies for jobs (altering the job postings to which it applies and maybe even its "credentials"), another that looks for "satisfying" ways to spend money once it "has" a job (based, say, on given personality traits), a third that navigates social networks hunting for friends, another for managing memory, and so on. (Let's say, just to make things easy, that a person steps in to provide at least some of the linguistic help for this.)

Has anyone ever tried something like this? Would it be at all feasible (assume for the sake of argument that someone would pay you to try)? If I were a genius programmer this seems like something I might want to experiment with, but unfortunately I'm a really terrible programmer, so I can only throw ideas around to see what sticks.

PK (2009-04-21 12:00):
Hi Larry,

>Ah! See, this is the sort of thing that I would like to see more of: though I don't know what NP-completeness is (I remember hearing the term a lot from my compsci friends in college but never got the details), it's at least a concrete, well-understood thing that we can track. As for computational power, I thought that had been taken care of, hadn't it? They keep saying that computers will soon be able to match the human brain in computing power - but maybe I'm misremembering that?

There is a class of problems known as "NP-complete" (nondeterministic polynomial-time complete), two of the more common of which are the "knapsack problem" and the "travelling salesman problem", which are thought to be computationally intractable, though this has yet to be proven conclusively. What's interesting is that it *has* been proven that a polynomial-time solution to *any* of these problems would solve all of them, since each has been shown to be reducible to the others. (I used to teach a course called "analysis of algorithms" in which most of the focus was on methods of finding the algebraic expressions that describe the amount of time or space required to execute a given program as a function of the size of its input, which can be interesting if, for example, the program in question is recursive in a complicated way.) Anyway, you're spot on to identify this as a tangible, concrete way of determining whether something is doable.

People can argue this, predicated on their assumptions about what the particulate units of memory and of computation are in the human brain, but you're also right in thinking that, if we haven't already grossly exceeded the capacity of the human brain in storage capacity and computational power (and I think we have) on one computer or a network of them, then we're certainly playing in the same ballpark.
So that wasn't what I meant by saying that we lacked the computational power to "build" a brain of power equivalent to our own using genetic algorithms or neural networks, which probably wouldn't be the most efficient way to go anyway. To resort to a (somewhat lame) analogy: I might have enough resources to run or to build a car (if I knew how to do it beforehand), but I wouldn't have the resources to build a General Motors factory that would in turn build a car, especially not if that factory were staffed by 10,000 monkeys connecting machinery by trial and error. :) The monkeys would have better luck reproducing the first draft of Hamlet. Anyway, the point is that, although it may be theoretically possible to do something in a certain way, it can probably be shown that that method would take effectively forever.

Regards, Peter

Eli (2009-04-21 07:38):

"By way of an example, one not-very-new, but often effective, AI technology called "genetic algorithms" [description etc.]. I'm not sure that could be described as 'design.'"

For what it's worth, *I* would describe it that way. I've seen people play around with this to make faces and stuff, and from what I've seen this fits what I call "design." But the actual answer is less important to me than the question, because the question applies equally to machines and biological creatures, and I'm not sure how much people realize that. (You do, obviously, but...)

"Whether they could produce a network that had sentience as an emergent property, I really don't know, though I tend to doubt it on grounds of the computational intensity exceeding that for NP-complete problems that I think would be required."

Ah!
See, this is the sort of thing that I would like to see more of: though I don't know what NP-completeness is (I remember hearing the term a lot from my compsci friends in college but never got the details), it's at least a concrete, well-understood thing that we can track. As for computational power, I thought that had been taken care of, hadn't it? They keep saying that computers will soon be able to match the human brain in computing power - but maybe I'm misremembering that?

I think it's also helpful to recall that the Game of Life can in theory be used to make a universal Turing machine. (Dennett talks about this in *Freedom Evolves*; I can find the section if you like.) If we postulate that sentience (not even necessarily human sentience, just sentience) is replicable with a universal Turing machine, then we know for sure that the question comes down to computing power - and so on and so forth. I'd rather try to work up from preexisting notions than try to work from thought experiments, basically.

PK (2009-04-21 07:15):
Hi Paul,

Not Confucius, but along the same lines: "frogs at the bottom of a well see only a corner of the sky," and I tend to agree. It's not something I believe, but it certainly is what attorneys would call a "colorable argument" that, despite the apparent centrality of recursion to human intelligence, the ultimate recursive act of apprehension of the nature of human intelligence by that intelligence itself is somehow mathematically prohibited. Probably an argument along the lines of Goedel's, though Goedel's Theorem, as I've mentioned previously, only precludes comprehensiveness, not comprehension.

Anyway, the reason I'm writing is that Larry's comment (Hi Larry), that "robots...will in some way always have had a conscious designer," though intuitively reasonable, both is and is not really true. It's true in the most basic sense, in which the initiator of a process that results in something possibly unforeseeable can still be said to have "designed" it.

By way of an example, one not-very-new, but often effective, AI technology called "genetic algorithms" lets you solve problems, or even solve the problem of designing an algorithm, by means of a simulation of the evolutionary process, wherein you start almost ex nihilo, with the most generic sort of entity, and let it evolve, influenced only by heuristics that encode some of the properties that you want to see in the final result. I'm not sure that could be described as "design." (I'm also very unsure that you could produce a sentient being using genetic algorithms, but leave that aside, since I'm equally unsure that you couldn't.)
If this were intuitively what most people meant by "design," then you wouldn't have all these religious fundamentalists still attempting to discredit the theory of evolution.<br /><br />And, of course, neural nets can "design" algorithms by learning from multiple examples of the behavior you want those algorithms to exhibit, or from multiple examples of other algorithms. (At a simpler level, and more usually, they just learn the i/o pattern, but they can also be made to produce algorithms encoded in network form.) Basically, they do what the brain does in the way of pattern learning, though I wouldn't claim they're sentient. Whether they could produce a network that had sentience as an emergent property, I really don't know, though I tend to doubt it, on the grounds that the computational intensity required would, I suspect, exceed even that of NP-complete problems. I'm generally reluctant to rule things out on the grounds of computational cost, but you'd have to if, for example, your best projection of the time required for the number of machine cycles running even on a quantum computer tended to exceed the life expectancy of the known universe.<br /><br />But then, you're right, Paul. I am (or used to be) an "expert," and though I *think* (even *believe* with a fairly high degree of certainty) that we can create sentient intelligence artificially, I really don't know exactly how, and I have no means of proving the proposition, other than to resort to intuition informed by my experience of designing programs that merely solved the task of emulating the problem-solving behavior of human intelligence. For none of those programs would I ever have claimed sentience.PKnoreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-25552828391023354542009-04-18T09:23:00.000+10:002009-04-18T09:23:00.000+10:00Hi Larry,
I'm not surprised you're confused. This...Hi Larry,<br /><br />I'm not surprised you're confused. This is an area of 'research' where we feel we are on the cusp, so 'we' (which includes academia, experts and ordinary laypeople like you and me) don't like to admit how ignorant we are. But I pretty well agree with everything you say here. <br /><br />Absolutely no shame in being confused - gets back to my quote from Confucius: 'To realise that you know something when you do, and to realise that you do not when you do not, this is knowing.' Stating the bloody obvious, I know, yet the complete antithesis of fundamentalism, which is blind certainty.<br /><br />Regards, Paul.Paul P. Mealinghttps://www.blogger.com/profile/14573615711151742992noreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-29766493750292274152009-04-18T04:59:00.000+10:002009-04-18T04:59:00.000+10:00Argh - my thoughts on this matter are too scattere...Argh - my thoughts on this matter are too scattered. It may take a few iterations to get this right, but here goes...<br /><br />First of all, what would it look like for the internet to emerge as a consciousness? I for one have no idea, but I don't see why we should expect such a consciousness to be like ours - the consciousness that we have, after all, is not at all like what our neurons have (assuming that consciousness supervenes on facts about neurons). It's this kind of thing that makes me suspicious: we seemingly cannot help but talk about the idea of consciousness as though it necessarily <I>feels</I> like it does to us.<br /><br />Another problem is that the robots, no matter how smart they get, will in some way always have had a conscious designer.
Even if Adam and Eve (or similar bots) get so advanced that they start coming up with radically different methods of generating hypotheses and/or tests, there'll always be some force to the reductionist argument that their intelligence is really just our intelligence with super-hyped-up computing power (and, therefore, not really machine consciousness at all).<br /><br />Still a third problem is how sensory data comes into this. There seems to be a disconnect between merely accepting data input from the exterior world and <I>sensing</I>, and this too might be relevant to the question of non-human/biological consciousnesses. Even without this we could make a machine capable of diagnosing its software or hardware and reporting when it's "sick" using empathetic language (e.g., "Quick, call the doctor - I'm in really bad shape!"), but I think most people would still say that it's not conscious. But is that really relevant? My intuition says it is, but I couldn't for the life of me say why.<br /><br />Like I said, this whole thing confuses me.Elihttps://www.blogger.com/profile/03543293341085230171noreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-3962084493701733952009-04-17T20:19:00.000+10:002009-04-17T20:19:00.000+10:00A post script to the last comment: check out the i...A post script to the last comment: check out the imbedded video.Paul P. Mealinghttps://www.blogger.com/profile/14573615711151742992noreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-90816130540122370542009-04-17T19:35:00.000+10:002009-04-17T19:35:00.000+10:00I don’t know if you’re still following this Larry,...I don’t know if you’re still following this Larry, but the Adam and Eve robots that you referenced earlier are covered in last week’s <EM>New Scientist</EM>, with a bit more detail. Basically the robot, Adam (Eve is yet to be built) uses the scientific method to gain the best results in a research project. 
This is how I would see AI developing, where a robot or computer does the routine work, while humans, in this case, scientists, do the higher-level thinking. In fact, that’s exactly what the article says: ‘Adam, Eve and their ilk could soon automate routine and time-consuming scientific chores, leaving human scientists free to make higher-level, creative leaps.’ But the researchers also add: ‘Ultimately the robots may even be capable of conducting independent research.’<br /><br />In an aside, the article also refers to work being done at Cornell University in Ithaca, New York, ‘[researchers] have developed software that can observe physical systems and independently identify the laws that underlie them.’ Using an ‘evolutionary algorithm’ to generate equations and test them against hard data, ‘the computer produced an equation that described conservation of angular momentum.’<br /><br />So I accept that there are many ways to get ‘intelligent’ machines to mimic humans, even in scientific research.<br /><br />Refer: <A HREF="http://www.newscientist.com/article/dn16890-robot-scientist-makes-discoveries-without-human-help.html" REL="nofollow">Robot Scientists</A><br /><br />Regards, Paul.Paul P. Mealinghttps://www.blogger.com/profile/14573615711151742992noreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-36629303540407162552009-04-15T13:18:00.000+10:002009-04-15T13:18:00.000+10:00As you can see I'm slow on the uptake. I am to...As you can see I'm slow on the uptake. I am too literal sometimes. Even the Peter, Paul & Mary thing sank in belatedly.<br /><br />I was nearly going to mention 42 (it crossed my mind), so you must have got through to me subliminally; I had no idea why it popped into my mind.<br /><br />I never read the book or saw the movie - I did hear the radio serial version once. But everyone knows all the quirks, so I should have picked it up.<br /><br />Regards, Paul.Paul P. 
Mealinghttps://www.blogger.com/profile/14573615711151742992noreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-11253569024825097072009-04-15T11:43:00.000+10:002009-04-15T11:43:00.000+10:00Hi Paul,
My comment about speaking "mouse" was a ...Hi Paul,<br /><br />My comment about speaking "mouse" was a joke -- not a reference to your (or anybody's) knowledge of languages. In Douglas Adams' <I>Hitchhiker's Guide to the Galaxy</I>, the Earth was created on the planet of Magrathea by a race of hyperintelligent, pan-dimensional beings whose 3-dimensional manifestation on our own planet was as mice, conducting diabolically devious experiments on psychologists who were under the misapprehension that *they* were experimenting on the mice. The rest of my post was likewise a reference to Adams' book. In the story, the computer known as "Deep Thought" took <I>7.5 million years</I> to come up with the answer to "life, the universe and everything." (The answer was 42.) I sincerely apologize if I somehow gave you the impression that I was commenting negatively on your knowledge of languages (about which I know only what you've told me, and couldn't care less; your English is flawless, and I don't see much reason for you to create a website in French or German or Tagalog) or of computers (about which I think your profession of ignorance couldn't possibly be less true, unless you just mean that you're not a C++ hacker). <br /><br />I really have to watch it with the flights of absurdity, or learn to use :) emoticons a lot more liberally.<br /><br />Regards, PeterPKnoreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-42575962169882835262009-04-15T08:34:00.000+10:002009-04-15T08:34:00.000+10:00Hi Journey Man,
Wanted to drop you a note about t...Hi Journey Man,<br /><br />Wanted to drop you a note about this week's episode of Motherboard (our tech show). Saw you had previously talked about him on the site so I thought this may be of interest to you.<br /><br />Ray Kurzweil tells us about his vision of the Singularity—a point around 2045 when computers will acquire full-blown artificial intelligence and technology will infuse itself with biology. His theories have all sorts of supporters, detractors, and critics, but do you even remember what life was like before three-year-olds had cell phones and you actually had to remember facts instead of relying on the internet? That was only 10 years ago. If Kurzweil is right, we'll have supercomputers more powerful than every human brain on the planet combined within a few decades.<br /><br /><A HREF="http://www.vbs.tv/video.php?id=19251860001" REL="nofollow">WATCH THE SINGULARITY OF RAY KURZWEIL ON VBS.TV</A><br /><br />You can also geek out and catch up on the previous Motherboard episodes like our interview with Richard Garriott, or the backyard rocketeer, or even robots with Professor Sankai.<br /><br />Thanks for watching! See you in 2045!<br /><br />RoryRorynoreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-41072527063808803502009-04-15T08:28:00.000+10:002009-04-15T08:28:00.000+10:00I have to say that my language abilities are very ...I have to say that my language abilities are very limited, despite learning French for 6 years at school. My father was a natural: though he had bugger all education he learnt German when he was a POW and never forgot it for the rest of his life.<br /><br />I'm computer illiterate as well.<br /><br />A bit of personal history.<br />Regards, Paul.Paul P. Mealinghttps://www.blogger.com/profile/14573615711151742992noreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-13124048176237135972009-04-15T02:11:00.000+10:002009-04-15T02:11:00.000+10:00Hi Paul,
Amazed you remember Samson and Delilah, ...Hi Paul,<br /><br />Amazed you remember Samson and Delilah, which I'd practically forgotten, despite my fondness for the group. I think they're most commonly remembered for "Puff, the Magic Dragon." I dreamed that I had misconstrued your reference to solipsism, but then I can't be sure that I'm not dreaming that I dreamed that, as I write this, or imagine that I'm writing it, or imagine that I'm having a hallucination in which, although I really am writing this, my experience of writing it is entirely manufactured and happens only accidentally to parallel the reality of the situation, although that reality is really just a "subroutine" running in "Deep Thought" on the planet Magrathea and I won't know the truth of the matter for just a bit of a while, now, but I promise I'll get back to you with a definitive answer (or imagine that I'm getting back to you -- pan-dimensionally, of course) in... oh, just about seven-and-a-half million years.<br /><br />Regards, Peter<br /><br />BTW, do you happen to speak <I>mouse</I>?PKnoreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-68580504092543663322009-04-14T22:32:00.000+10:002009-04-14T22:32:00.000+10:00Developed a stutter, or there's an echo in cybersp...Developed a stutter, or there's an echo in cyberspace.Paul P. Mealinghttps://www.blogger.com/profile/14573615711151742992noreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-18752120184720168582009-04-14T22:27:00.000+10:002009-04-14T22:27:00.000+10:00Actually was a fan of Peter, Paul & Mary - my ...Actually was a fan of Peter, Paul & Mary - my sister had an album of theirs. Can't remember the title, but I do know there was a song about Samson and Delilah, which I really liked.<br /><br />Of course the album had my initials on it: PPM.<br /><br />The point I make about solipsism is that in dreams it's really true, though few people seem to realise it until you point it out. 
<br /><br />Searle provides a brief discussion in <EM>Mind</EM> (no mention of dreams), explaining, logically, why anyone who claims to be a solipsist would not be believed, which is why no one ever does.<br /><br />He also gave the following anecdote attributed to Russell: 'I once received a letter from an eminent logician, Mrs Christine Ladd-Franklin, saying that she was a solipsist, and was surprised that there were no others.'<br /><br />You've probably heard it (or read it) before.<br /><br />Regards, Paul.Paul P. Mealinghttps://www.blogger.com/profile/14573615711151742992noreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-16598244681571862002009-04-14T18:11:00.000+10:002009-04-14T18:11:00.000+10:00Hi Paul,
Our posts "crossed in the mail."
As reg...Hi Paul,<br /><br />Our posts "crossed in the mail."<br /><br />As regards "solipsism," I was thinking more that, even if we could meet in person, and I knew for a fact that I wasn't asleep, and I could see you and hear your voice and shake your hand, I *still* couldn't preclude the possibility that all those experiences that appeared to be artifacts of my perception of a real and tangible external world, and of a real and conscious other person... were in reality nothing more than projections of my own consciousness, existing somehow alone in a void (or in something like "the matrix"). I'd consider that to be a low-order probability, but I couldn't absolutely prove that it wasn't true.<br /><br />For the record, though, I do believe that you exist as a separate, conscious and highly intelligent being 12,000 miles away. (I'm sure you'll be relieved to hear it. :))<br /><br />BTW, I've received your book, and reading it is next on my agenda.<br /><br />Regards, PeterPKnoreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-86336552836980197792009-04-14T17:10:00.000+10:002009-04-14T17:10:00.000+10:00Hi Paul,
With regard to "Mary," just a silly refe...Hi Paul,<br /><br />With regard to "Mary," just a silly reference to a popular American singing group of the 60's (Peter, Paul and Mary), and also suggested to me by my transient whimsical reflection on the "other" means by which replication of human consciousness can be achieved.<br /><br />Regards, PeterPKnoreply@blogger.comtag:blogger.com,1999:blog-3427479692989285926.post-4295294181284319522009-04-14T17:07:00.000+10:002009-04-14T17:07:00.000+10:00On the subject of solipsism, I have an opinion on ...On the subject of solipsism, I have an opinion on that as well. Solipsism occurs in dreams, and how do we tell the difference? Very easily: if I meet you in a dream you will have no memory of it, but if I meet you in the flesh then it's something we share and we both have a conscious memory of it. Remember that, without memory, we would have no self. What's the Bob Dylan lyric: 'I'll let you be in my dreams if I can be in yours.' I think that's how it goes. Didn't know Mr. Zimmerman was a philosopher, did you?<br /><br />Regards, Paul.Paul P. Mealinghttps://www.blogger.com/profile/14573615711151742992noreply@blogger.com