Technology

A.I. Developer Challenges Pro-Human Bias 234

destinyland writes "After 13 years, the creator of the Noble Ape cognitive simulation says he's learned two things about artificial intelligence. 'Survival is a far better metric of intelligence than replicating human intelligence,' and 'There are a number of examples of vastly more intelligent systems (in terms of survival) than human intelligence.' Both Apple and Intel have used his simulation as a processor metric, but now Tom Barbalet argues its insights could be broadly applied to real life. His examples of durable non-human systems? The legal system, the health care system, and even the internet, where individual humans are simply the 'passive maintaining agents,' and the systems can't be conquered without a human onslaught that's several magnitudes larger."
This discussion has been archived. No new comments can be posted.

  • Banks (Score:2, Funny)

    by Rendonsmug ( 1516589 )
    The banking system is another example of a system much better than human intelligence for survival and resilience. Oh wait...
    • Re:Banks (Score:5, Insightful)

      by johnsonav ( 1098915 ) on Thursday July 30, 2009 @06:52PM (#28890525) Journal

      The banking system is another example of a system much better than human intelligence for survival and resilience. Oh wait...

      It persuaded us to save its "life", didn't it?

      • The banks "persuaded" "us," didn't they?

      • If we parse "survival" to mean something like "adaptation", then I think we're on to something. Sheer survival doesn't imply intelligence. As a Gedankenexperiment, consider that the Loch Ness Monster really exists, and exists solely because it's living in a place (the deep waters of Loch Ness) where it doesn't have to adapt, because it's very seldom preyed on, or seen. Then, all of a sudden, an adventurer finds a way to track it. Game over. Now, if the Loch Ness Monster could figure out a way to hide or get
        • Re: (Score:3, Interesting)

          by bytesex ( 112972 )

          The problem with the Loch Ness monster (as with Bigfoot and the Yeti) is that, if they really resemble their known species colleagues (lizards and apes), they need at least a thirty- or forty-year cycle to reproduce, and they will have a lifecycle of around a hundred years. And since they've been 'seen' for more than a hundred years, they must have had children, and since there were children, they must have had mates, and since they must have had mates, they must have had fathers, mothers, children, and by-and-l

  • Bad metric (Score:4, Insightful)

    by dgatwood ( 11270 ) on Thursday July 30, 2009 @06:06PM (#28889955) Homepage Journal

    Survival is a terrible metric of intelligence. By that standard, lions and tigers and bears are the most intelligent species on the planet.

    • by SomeJoel ( 1061138 ) on Thursday July 30, 2009 @06:08PM (#28889983)

      Survival is a terrible metric of intelligence. By that standard, lions and tigers and bears are the most intelligent species on the planet.

      They were, then we started shooting them. Who's the smartest one now, bitches?

      • by Thiez ( 1281866 ) on Thursday July 30, 2009 @06:23PM (#28890163)

        MRSA is, of course.

        Or maybe a species that we can't afford to exterminate. Bees or spiders maybe? Or perhaps a species of bacteria important to our digestion? When there are two species X and Y, and X could in theory slay Y, but cannot live without Y, while Y can live without X but cannot slay X, which one is 'smarter'?

        • by Will.Woodhull ( 1038600 ) <wwoodhull@gmail.com> on Friday July 31, 2009 @12:41AM (#28892737) Homepage Journal

          MRSA. That's an interesting thought.

          But I think normal human G.I. flora are much more intelligent than any variety of Staph aureus. These colonies have surrounded themselves with incredibly complex biological organisms that have actually demonstrated the ability to surround themselves with non-biological constructions, which have even allowed some of the G.I. colonies to travel off planet.

          Now maybe some of you don't buy that line of reasoning. Well, just think about this: All those reports of alien abductions where the humans experienced anal probes? Obviously the aliens are attempting to communicate with the G.I. flora who are the truly dominant species of Earth.

      • by rtb61 ( 674572 )

        Shooting them, hmm, but that is the intelligence of a society, not an individual. So that is the flaw in the thinking: survival of the fittest, in terms of humanity, is not in regard to individual humans but in regard to humans within humane societies. Inhumane societies always die; they inherently destroy themselves, i.e. prey upon each other, or are eliminated by more humane societies that contain large numbers of individuals who are willing to sacrifice their own personal advantage to promote group advantage.

    • by bennomatic ( 691188 ) on Thursday July 30, 2009 @06:09PM (#28889991) Homepage

      By that standard, lions and tigers and bears...

      <Dorothy>Oh my!</Dorothy>

      • by dgatwood ( 11270 )

        Yeah, a friend of mine is playing the Tin Man in The Wizard of Oz at Cabrillo College, and a bunch of us went to see it on Sunday. Apparently, it is bleeding into my Slashdot postings....

    • Why is intelligence even a metric? By sheer numbers and biomass, prokaryotes rule the planet, and all us blubbery multicellular types are parasitic hangers-on.

      • by Eudial ( 590661 )

        Well, if your aim is to develop artificial intelligence, intelligence is probably a pretty good metric to determine how well you've performed the task you set out to do.

        • Re: (Score:3, Insightful)

          by johnsonav ( 1098915 )

          Well, if your aim is to develop artificial intelligence, intelligence is probably a pretty good metric to determine how well you've performed the task you set out to do.

          Well, that seems a little too easy. Now all we need is a definition of "intelligence" we can all agree on...

          • And I will always disagree with some people, no matter what, so... ;)

      • Why is intelligence even a metric?

        It is important if you want Earth-based life to survive more than the next ~5 billion years, which is roughly when the sun runs out of fuel... think long term!

        • by Thiez ( 1281866 )

          Why would we want that? None of us is going to be around by then, and we probably wouldn't recognize our descendants, if we don't go extinct long before that time.

    • Re:Bad metric (Score:5, Insightful)

      by MrMista_B ( 891430 ) on Thursday July 30, 2009 @06:21PM (#28890127)

      You mean stupid. Most lions and tigers are endangered, if not close to extinction, and bears aren't too well off either.

      A better example would be insects, like mosquitoes.

      • by dcw3 ( 649211 )

        Think someone forgot about cockroaches.

      • by jvkjvk ( 102057 )

        No, he's being a prime example of why it isn't true.

        6+ billion and able to survive anywhere, including vacuum, but we still have comments like the GP.

    • If you read the article you'll discover that it's not just survival. It's the number of humans required to circumvent survival. One human can kill a lion, tiger, or bear, although it would really depend on their level of technology, which kind of points out a major difficulty with his argument - there isn't a "normalized" version of human intelligence against which to measure.
    • Re: (Score:3, Insightful)

      by Hurricane78 ( 562437 )

      Depends on what you define as "intelligent".

      Survival is the metric for success. And if you are the one surviving, you define what "intelligent" means.

      Try doubting it from your grave. ^^

      And (our) insect( overlord)s by far rule this world. Their only problem: They don't know what "define" means. ;)

    • by Krneki ( 1192201 )
      Sharks man, freaking sharks.

      Evidence for the existence of sharks extends back over 450–420 million years, into the Ordovician period, before land vertebrates existed and before many plants had colonized the continents.
      http://en.wikipedia.org/wiki/Shark#Evolution
    • Survival is a terrible metric of intelligence. By that standard, lions and tigers and bears are the most intelligent species on the planet.

      No, by that metric lions, tigers, and other mammals are ankle-biters compared to lizards, birds, amphibians, insects, and fish. In fact, octopuses and sharks have a much longer track record for survival than these mere mammal upstarts. In evolutionary terms, the mammals have not yet proven anything, other than a slight improvement over non-bird dinosaurs. Though I will certainly agree with you that it's a bad metric. I ain't bowing to bacteria and weeds in terms of intelligence.

    • Actually, the planet itself is the most intelligent thing around. I agree. Conflating intelligence and survival rate is not very intelligent.

    • When was the last time you woke up thinking "I hope I don't get eaten today" or "I hope I can kill myself some food today so I don't starve"? Seeing as you have the time to comment on /., I'd assume never. Why? Because society provides you everything you need. What other species on this planet has such a complex society that ensures the survival of its members? None.

      Look at it this way - even with such a low rate of reproduction (unlike insects or bacteria as others suggest), humans have been able to pop
    • by syousef ( 465911 )

      Survival is a terrible metric of intelligence. By that standard, lions and tigers and bears are the most intelligent species on the planet.

      I've seen plenty of morons survive and even prosper.

    • Survival is a terrible metric of intelligence. By that standard, lions and tigers and bears are the most intelligent species on the planet.

      All three of those are threatened by human activity, so no, by that metric they are not the most intelligent species.

    • By definition, if it doesn't survive, no longer exists, then it didn't work, did it?

      An excellent example of "artificial intelligence" is my pure-bred Golden Retriever. He's very, very smart. He's sweet, loving, and is amazingly responsive to voice tones, gestures, and the like. He knows exactly what I mean when I point, snap my fingers, even tilt my head towards the door. I could swear up and down that he understands what I say, many times, and definitely not because he always does what I want!

      Yet, for all

    • Yeah, it didn't ring right with me either. He's basically saying: we shouldn't measure [human-style] intelligence by equating it to human intelligence.
      AI is not about making something that can survive, but rather making something that can get you a turkey on rye when you tell it "go make me a sammich!"
    • Re: (Score:3, Interesting)

      Survival is a terrible metric of intelligence. By that standard, lions and tigers and bears are the most intelligent species on the planet.

      Many species of lions and tigers are near extinction, and bear populations are well down in most inhabited areas where bears used to live, so by that standard they aren't intelligent at all. Survival rates for large predators just aren't very good in the modern world.

      Now you might have pointed out that rats, raccoons, pigeons, and cockroaches are pretty intelligent by the survival metric.

  • He's too close. (Score:5, Insightful)

    by Toonol ( 1057698 ) on Thursday July 30, 2009 @06:07PM (#28889967)
    By redefining intelligence to have nothing to do with what anybody means by intelligence, he can then claim that other systems exhibit more intelligence. Like a rock, presumably, since it survives far better than humans. I think this may be an example of somebody getting too interested in the specifics of tree-bark and forgetting about the forest.
    • Re:He's too close. (Score:5, Insightful)

      by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Thursday July 30, 2009 @06:38PM (#28890339)

      This seems to be a common mode of argument for people who for some reason don't like what people commonly mean by "intelligence", which is something closer to "critical thinking skills combined with ability to acquire, retain, and use information", but nonetheless like the aura of the term. There's been a decades-long wave of politically correct attempts to broaden intelligence to include other things, like "emotional intelligence", which might indeed be important, useful, and worthy of study, but aren't really what the word "intelligence" means, so should probably get new names instead of being shoehorned in there. Now we've got survivability, which is indeed an interesting trait of an organism, but is not in itself actually what anyone calls intelligence (though being more intelligent might help with survivability, at least in some contexts).

      It's a perfectly valid argument to say: look, I don't think intelligence is the most interesting property to study; here's this other property, which might overlap somewhat, but I argue is more interesting. But pretending that your new property is really intelligence is a weird sort of linguistic move, because your property is not what people use that word to mean.

      • Re:He's too close. (Score:5, Interesting)

        by nine-times ( 778537 ) <nine.times@gmail.com> on Thursday July 30, 2009 @07:26PM (#28890845) Homepage

        You have a fair point, but there's the other side to things too. At least part of the reason there's been an attempt to redefine "intelligence" as something touchy-feely by some people is that there's an attempt by other people to conflate "intelligent" with "good in math and science" with "worthwhile human beings". Basically some people who happen to score high on IQ tests are trying to push the idea that we need to let people with high IQs run the world, because they're better than everyone else. (Yes, I scored pretty well on IQ tests when I've taken them, but no, I don't think they're a good measure of a person's worth)

        But then on the other hand, there has been a tendency to restrict "intelligence" to the math/science arena much more than is proper, given what we really mean by "intelligence". We get wrapped up in testing how smart people are by testing their ability to take a square root in their head, or in asking questions about geometry or science. You get a model of intelligence where Rain Man is smarter than us all.

        I think it's fair, though, to talk about "emotional intelligence" insofar as intelligence includes abilities that enable us to figure things out mentally. The ability to understand one's own mind, to understand social situations, and to navigate difficult interpersonal problems is within the realm of "intelligence". I would say that "street smarts" is a kind of intelligence. I've certainly known people who always aced all the tests in school, but at the same time couldn't be trusted to cross a street without getting run over because they were complete dumbasses. Because of that, I don't think it's right to say that "intelligence" is a simple one-dimensional scale, and it's certainly not something that's measured well by IQ tests.

        But anyway, I'm not sure any of this is what the author of this article has in mind (can't be sure, only RTFS). I think the idea is more like, "When thinking about intelligence abstractly, or in thinking about AI, we tend to assume that intelligence should be measured in a thing's ability to think about the things we think about the way we think about them. This might be a mistake." Imagine you had an alien intelligence that had no ears, only saw in X-rays, and had a thick hide that provided adequate shelter from the elements. Would you assume it was stupid because it didn't develop spoken language? If it hadn't made clothes for itself or built itself housing, would you assume that it was less intelligent than cave men?

        There's a strong philosophical argument that intelligence requires some kind of motivation or drive. It might follow, then, that the measurement of intelligence ought to be in measuring the efficacy of satisfying that drive, rather than satisfying the drives of other beings (us).

        • What about defining intelligence as the ability to learn?
          That makes the most sense to me. That covers the alien with the x-ray vision and thick hide that can learn to scratch messages in the sand, and the baby that can't feed itself but can learn to speak. It should also cover the AI that can't form a coherent sentence to pass a Turing test (at the moment), as long as it's good at learning.

          Quit trying to make Adult AIs that seem smart and instead make an infantile one that seems like an imbecile but ca
          • That's a pretty good idea, but how can you measure a thing's "ability to learn" without trying to teach it some particular thing and seeing how quickly it learns an expected response?

            So in a simple example, you try to teach me geometry, and I don't learn. Am I therefore stupid? Maybe. Or maybe I'm not motivated to learn it. It's also possible that you're a bad teacher, or that I'm not good at math but I would be good at learning about other subjects. Or maybe-- and this is kind of a whacky thought--

      • by Draek ( 916851 )

        what people commonly mean by "intelligence", which is something closer to "critical thinking skills combined with ability to acquire, retain, and use information"

        Err, what exactly are "critical thinking skills"? That's one term I've never quite understood. And while acquiring and retaining information are easy to quantify, how do you measure its use?

        • Re: (Score:2, Informative)

          by Keynan ( 1326377 )

          Err, what exactly are "critical thinking skills"? That's one term I've never quite understood. And while acquiring and retaining information are easy to quantify, how do you measure its use?

          http://en.wikipedia.org/wiki/Critical_thinking [wikipedia.org]

          The article's a little difficult to read at first. So here goes: critical thinking is similar to cynicism in that you don't believe everything you're told. Rather, you question everything, but accept what is shown by the evidence. In turn, it is the ability to ask the correct

      • by lawpoop ( 604919 )

        There's been a decades-long wave of politically correct attempts to broaden intelligence to include other things, like "emotional intelligence", which might indeed be important, useful, and worthy of study, but aren't really what the word "intelligence" means, so should probably get new names instead of being shoehorned in there

        The concept of intelligence has had a problem since the get-go. Time was, intelligence meant the ability to do higher math, play chess, understand logic and reason, and all that. Only smart people could do it. Then we built computers that operated purely on logic themselves. They could reason, solve logic puzzles, do higher math, and routinely beat average humans at chess. People thought it was only a short time until robots dominated our society, and C3POs would be walking around everywhere. Then w

      • Re:He's too close. (Score:4, Insightful)

        by rlseaman ( 1420667 ) on Thursday July 30, 2009 @07:46PM (#28891009)

        There's been a decades-long wave of politically correct attempts to broaden intelligence to include other things, like "emotional intelligence", which might indeed be important, useful, and worthy of study, but aren't really what the word "intelligence" means

        Point taken, but you are confounding two separate issues yourself. The notion of Howard Gardner's so-called "multiple intelligences" is well presented in Stephen Jay Gould's book, "The Mismeasure of Man". Gould's thesis is that IQ is a meaningless measure, and that intelligence is a meaningless notion that doesn't correspond to a single measurable entity in the first place.

        You suggest a definition "critical thinking skills combined with ability to acquire, retain, and use information", but this begs the question by assuming its own premises. In the first place, you describe a composite entity comprising multiple skills (there's Gardner's multiple intelligences) as well as something ("ability to acquire, retain, and use information") that seems itself like a circular definition.

        So yes, there is a bit of academic sleight of hand in reusing the word "intelligence" to represent something other than "what people commonly mean", but the fundamental point is that what people commonly are trying to express is a bunch of hooey.

        That said, this statement from the referenced article: "survival is a far better metric of intelligence than replicating human intelligence" seems evolutionarily extremely suspect. Survival is the dependent variable in Natural Selection. Phenotypical traits like intelligence, whether multiple or singular, are the independent variables driving evolution.

        • Re: (Score:3, Insightful)

          by Trepidity ( 597 )

          I don't see it as some sort of prerequisite for a word that describes humans to describe a single entity that's empirically testable. People use phrases like "kind" and "loving" and "artistic" and "creative" to describe humans, even though there is probably no solid definition that's empirically testable. I'd still resist some scientist trying to take one of those terms and apply it to their own pet concept that happens to be empirically testable but isn't what the word actually means. Inventing new jargon,

          • Try the following exercise:

            In 10 minutes, come up with as many different ways as possible to use a 3' long bit of 2"x4" wood. Variations on a theme are not allowed - "Crushing a cockroach" and "Crushing an ant" would count as "smooshing little critters" for example.

            How many did you come up with?

            Then have many thousands of people world wide, of all genders, religions, socioeconomic status, cultures, etc. do the same exercise.

            You'll wind up with a pretty normal distribution after a while (and it may vary d

          • "pet concept that happens to be empirically testable but isn't what the word actually means"

            This assumes that there is a stable (if perhaps complex) concept denoted by the word - and that all share that concept. The whole point of this discussion is that some dispute the definition. The other traits you mention map pretty well onto Gardner's classifications - and just to select one, "love" has its own rich class distinctions from eros to agape. (See also "Galatea 2.2" by Richard Powers.)

            I believe Gardn

        • That said, this statement from the referenced article: "survival is a far better metric of intelligence than replicating human intelligence" seems evolutionarily extremely suspect.

          It should be suspect. The AI Developer is not talking about intelligence. He's talking about *Artificial* Intelligence. And it's within that context of wanting to create man-made intelligence that you should consider his desire to study the "intelligence" of living systems.

          A typical AI developer doesn't want to hear about AI in t

        • by Toonol ( 1057698 )
          Gould's thesis is that IQ is a meaningless measure, and that intelligence is a meaningless notion that doesn't correspond to a single measurable entity in the first place.

          I would completely agree with the sentiment that IQ is a vastly simplified measure, but it certainly isn't meaningless. Take a thousand people and separate them by IQ into top 33%, middle 34%, and bottom 33%. You'll find that the groups will correlate very strongly with wealth, social standing, and many other indicators of success.
      • Well put, and I agree. I would add that what we actually want out of artificial systems is some kind of combination of survivability and intelligence, and we don't want to go too far in either direction.

        "Too much survivability" would be where we can't shut the system down when it's not doing what we want it to, or being destructive. Too little survivability would be where the resources to keep it going exceed the benefit of the output it gives us.

        Now, how can you get too much intelligence? Well, if you t

      • So basically you are arguing:

        "You keep using that word. I don't think it means what you think it means." [translation into slashdot meme]

        Yeah, I agree. What TFA is doing is advancing an argument by redefining the language commonly used to frame the questions. And that's a particularly disgusting form of intellectual masturbation.

        AI could really benefit from some seminal thinking right now... but not that kind of seminal thinking.

    • Re: (Score:3, Interesting)

      by michaelmalak ( 91262 )
      Indeed, it appears to be the Captain Kirk method of winning the race to the first AI: win by changing the rules of the game.
    • Re: (Score:3, Interesting)

      Well, the problem with the rock is it doesn't survive, because it isn't alive.

      That said, all of the Intelligences he mentions are at best Meta-Intelligences. He refers to how many humans it takes to stop a system, but doesn't take into account how many it takes to maintain the system. It takes tens of thousands of people to take down the healthcare system, but there are millions of people supporting it.

    • by dbcad7 ( 771464 )

      To me, this sentence doesn't make sense: "Survival is a far better metric of intelligence than replicating human intelligence." One thing is a measure; the other is a method.

  • Obviously the solar system is the most intelligent of them all!

    I for one, welcome our planetary overlords.

    • Why do you suggest our solar system? Those black holes at the centers of galaxies have been around for far longer. I for one, welcome our new gravitational overlords.
  • He essentially seems to be arguing that grey goo is the pinnacle of AI.

    I much prefer the existing literature requiring that intelligence be an intelligence we can relate to as humans. Survivability is an interesting metric for creating more self-sustaining systems, but the goal of robotics should be fostering better knowledge and understanding of the universe. Searching for blind replication at the best rate possible just feels empty.

    • by Thiez ( 1281866 )

      > Searching for blind replication at the best rate possible just feels empty.

      I'm sure there is a joke in there somewhere about your mom claiming the opposite.

    • Searching for blind replication at the best rate possible just feels empty. Many a bachelor has discovered this; some decide that it's OK and continue, others go looking for a trap and inevitably fall into it face first. Thus the system of marriage continues.
    • by geekoid ( 135745 )

      Actually, silver time-traveling goo is the pinnacle of A.I.

      The goal of robotics should be to create a world where machines do all the hard physical work.

  • by liquiddark ( 719647 ) on Thursday July 30, 2009 @06:15PM (#28890065)
    First of all, he doesn't actually say much about the survival as intelligence idea beyond the positing of the notion itself. It gives him a nice way to consider survival and intelligence as linked systems, with the "survival" of a system (that definition alone gets pretty abstract) being measured in terms of the logarithm of the number of humans required to shut it down.

    He says you CAN consider the Internet, legal system, medical system, and others in terms of this notion, but doesn't get terrifically specific about it. He does, however, specifically state that road systems and the legal system are at least an order of magnitude more resilient than a human-level intelligence, which is nice, if you believe his examples are well-chosen. I'd be hard pressed to claim that they are.

    In other words, he sets up an interesting research topic and then between his own poor choice of phrasing, the multiple Singularity references which surround the article, and the /. article writers' need to get your attention, it suddenly becomes Human Intelligence Is Over.
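
A toy version of the log-of-headcount scoring liquiddark describes might look like the sketch below. Every system name and headcount in it is an illustrative guess, not a figure from the article or from Noble Ape.

```python
import math

# Hypothetical headcounts: how many humans it might take to "shut down"
# each system. Illustrative guesses only; the article publishes no such table.
shutdown_headcount = {
    "single human": 1,
    "road system": 10_000,
    "legal system": 100_000,
    "internet": 1_000_000,
}

for system, n in sorted(shutdown_headcount.items(), key=lambda kv: kv[1]):
    # Barbalet-style "survival" score: orders of magnitude of humans needed.
    print(f"{system:>13}: resilience ~ {math.log10(n):.1f}")
```

On this scoring, a system beats an individual human simply by requiring more attackers, which is exactly the conflation several replies in this thread object to.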
    • by GigsVT ( 208848 )

      In general, the only emergent behavior of systems like roads, power grids, and the Internet is novel ways to fail massively, usually in some unforeseen cascade.

    • It would seem that he's referring to slow-changing "durable" systems as having better "survival" than individual humans. Anyone who's ever "fought the system" already knows that it takes an incredible amount of effort to cause even the slightest change unless you already have the authority to change the system arbitrarily (e.g. legislators can pass a bill.)

      I think we've got another case of Slashdot story editors getting "creative" with the summary to attract readers. Now there's a system that's "resili

  • are better than mere Humans.

    Survival is *beyond* good and evil, anyone?

  • Wheee-oooooh!
    Imma go kill me a few dolphins and assert my intelligence fellas by surviving longer than them.
  • by sugarmotor ( 621907 ) on Thursday July 30, 2009 @06:28PM (#28890225) Homepage

    Reading the article, it struck me as a good explanation of why AI is not getting anywhere.

    By the author's criterion, the solar system would take a huge number of people to shut down, and thus would be vastly more intelligent than any collection of surviving knives and forks used at AI conferences. I think that answers the other complaint of the author as well,

    "There is a lack of scholarship in this area. This is, in large part, because most ideas about intelligence are deeply and fallaciously interconnected with an assumed understanding of human intelligence."

    Oh well,

    Stephan

  • Creativity short of schizophrenia is a better metric of *human* intelligence than survival or logic or spatial recognition or any of the rest of the mess that AI researchers try to measure intelligence with.

  • Noble Ape FAQ? (Score:5, Interesting)

    by argent ( 18001 ) <(peter) (at) (slashdot.2006.taronga.com)> on Thursday July 30, 2009 @06:44PM (#28890409) Homepage Journal

    The available documentation for Noble Ape is fairly shallow and opaque: it describes a simple scripting language and some high-level discussion about space, time, and so on... but that's about it. Where's the AI? How exactly does the model simulate an ape, and what's the relationship of the model to ApeScript? Where, in short, is the FAQ?

  • Well, then bacteria must be highly intelligent: not only do they have the greatest biomass and numbers on earth, they have almost certainly already traveled to other planets!

  • Survival is not a good measure of intelligence, but maybe we are seeing intelligence in terms too human. Maybe more than survival, what you should check is how it reacts and adapts to a new environment, to new things. In that sense, Law is definitely less intelligent than the Internet, as it is pretty slow and dumb at adapting to the reality created by the existence of the internet.

    Could a bee hive or an ant colony be treated as a separate intelligent entity? Probably that could fit better in the intelligence concept than
  • The health care system is durable? Clearly he isn't living in the real world...
  • A bit of a Summary (Score:5, Interesting)

    by digitally404 ( 990191 ) on Thursday July 30, 2009 @07:46PM (#28891007)
    Unsurprisingly, most of the people here haven't read, or perhaps not really absorbed, what TFA discusses, and are jumping to quick and irrelevant conclusions.

    The author explains that survival is a good metric of intelligence, and he uses humans as an example. One human can definitely kill one lion, bear, mosquito, single bacterium, etc. if equipped with intelligently designed tools such as a gun, a mosquito zapper, or antibacterial soap. He uses these tools, intelligently, to kill one bear, and hence the human is more intelligent. However, if you take 10 bears, then sure, they may be able to kill the 1 human, but that means they are less intelligent, since they take more numbers.

    He simulates intelligence this way, and he defines a simulation as any environment with applied constraints, and that may include the internet, legal system, your neighbourhood community, etc.

    So here's what he says: A system, such as the health care or legal system, will not be shut down by one person. In fact, it probably won't even be shut down by 10 people, maybe 100. And hence, the system is vastly more intelligent than a human, intrinsically, since we worked in numbers to evolve this system.

    I think it's a very interesting way of looking at intelligence. Again, this is all based on Mr. Barbalet's assumptions.
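
Read this way, the comparison reduces to a pairwise headcount. Here is a minimal sketch of that rule; the bear and system numbers echo the comment's own illustrations and are guesses, not data from Barbalet.

```python
# How many of A it takes to overcome one B. Illustrative guesses echoing
# the comment above, not measurements from the article.
matchups = {
    ("human", "bear"): 1,            # one tool-equipped human per bear
    ("bear", "human"): 10,           # the comment's guess: ~10 bears per human
    ("human", "legal system"): 100,  # people needed to shut the system down
    ("legal system", "human"): 1,    # the system "defeats" individuals easily
}

def verdict(a, b):
    """Barbalet-style rule: whichever side needs fewer of its own kind
    to defeat one of the other scores as 'more intelligent'."""
    need_ab, need_ba = matchups[(a, b)], matchups[(b, a)]
    winner = a if need_ab < need_ba else b
    return f"{winner} scores as more intelligent ({need_ab} vs {need_ba})"

print(verdict("human", "bear"))          # human (1 needed vs the bears' 10)
print(verdict("human", "legal system"))  # legal system (100 humans vs 1)
```

Note that the rule is purely about the numbers needed, which is why the reply below objects that it compares an individual organism with a massive collective system.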
    • by Bongo ( 13261 ) on Friday July 31, 2009 @03:47AM (#28893515)

      So here's what he says: A system, such as the health care or legal system, will not be shut down by one person. In fact, it probably won't even be shut down by 10 people, maybe 100. And hence, the system is vastly more intelligent than a human, intrinsically, since we worked in numbers to evolve this system.

      In philosophy (in a couple of books) there is a discussion about how various fields confuse individuals and systems. Like, Nature is a huge complex system, and man wouldn't survive without Nature; therefore Nature and the ecosystem are more important than Man. Therefore man is just another species, and man must learn his place and minimize his impact. Well, there is some truth to that, but the underlying confusion is that they're comparing an individual organism with a massive complex system, under the guise that the organism is just another complex system anyway. Similar confusions come up when people talk about whether society or the individual is more important. I had one Marxist tell me that I am "nothing" without society. Well, again there is some truth to that, but it is only partially true.

      It is not just that we don't like comparing ourselves to other more complex things, and feel uncomfortable about it. It is that these different things have some very different properties. An individual organism like a person has sentience and self-directed intentionality. Society doesn't have sentience (at most it exhibits "flocking" type behaviors) and an ecosystem doesn't have sentience (despite what some new agers claim about the planet being "conscious").

      And meanwhile, society has properties that can't be reduced to individual consciousness. We have the English Language, and you have to be born into or join a society of English speakers in order to learn it. We have ethical codes, which again are about social interactions. If I was the only person on the planet, the only being, there would be no need for ethics. They wouldn't exist without some sort of collective to bounce good and bad off of. And these social structures do indeed "last longer" than individuals, and can't be torn down by individuals, not because they are more intelligent, but simply because they exist in a different domain to the individual. They are a different side of the coin. They are distinct but related to the individual.

      But also notice that without individual minds interacting with each other, there would be no social system, no legal frameworks, no ethical codes. Just like you can't have an ecosystem without organisms interacting. And as everyone here is saying, if you start to mis-assign a quality that belongs to one domain (sentience, intentionality, intelligence) to a different domain (ecosystems, legal systems), you end up in weird and wrong places (but it's research, so who knows what might come of it).

      But it does end up looking like, because modeling human intelligence is so hard, we'll just change fields and start modeling systems instead, and you know, maybe we'll get somewhere with that, and nobody will notice we just changed our research area.

  • By this definition, a bacterial strain or virus would be considered the most intelligent thing on the planet.

  • by Twinbee ( 767046 ) on Thursday July 30, 2009 @07:55PM (#28891089)
    Uh oh, it's one of those hard-line relativist type rants again.

    One choice quote from the article:

    The same reason you get the opinion "The primacy of human intelligence is one of the last and greatest myths of the anthropomorphic divide."

    Okay, human intelligence may be fuzzy and difficult to objectively measure. But that applies to many things such as CPU speed, Kolmogorov complexity [wikipedia.org], how complicated a shape is, or how much heat/sound insulation a particular material provides. Even how good a piece of music/art is.

    They're tricky, but there's no doubt that exponentially low and high numbers can be given to each of those attributes.

    • I think you're right!

      Stephan

    • Even how good a piece of music/art is.

      Wrong. Art is whatever the market says it is. If the (or a) market says that a toilet is art, it's art. If the market says a 4x6 index card listing all the women I've had sex with is art, then it's art.

      Cage provided us with Silence as music. Lou Reed's Metal Machine Music is about an hour's worth of shrieking feedback. Merzbow has made a career out of sheer noise. There is no metric for that. It is whatever people want it to be and are willing to pay for. Art and Musi

      • by Twinbee ( 767046 )

        Yes and 4'33" was a real masterpiece. SURE it was. Sorry, I get quite agitated when I hear that used as an excuse that all music is equally as good.

        I think one of the reasons the intrinsic quality of a piece of art gets blurred is the memories and associations that it inspires. So it turns out that a piece of art may cause happiness indirectly (most of the examples you provide are perfect examples of that - it's not the music doing most of the talking).

        But on the other hand, a lot of music (and some

        • your "critiques" of music are useless. Music is completely culturally bound. Listen to classical Japanese court music or Noh Plays. To western ears, it's dissonant shriekery. To the Japanese ear, it is not.

          I didn't say 4'33" was some great masterpiece of performance. What I said was "Cage gave us silence", which is the opposite of complexity. For that reason and its historical role as the refutation / logical conclusion to serialism, yes it is a masterpiece of composition.

          You hold a typically parochial

          • Re: (Score:3, Insightful)

            by Twinbee ( 767046 )

            And now listen to some of the latest Japanese pop - don't you find it just a tiny bit odd how much like Western pop it is? (I enjoy all types of music, and think most, but not all, modern pop is crap, for the record.)

            Look I'm not saying I have the best taste in music in the world, and I've known people who at least partially subscribe to relativism and have *decent* taste in music. I've known the reverse too (people like myself, except with probably bad taste).

            But let's not start putting all music (or even a

      • Certainly the dadaists would like you to believe that. Some of us still have actual aesthetic standards though. I agree with my grandparent - just because it's difficult to precisely characterize doesn't mean there is no scale of relative merit in art.

  • Survival is an instinct inherent in all living things, and survival is not measured by one's own ability but by the predators around them. Very few species (if any save humans) will willingly destroy themselves or lie down to die when the opportunity to survive presents itself.
  • How do you use an ape-in-environment simulator to evaluate a microprocessor? You've got virtual apes running around on islands getting Hungry and Feared and Sexed.. how would you even introduce a processor design into that kind of simulation? Processors competing for electricity resources while getting Warmed and Turned Off?

    Using an evolutionary algorithm to optimize certain highly complex design elements makes sense, but roaming around an environment interacting and competing for resources optimizes apes
  • by Animats ( 122034 ) on Thursday July 30, 2009 @08:38PM (#28891403) Homepage

    There's something to be said for focusing on low-level survival issues, but one can do more than pontificate about it. As someone who's worked on legged locomotion control, I've made the point that a big part of life is about not falling down and not bumping into stuff. Unless you've got that working, not much else is going to work. Once you've got that working well, the higher level stuff can be added on gradually. This is bottom-up AI, as opposed to top-down AI.

    Bottom-up AI is now mainstream, but it was once a radical concept. I went through Stanford at the height of the expert systems boom in the mid-1980s, when abstraction was king and the logicians were running the AI programs. The "AI Winter" followed after they failed.

    Rod Brooks made bottom-up AI semi-respectable, but he went off onto a purely reactive AI tangent, with little insect robots. That works, but it doesn't lead to more than insect-level AI. My comment on this was that it was a matter of the planning horizon for movement planning. Purely reactive systems have a planning horizon of zero. That works for insects, because they are small and light, and can just bang feelers into obstacles without harm.

    As creatures get bigger and faster, they need a longer planning horizon. The minimum planning horizon for survival is your stopping distance. (This is explicit in autonomous vehicle work.) Bigger and faster animals need better motion planners. This is probably what drove the evolution of the cerebellum, which is most of the brain in the mammals below the primates.

    I've had horses for many years; I was on horseback an hour ago. Horses are interesting in this respect because they're big, fast, have very good short-term motion planning, but do little long-term planning. Horse brains have a big cerebellum and a small cortex, which is consistent with horse behavior. This gives a sense of what to aim for in bottom-up AI; good motion control, good vision, defer work on the higher level stuff until we have the low-level stuff nailed.

    That's happening. The DARPA Grand Challenge series, especially the Urban Challenge with driving in traffic, forced some work on the beginnings of short-term situational awareness. BigDog has some of the low-level motion control working really well, but BigDog isn't yet very good at picking footholds. They're just getting started on situational awareness. There's some good stuff going on in the game community, especially in programs that can play decent football. This problem is starting to crack.

    Short-term planning in these areas revolves around making predictions about what's going to happen next. The ability to ask "what-if" questions about movement before trying them improves survival enormously. This kind of planning isn't combinatoric, like traditional AI planning systems. It's more like inverting a simulation to run it as a planner.

    I have no idea how we get to "consciousness", but if we can get to horse-level AI, we're well into the mammal range. I encourage people to work on that problem. There's enough compute power to do this stuff now without beating your head against the wall on CPU time issues. There wasn't in the 1990s when I worked on this.
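
Animats' "minimum planning horizon is your stopping distance" point reduces to d = v^2 / 2a, and his "invert a simulation to run it as a planner" idea can be caricatured as a what-if braking check. A minimal sketch follows; the speeds, decelerations, and the one-dimensional braking-only model are illustrative assumptions, not his control code.

```python
def stopping_distance(speed, max_decel):
    """Minimum planning horizon for survival: d = v^2 / (2a)."""
    return speed ** 2 / (2 * max_decel)

def safe_to_proceed(speed, max_decel, obstacle_distance):
    """What-if planning in miniature: 'simulate' braking from the current
    speed and check whether we would stop before the obstacle."""
    return stopping_distance(speed, max_decel) < obstacle_distance

# Bigger and faster agents need a longer horizon (all numbers made up).
for name, v, a in [("insect", 0.05, 1.0), ("horse", 9.0, 4.0), ("car", 30.0, 7.0)]:
    print(f"{name}: at {v} m/s, must plan >= {stopping_distance(v, a):.2f} m ahead")

print(safe_to_proceed(speed=30.0, max_decel=7.0, obstacle_distance=80.0))  # True (~64 m < 80 m)
```

The simplification keeps his point intact: double the speed and the required horizon quadruples, which is why a purely reactive, zero-horizon controller that works for an insect fails for anything big and fast.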

  • AI is intelligence built by, and by logical extension, for artifice. You have a goal that needs to be met, and the only way to meet it is a self-aware mechanism. So long as it meets that goal, it has achieved its purpose. Any existence beyond that goal is pointless and self-defeating.

    Here's where your monkey meat kicks in and demands that there be a purpose to life above and beyond what you can know. In our case, as biological entities that have evolved over the course of a few billion years of advanced org

  • The legal system, the health care system, and even the internet, where individual humans are simply the 'passive maintaining agents,' and the systems can't be conquered without a human onslaught that's several magnitudes larger.

    Funny that you pick examples that wouldn't exist without the massive number of 'passive maintaining agents' that maintain them. Humans don't have to 'attack them' to destroy them; they just have to stop using them or maintaining them. Seems kind of silly to call them 'non-human dur

  • Sounds to me more like he's suddenly realised everyone is noticing the AI Emperor has no clothes and is panicking about future funding.

    Professor Penrose is right: Artificial Intelligence is impossible because we are fundamentally incapable of understanding human intelligence. Our brains are not predictable, clockwork, mechanical devices; they fundamentally depend on chaos.
