Technology

A.I. Developer Challenges Pro-Human Bias

destinyland writes "After 13 years, the creator of the Noble Ape cognitive simulation says he's learned two things about artificial intelligence: 'Survival is a far better metric of intelligence than replicating human intelligence,' and 'There are a number of examples of vastly more intelligent systems (in terms of survival) than human intelligence.' Both Apple and Intel have used his simulation as a processor metric, but now Tom Barbalet argues its insights could be broadly applied to real life. His examples of durable non-human systems? The legal system, the health care system, and even the internet, where individual humans are simply the 'passive maintaining agents,' and the systems can't be conquered without a human onslaught that's several magnitudes larger."
This discussion has been archived. No new comments can be posted.

  • But... (Score:1, Interesting)

    by Anonymous Coward on Thursday July 30, 2009 @07:07PM (#28889969)

    If you're going to approach the argument that way, you have to consider survival of the species rather than of the individual. On that metric human intelligence is clearly superior: modern humans have been around for a few hundred thousand years, versus at most a few thousand for our most enduring created systems.

  • by sugarmotor ( 621907 ) on Thursday July 30, 2009 @07:28PM (#28890225) Homepage

    Reading the article, I was struck by how well it explains why AI is not getting anywhere.

    By the author's criterion the solar system would take a huge number of people to shut down, and thus would be vastly more intelligent than any collection of surviving knives and forks used at AI conferences. I think that answers the author's other complaint as well:

    "There is a lack of scholarship in this area. This is, in large part, because most ideas about intelligence are deeply and fallaciously interconnected with an assumed understanding of human intelligence."

    Oh well,

    Stephan

  • Noble Ape FAQ? (Score:5, Interesting)

    by argent ( 18001 ) <peter@slashdot.2 ... m ['.ta' in gap]> on Thursday July 30, 2009 @07:44PM (#28890409) Homepage Journal

    The available documentation for Noble Ape is fairly shallow and opaque: it describes a simple scripting language and offers some high-level discussion about space, time, and so on... but that's about it. Where's the AI? How exactly does the model simulate an ape, and what's the relationship of the model to ApeScript? Where, in short, is the FAQ?

  • Re:He's too close. (Score:3, Interesting)

    by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Thursday July 30, 2009 @07:49PM (#28890469) Homepage
    Indeed, it appears to be the Captain Kirk method of winning the race to the first AI: win by changing the rules of the game.
  • by Anonymous Coward on Thursday July 30, 2009 @07:51PM (#28890497)
    Many people have commented that rocks, grey goo, or the solar system are better at survival than humans! I could choose to use a rock for making something, or simply destroy it. How long can it "resist" being used or being destroyed? If humans can master the planet's weather, even if it takes time, then yes, they are more intelligent. If humans are able to colonize the solar system, then yes, they are more intelligent than the solar system.

    Subconsciously, survival *is* our definition of intelligence. If someone is cheated, he is called a fool and the cheater is supposed to be smarter; and if the cheated remains gullible, his chances of survival are lower. Deception is a quality common to all humans: what we say is, most of the time, different from what we think. The Turing test is an accepted method for determining intelligent behavior, and what does it involve? Deception. The computer is supposed to deceive (and thus survive) the human by making him believe that it's not really a computer but a human being.
  • Re:He's too close. (Score:5, Interesting)

    by nine-times ( 778537 ) <nine.times@gmail.com> on Thursday July 30, 2009 @08:26PM (#28890845) Homepage

    You have a fair point, but there's the other side to things too. At least part of the reason some people have tried to redefine "intelligence" as something touchy-feely is that other people have tried to conflate "intelligent" with "good at math and science" and, in turn, with "worthwhile human being". Basically, some people who happen to score high on IQ tests are trying to push the idea that we need to let people with high IQs run the world, because they're better than everyone else. (Yes, I've scored pretty well on IQ tests when I've taken them, but no, I don't think they're a good measure of a person's worth.)

    But then on the other hand, there has been a tendency to restrict "intelligence" to the math/science arena much more than is proper, given what we really mean by "intelligence". We get wrapped up in testing how smart people are by testing their ability to take a square root in their head or by asking questions about geometry or science. You get a model of intelligence where Rain Man is smarter than us all.

    I think it's fair, though, to talk about "emotional intelligence" insofar as intelligence includes abilities that enable us to figure things out mentally. The ability to understand one's own mind, to understand social situations, and to navigate difficult interpersonal problems is within the realm of "intelligence". I would say that "street smarts" is a kind of intelligence. I've certainly known people who always aced all the tests in school but at the same time couldn't be trusted to cross a street without getting run over, because they were complete dumbasses. Because of that, I don't think it's right to say that "intelligence" is a simple one-dimensional scale, and it's certainly not something that's measured well by IQ tests.

    But anyway, I'm not sure any of this is what the author of this article has in mind (can't be sure, only RTFS). I think the idea is more like, "When thinking about intelligence abstractly, or in thinking about AI, we tend to assume that intelligence should be measured in a thing's ability to think about the things we think about the way we think about them. This might be a mistake." Imagine you had an alien intelligence that had no ears, only saw in X-rays, and had a thick hide that provided adequate shelter from the elements. Would you assume it was stupid because it didn't develop spoken language? If it hadn't made clothes for itself or built itself housing, would you assume that it was less intelligent than cave men?

    There's a strong philosophical argument that intelligence requires some kind of motivation or drive. It might follow, then, that the measurement of intelligence ought to be in measuring the efficacy of satisfying that drive, rather than satisfying the drives of other beings (us).

  • A bit of a Summary (Score:5, Interesting)

    by digitally404 ( 990191 ) on Thursday July 30, 2009 @08:46PM (#28891007)
    Unsurprisingly, most of the people here haven't read, or perhaps not really absorbed, what TFA discusses, and are jumping to quick and irrelevant conclusions.

    The author explains that survival is a good metric of intelligence, and he uses humans as an example. One human can definitely kill one lion, bear, mosquito, or single bacterium if equipped with intelligently designed tools such as a gun, a mosquito zapper, or antibacterial soap. He uses these tools, intelligently, to kill one bear, and hence the human is more intelligent. However, if you take 10 bears, then sure, they may be able to kill the one human, but that means they are less intelligent, since it takes greater numbers.

    He simulates intelligence this way, and he defines a simulation as any environment with applied constraints, which may include the internet, the legal system, your neighbourhood community, and so on.

    So here's what he says: a system, such as the health care or legal system, will not be shut down by one person. In fact, it probably won't even be shut down by 10 people, maybe not even 100. Hence, the system is vastly more intelligent than a single human, precisely because we worked in numbers to evolve it.

    I think it's a very interesting way of looking at intelligence. Again, this is all based on Mr. Barbalet's assumptions. (A toy sketch of this counting idea follows below.)
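
    A minimal sketch of that counting idea, assuming a toy model with made-up damage and upkeep rates (none of this comes from Noble Ape itself): a system's "intelligence" is scored as the smallest number of attacking agents that can bring it down, so a system maintained by many agents scores far higher than a lone individual.

    ```python
    import random

    def system_survives(attackers: int, maintainers: int, steps: int = 1000) -> bool:
        """Toy model: attackers degrade a system's 'health' each step while
        maintainers repair it; returns True if the system outlasts the run."""
        health = 100.0
        for _ in range(steps):
            health -= attackers * random.uniform(0.5, 1.5)      # damage inflicted
            health += maintainers * random.uniform(0.05, 0.15)  # routine upkeep
            health = min(health, 100.0)
            if health <= 0:
                return False
        return True

    def survival_score(maintainers: int, max_attackers: int = 2000) -> int:
        """Barbalet-style score: the smallest onslaught that shuts the system down."""
        for attackers in range(1, max_attackers):
            if not system_survives(attackers, maintainers):
                return attackers
        return max_attackers

    print(survival_score(maintainers=1))     # a lone individual: roughly 1
    print(survival_score(maintainers=5000))  # a heavily maintained system: hundreds
    ```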
  • Language Problems (Score:1, Interesting)

    by Anonymous Coward on Thursday July 30, 2009 @08:59PM (#28891121)

    Intelligence here is a technical term that AI researchers use to describe the fitness of an agent in a simulation.
    He's not talking about brain activity or, as was mentioned, "the ability to retain and process information". He's talking about a history of actions that produced an agent with better fitness. It's not an attempt to get attention; it's simply someone in a highly technical field using terms in ways that laypeople don't.
    One of those dialect problems that scientists often have when trying to describe their ideas. (A toy illustration of fitness in this sense follows below.)
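
    As a toy illustration of "fitness" in that technical sense (a hedged sketch; the energy-budget environment and the two policies are made up for illustration, not drawn from any real benchmark): an agent is scored purely by how long it keeps itself alive, with no reference to how human-like it is.

    ```python
    import random

    def fitness(policy, episodes: int = 20, max_steps: int = 500) -> float:
        """Score an agent by its average survival time in a toy energy-budget world."""
        total = 0
        for _ in range(episodes):
            energy = 10.0
            survived = 0
            for _ in range(max_steps):
                if policy(energy) == "forage":
                    energy += random.uniform(0.5, 1.5)  # replenishes, on average
                energy -= 1.0                            # metabolic cost every step
                if energy <= 0:
                    break
                survived += 1
            total += survived
        return total / episodes

    cautious = lambda e: "forage" if e < 8 else "rest"
    reckless = lambda e: "rest"
    print(fitness(cautious), fitness(reckless))  # the forager far outlives the idler
    ```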

  • by BinaryX01 ( 1609025 ) on Thursday July 30, 2009 @09:20PM (#28891271)
    Survival is an instinct inherent in all living things, and survival is not measured by one's own ability alone but by the predators around one. Very few species (if any, save humans) will willingly destroy themselves or lie down to die when the opportunity to survive presents itself.
  • by Animats ( 122034 ) on Thursday July 30, 2009 @09:38PM (#28891403) Homepage

    There's something to be said for focusing on low-level survival issues, but one can do more than pontificate about it. As someone who's worked on legged locomotion control, I've made the point that a big part of life is about not falling down and not bumping into stuff. Unless you've got that working, not much else is going to work. Once you've got that working well, the higher level stuff can be added on gradually. This is bottom-up AI, as opposed to top-down AI.

    Bottom-up AI is now mainstream, but it was once a radical concept. I went through Stanford at the height of the expert systems boom in the mid-1980s, when abstraction was king and the logicians were running the AI programs. The "AI Winter" followed after they failed.

    Rod Brooks made bottom-up AI semi-respectable, but he went off onto a purely reactive AI tangent, with little insect robots. That works, but it doesn't lead to more than insect-level AI. My comment on this was that it was a matter of the planning horizon for movement planning. Purely reactive systems have a planning horizon of zero. That works for insects, because they are small and light, and can just bang feelers into obstacles without harm.

    As creatures get bigger and faster, they need a longer planning horizon. The minimum planning horizon for survival is your stopping distance. (This is explicit in autonomous vehicle work; a back-of-envelope sketch follows this comment.) Bigger and faster animals need better motion planners. This is probably what drove the evolution of the cerebellum, which is most of the brain in the mammals below the primates.

    I've had horses for many years; I was on horseback an hour ago. Horses are interesting in this respect because they're big and fast and have very good short-term motion planning, but they do little long-term planning. Horse brains have a big cerebellum and a small cortex, which is consistent with horse behavior. This gives a sense of what to aim for in bottom-up AI: good motion control and good vision, with work on the higher-level stuff deferred until the low-level stuff is nailed down.

    That's happening. The DARPA Grand Challenge, especially the 2007 Urban Challenge with driving in traffic, forced some work on the beginnings of short-term situational awareness. BigDog has some of the low-level motion control working really well, but BigDog isn't yet very good at picking footholds. They're just getting started on situational awareness. There's some good stuff going on in the game community, especially in programs that can play decent football. This problem is starting to crack.

    Short-term planning in these areas revolves around making predictions about what's going to happen next. The ability to ask "what-if" questions about movement before trying them improves survival enormously. This kind of planning isn't combinatoric, like traditional AI planning systems. It's more like inverting a simulation to run it as a planner.

    I have no idea how we get to "consciousness", but if we can get to horse-level AI, we're well into the mammal range. I encourage people to work on that problem. There's enough compute power to do this stuff now without beating your head against the wall over CPU time; there wasn't in the 1990s when I worked on this.
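
    A back-of-envelope sketch of the stopping-distance point above; the reaction time and deceleration values are illustrative assumptions, not measurements from any particular vehicle or animal.

    ```python
    def planning_horizon_m(speed_mps: float, reaction_s: float = 0.2,
                           decel_mps2: float = 4.0) -> float:
        """Minimum look-ahead for survival: distance covered while reacting,
        plus braking distance at constant deceleration (v*t + v^2 / 2a)."""
        return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

    for name, v in [("insect walking", 0.05),
                    ("horse at a gallop", 12.0),
                    ("car at highway speed", 30.0)]:
        print(f"{name}: ~{planning_horizon_m(v):.2f} m of look-ahead needed")
    ```

    The insect's horizon is effectively zero, which is why purely reactive control is enough for it; the horse and the car need progressively longer look-ahead, and hence more prediction.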

  • Re:He's too close. (Score:3, Interesting)

    by 'nother poster ( 700681 ) on Thursday July 30, 2009 @10:15PM (#28891643)

    Well, the problem with the rock is it doesn't survive, because it isn't alive.

    That said, all of the Intelligences he mentions are at best Meta-Intelligences. He refers to how many humans it takes to stop a system, but doesn't take into account how many it takes to maintain the system. It takes tens of thousands of people to take down the healthcare system, but there are millions of people supporting it.

  • Re:Bad metric (Score:3, Interesting)

    by SleepingWaterBear ( 1152169 ) on Friday July 31, 2009 @01:18AM (#28892645)

    "Survival is a terrible metric of intelligence. By that standard, lions and tigers and bears are the most intelligent species on the planet."

    Lions and tigers are near extinction across much of their range, and bear populations are well down in most inhabited areas where bears used to live, so by that standard they aren't intelligent at all. Survival rates for large predators just aren't very good in the modern world.

    Now you might have pointed out that rats, raccoons, pigeons, and cockroaches are pretty intelligent by the survival metric.

  • by bytesex ( 112972 ) on Friday July 31, 2009 @05:53AM (#28893905) Homepage

    The problem with the Loch Ness monster (as with Bigfoot and the Yeti) is that, if they really resemble their known species colleagues (lizards and apes), they need at least a thirty- or forty-year cycle to reproduce, and they will have a lifespan of around a hundred years. Since they've been 'seen' for more than a hundred years, they must have had children; since there were children, they must have had mates; and since they had mates, they must have had fathers, mothers, and children of their own. By and large, that means at least one family of at least six members, but much more likely (to keep the gene pool a bit fresh) several tens of members at the very least. Now you can hide one Bigfoot in the hills, and one Nessie in a lake, but thirty?
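
    Putting rough numbers on that argument, using only the commenter's own assumptions (the exact figures are illustrative):

    ```python
    import math

    sighting_span = 120     # years of reported sightings ("more than a hundred years")
    generation_time = 35    # "at least a thirty or forty year cycle to reproduce"

    # Generations needed to span the sighting record, with two parents per generation:
    generations = math.ceil(sighting_span / generation_time)   # -> 4
    bare_minimum = 2 * generations                              # -> 8 individuals

    # Any buffer against inbreeding multiplies that into the tens:
    with_gene_pool_buffer = bare_minimum * 4                    # -> 32 individuals

    print(generations, bare_minimum, with_gene_pool_buffer)
    ```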

  • by AlecC ( 512609 ) <aleccawley@gmail.com> on Friday July 31, 2009 @06:44AM (#28894137)

    Nothing like twisting the language to force a point. We have different words for "survival" and "intelligence" because they are different things. Redefining one to mean the other does not contribute to the discussion. It may be that Artificial Survival is a better goal for research than Artificial Intelligence - the point could be argued. But this semantic redefinition assumes that argument won, and claims victory in an Orwellian manner by redefining the language to state that victory has been won.
