Software

Two AI Pioneers, Two Bizarre Suicides (427 comments)

BotnetZombie writes "Wired tells the quite sad but very interesting stories of Chris McKinstry and Pushpinder Singh. Initially self-educated, both had the idea of building huge fact databases from which AI agents could feed, hoping eventually to have something that could reason at a human level or better. McKinstry leveraged the dotcom era to grow his database. Singh had the backing of MIT, where he eventually got his PhD and was offered a professorship alongside his mentor, Marvin Minsky. Sadly, their personal lives were more troubled, and both stories end in tragedy."
This discussion has been archived. No new comments can be posted.


  • by tomhudson ( 43916 ) <barbara.hudson@b ... m ['son' in gap]> on Sunday January 20, 2008 @01:08PM (#22117514) Journal
    ... and we're coming for you next.
  • by KodaK ( 5477 ) <`sakodak' `at' `gmail.com'> on Sunday January 20, 2008 @01:10PM (#22117524) Homepage
    to make friends. :(
    • by seeker_1us ( 1203072 ) on Sunday January 20, 2008 @01:21PM (#22117604)
      Yes. This was a terribly sad article.

      I read this part

      While Singh was climbing the academic ladder at MIT, McKinstry was trying to put his life back together after spending two and a half months in jail. But the suicidal standoff had given him a new sense of purpose. He liked to think that the police robot had deliberately misfired its tear gas canisters in an effort to save him. "Maybe robots do have feelings," he later mused. By 1992, McKinstry had enrolled at the University of Winnipeg and immersed himself in the study of artificial intelligence.

      I mean... that's inspiring.

      And then, years later, he falls apart and kills himself on the web, abandoning his dream because of a fundamental flaw: he was a geek, but he didn't have business sense.

      That's about as close to Greek Tragedy as you can get.

      • by Lemmy Caution ( 8378 ) on Sunday January 20, 2008 @01:27PM (#22117656) Homepage
        I think the real flaw for both of them was profound emotional problems, not a lack of business acumen.
        • by RattFink ( 93631 ) on Sunday January 20, 2008 @01:50PM (#22117820) Journal
          I wouldn't call chronic physical pain in the case of Singh an emotional problem.
          • What about Alan Turing himself? Cyanide-laced apple to the face, dude.

        • One was a nutty kook.

          The other was an extremely smart and ambitious professor.

          One was mentally ill.

          The other had excruciating pain because of an injury.

          Other than one having delusions about AI and the other having useful ideas about AI, and both killing themselves, they're different.

          One killed himself because he was depressed and crazy and screwed up. The other was in horrible neurological pain.

          It is not uncommon for chronic pain patients to kill themselves. It's that bad.

          If they had lived, one would end up in
        • by Venik ( 915777 ) on Sunday January 20, 2008 @03:32PM (#22118790)
          There was a world of difference between Singh and McKinstry: one was an academic researcher and the other - an amateur with little theoretical training. But in the end, both of them got burned out by a task that turned out to be far more complex than anyone cares to admit. The only known working example of intelligence we can attempt to copy is our own. Creating AI with enormous databases of trivial knowledge is a completely preposterous idea: knowledge is the result of intelligence - not a source of it. One can't create a machine approximation of human intelligence without first understanding how human intelligence works on a physical level.
          • by unlametheweak ( 1102159 ) on Sunday January 20, 2008 @05:30PM (#22119876)

            Creating AI with enormous databases of trivial knowledge is a completely preposterous idea: knowledge is the result of intelligence - not a source of it.
            I think it is more akin to what the mapping of the human gnome project is. They were merely trying to map out the rules of "common sense" or "reasoning"; much like linguists map out the rules of language.

            It's a rather brute force way of gaining knowledge (well, in this case, for a computer system to gain knowledge). One may not necessarily gain more understanding of intelligence by doing this (much like one will not necessarily gain a better understanding of how to fight cancer just because one knows the DNA structure of a blood cell, for example). It is, however, a tool. If this "common sense" knowledge could be combined with neural networks (combining the knowledge with a mechanism to learn), then perhaps something useful may come of it. All AI systems (as far as I know) require the input of knowledge, like typing in the quality and quantity of weapons in a war game simulator for example. The difference being that their efforts were more grandiose than these more limited forms of AI.

            "Knowledge" itself is not the product of intelligence as you propose (although it can be). This knowledge already exists without human intervention. The phrase "Dogs have four legs" does not require a human brain for this fact to be true. The crux is having a computer system with this knowledge, and then developing a system to use this knowledge in an intelligent, human-like fashion.
            • by fractoid ( 1076465 ) on Monday January 21, 2008 @02:43AM (#22123892) Homepage

              I think it is more akin to what the mapping of the human gnome project is.
              Mapping human gnomes?
              ...first they came for the gnomes, but I did not cry out, because I was not a gnome. O.o
            • Re: (Score:3, Interesting)

              by Venik ( 915777 )
              Intelligence is a means of turning information into knowledge. A newborn child possesses none of the facts in your trivia database and yet he is already intelligent. You think dogs have four legs and you will program this fact into your AI machine. Imagine how many circuits it will burn out when it sees a three-legged dog.

              I see no logical connection between building a mega-database of basic facts and creating AI. Access to information is neither a prerequisite for intelligence, nor a source of it. You may s
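
              The three-legged-dog objection is the classic default-reasoning problem, and it has a standard (if partial) answer: treat class-level facts as defaults that specific facts can override. A minimal sketch, with invented names:

                  # Defaults with per-individual exceptions, so a three-legged
                  # dog doesn't "burn out circuits". Toy example only.
                  defaults = {"dog": {"legs": 4}}
                  individuals = {
                      "rex": {"is_a": "dog"},
                      "tripod": {"is_a": "dog", "legs": 3},  # the exception
                  }

                  def legs(name):
                      ind = individuals[name]
                      if "legs" in ind:                     # specific fact wins
                          return ind["legs"]
                      return defaults[ind["is_a"]]["legs"]  # else use the default

                  print(legs("rex"))     # 4
                  print(legs("tripod"))  # 3
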
              • Re: (Score:3, Interesting)

                Nearly all knowledge (instinct being the exception) that a child will have about the world around him is learned. A newborn, if you consider a newborn to be intelligent, already has built-in knowledge systems (the instincts that computer systems do not inherently have if they are likewise deprived of this knowledge).

                Intelligence is NOT a means of turning information into knowledge. Intelligence is the ability to learn (to put it simply; there are in fact different forms of intelligence. Ref: http://e [wikipedia.org]
                • Re: (Score:3, Interesting)

                  by Venik ( 915777 )
                  Whatever specific definition of intelligence you use, the bottom line is: it's an ability we are born with. As far as we know, computers do not have this ability built in. A child will be able to learn through interaction with the environment. There is no need for intelligent guidance or supervision. The quality of this learning process will be lower than with supervision, but it will occur and it will occur spontaneously. A computer cannot learn on its own because it does not possess whatever it is that is
        • by Lazerf4rt ( 969888 ) on Sunday January 20, 2008 @05:03PM (#22119642)

          I would be more specific than just to say, "profound emotional problems." I think the real problem (for both guys) was obsessive thinking. These guys lived in a non-stop world of abstractions, symbols, logic and ideas. And that's a useful world in many ways, but it's not the real world. The real world is the world you see, hear, taste, smell, feel & experience directly.

          Personally I think the best thing that could have helped these guys would have been to grasp the correct (or more correctly, one particular) definition of the word "meditation", and to practice that. This is the best medicine for any person with an out-of-control, overactive intellect. It bothers me a little that the people with the most aptitude for terms & definitions often go through life never learning this particular term & definition. I would guess that if you scanned their giant A.I. database for the word "meditation" you would find some reference to Descartes' essays, but nothing about the more practical meaning of the word.

          • Re: (Score:3, Interesting)

            by vertinox ( 846076 )
            These guys lived in a non-stop world of abstractions, symbols, logic and ideas. And that's a useful world in many ways, but it's not the real world. The real world is the world you see, hear, taste, smell, feel & experience directly.

            I disagree. Both are the real world because they affect each other. In a sense the world of abstractions, symbols, logic and ideas affects what you can see, hear, taste, etc and experience directly. Or better yet gives you control of what you experience... Like reading music
      • nope

        That's about as close to Geek Tragedy as you can get.

        robots won't get feelings until we can make them feel things first.
        • by CastrTroy ( 595695 ) on Sunday January 20, 2008 @01:36PM (#22117728)
            Why would you want to give robots feelings? I mean, the novelty would be great, but the whole point is to make robots that do our bidding, not ones that go around moping half the time. Telling the computer to render some 3D movie and having it tell you it doesn't feel like it today is not the way I want my computer to act.
          • by Kingrames ( 858416 ) on Sunday January 20, 2008 @03:24PM (#22118712)
            Don't talk to me about life.
          • WISE OLD BIRD: [clivebanks.co.uk] Now listen. Our world suffered two blights. One was the blight of the robot.
            ARTHUR: Tried to take over did they?
            WISE OLD BIRD: Oh my dear fellow, no, no, no, no, no. Much worse than that. They told us they liked us.
            ARTHUR: No?!
            WISE OLD BIRD: Well, it's not their fault, poor things, they'd been programmed to. But you can imagine how we felt, or at least our ancestors.
            ARTHUR: Ghastly.

            Eerily prescient, that. AI is "Clippy" - the computer guesses what you are trying to do and tries to help you,

          • Because if a robot had feelings, it could determine its own behavior. The great DA solved this puppy long, long ago:

            The scientists at the Institute thus discovered the driving force behind all change, development and innovation in life, which was this: herring sandwiches. They published a paper to this effect, which was widely criticized as being extremely stupid. They checked their figures and realized that what they had actually discovered was `boredom', or rather, the practical function of boredom. I

          • Re: (Score:3, Funny)

            why would you want to give robots feelings? I mean the novelty would be great, but the whole point is to make robots that do our bidding, not ones that go around moping half the time.
            I think Marvin [wikipedia.org] would disagree, you androidaphobe!
          • Re: (Score:3, Interesting)

            by sydneyfong ( 410107 )
            It is my gut feeling that if we want "intelligent" robots, they must be imbued with some kind of "feeling". Either real or just apparently real (that's another philosophical question for another day)

            When we really think about it, we don't really recognize intelligence unless the systems are sufficiently close to what we feel emotionally. In a functional sense, all systems that we wish to evaluate for intelligence take some "input" and produce some "output". Obviously we don't classify as "intelligence" any
      • He liked to think that the police robot had deliberately misfired its tear gas canisters in an effort to save him. "Maybe robots do have feelings," he later mused.

        I mean... that's inspiring.


        Inspiring... batshit crazy... either/or.

      • That's about as close to Greek Tragedy as you can get.

              So would that be Geek Tragedy?
      • Re: (Score:2, Funny)

        by mixenmaxen ( 857917 )
        I think you mean Geek tragedy
      • Re: (Score:2, Funny)

        by Chrutil ( 732561 )
        >> That's about as close to Greek Tragedy as you can get.

        Indeed. This Geek Tragedy is only an 'r' away from being Greek.
        • Re: (Score:3, Funny)

          by AJWM ( 19027 )
          This Geek Tragedy is only an 'r' away from being Greek.

          So then, Geek Tragedy is like Greek Tragedy but without the pirates?
      • by wdhowellsr ( 530924 ) on Sunday January 20, 2008 @04:30PM (#22119344)
        There is a thin line between genius and insanity; I know I've spent the last forty-two years on both sides. The bottom line is that this world sucks in a really big way and if you don't have some sort of anchor you're screwed. Whether it is God, family or friends, you will need them, because if you are blessed or cursed (depending on how you look at it) with almost supernatural technical insight, you will also be troubled by the pure insanity of this world.

        If it has not already happened it will no doubt happen eventually that one of our fellow slashdotters will be a serial killer or a victim of suicide. The only hope is to find some non-technical, non-computer, non-geek outlet for the fact that we are human and need what everyone else needs.

        P.S. If you ever think you are going insane or have nothing to live for just check yourself voluntarily into the local mental health facility. I can guarantee you that within four hours you will realize:

        1) That you are sane.
        2) That there are worse things than being smarter than most people.
        3) That you never want to go back.

        P.P.S.

        Would you believe that they show horror movies on Halloween night in mental hospitals?
        • Re: (Score:3, Insightful)

          by Lord Ender ( 156273 )

          There is a thin line between genius and insanity; I know I've spent the last forty-two years on both sides. The bottom line is that this world sucks in a really big way and if you don't have some sort of anchor you're screwed.

          You are absolutely right. I, too, am depressed. But, like you, I have an anchor to help me hold on in the form of a delusion of superintelligence. It always brightens my mood to get on the tubes and tell everyone in the message board how much smarter I am compared to them.

        • Re: (Score:3, Insightful)

          by jellomizer ( 103300 ) *
          Well, the problem with genius (or more likely just slightly-above-average intelligence, where 45% of the population is smarter than you) that makes the insanity line thin is the fact that you can see all the problems in the world. Combined with hubris, you see your solutions as the only way to fix them. You get frustrated when you see all the problems and most people won't listen to you, while your hubris makes sure you won't listen to them. Causing a sense that you are helpless in a
  • McKinstry was a kook (Score:5, Informative)

    by Anonymous Coward on Sunday January 20, 2008 @01:11PM (#22117536)

    Check out the flamewars in the wpg.general newsgroup. McKinstry ("McChimp") was a liar and self-promoting ass until he took off from Winnipeg leaving debt in his wake. He was not a visionary, he was a drug-addled delusional kook. Hell I remember his bogus "OxyLock" protection scheme which, like any protection scheme, utterly failed.

    disclosure: I'm in a few of the usenet posts as he and I were about the same age and grew up in the same city.

    • by tomhudson ( 43916 ) <barbara.hudson@b ... m ['son' in gap]> on Sunday January 20, 2008 @01:25PM (#22117632) Journal

      The basic premise is flawed.

      After a few months, however, McKinstry abandoned the bot, insisting that the premise of the test was flawed. He developed an alternative yardstick for AI, which he called the Minimum Intelligent Signal Test. The idea was to limit human-computer dialog to questions that required yes/no answers. (Is Earth round? Is the sky blue?) If a machine could correctly answer as many questions as a human, then that machine was intelligent. "Intelligence didn't depend on the bandwidth of the communication channel; intelligence could be communicated with one bit!" he later wrote.

      According to that criterion, a dead-tree book is "intelligent."

      Intelligence requires more than the ability to answer "yes" or "no". Sometimes, the intelligent answer is "maybe". Sometimes, it's "I don't know." And, ironically, sometimes it's "fuck off and die."

      Classic example of a question that can't be properly answered by a yes or no: "Do you still beat your wife?" Intelligence goes beyond simple logic.
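
      For reference, the test itself is trivial to sketch. A hypothetical harness (the items and the agent below are invented stand-ins) that scores an agent on yes/no propositions, the way MIST was described:

          # Sketch of a Minimum Intelligent Signal Test harness: score an
          # agent on yes/no propositions only. Items here are made up.
          items = [
              ("Is Earth round?", True),
              ("Is the sky blue?", True),
              ("Do dogs have five legs?", False),
          ]

          def mist_score(agent, items):
              """Fraction of yes/no items the agent answers correctly."""
              return sum(1 for q, truth in items if agent(q) == truth) / len(items)

          # An "agent" that answers yes to everything scores 2/3 on this set,
          # which is why MIST needs a large item set balanced between yes and
          # no, so that chance performance sits at 50%.
          print(mist_score(lambda q: True, items))

      Which is also where the objection above bites: a big enough lookup table scores perfectly, exactly like the "dead-tree book".
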

      • Re: (Score:3, Funny)

        by Splab ( 574204 )
        You just answer "mu".
      • Re: (Score:3, Insightful)

        by ShieldW0lf ( 601553 )
        Classic example of a question that can't be properly answered by a yes or no: "Do you still beat your wife?" Intelligence goes beyond simple logic.

        What if the answer is "Yes, I'm still beating my wife." or "No, I've stopped beating my wife."?

        Clearly, you didn't think this through very far...
        • It does have proper yes/no answers, but they don't cover every possibility, unless 'Have you ever beaten your wife?' was an earlier question that filtered who gets 'Do you still beat your wife?'
        • Re: (Score:3, Insightful)

          by dissy ( 172727 )

          Classic example of a question that can't be properly answered by a yes or no: "Do you still beat your wife?" Intelligence goes beyond simple logic.

          What if the answer is "Yes, I'm still beating my wife." or "No, I've stopped beating my wife."?

          Actually, that question has many answers, which yes and no do not even cover properly.

          'Yes' - Yes, I still beat my wife
          'No' - No, I no longer beat my wife

          'No' - No, I don't beat my wife, and never did (communicated poorly, thus a wrong answer)
          'Yes' - Yes, I beat my wife now, but never did before (also communicated poorly)

          'No, and I never did' - the second 'no' above, communicated right, but using more than yes/no
          'Yes, but I never have before' and
          'Yes, and always have'

          Then there's
          'no' / 'no, I have no wif

          • Re: (Score:3, Insightful)

            by ShieldW0lf ( 601553 )
            If the context is such that the question was nonsense before you finished asking it, then there are no right answers, because it's not a question, it's gibberish. If it wasn't nonsense, it's a simple yes or no question. This isn't some deep secret of the universe you're talking about here... you're setting up a straw man.
        • Re: (Score:3, Insightful)

          by tomhudson ( 43916 )

          Actually, you just reminded me of another ability of intelligence - deceit. True intelligence must be capable of recognizing lies. It pretty much follows that it must be capable of lying itself, if only as a defense against lies.

          Otherwise, it leaves itself open to easy attack and destruction, which isn't intelligent at all.

          An intelligent system would be capable of trolling. A truly intelligent one would enjoy it!

          The idea that a database of answers could in any way be intelligent is fundamentally f

          • Re: (Score:3, Interesting)

            by ShieldW0lf ( 601553 )
            Actually, you just reminded me of another ability of intelligence - deceit. True intelligence must be capable of recognizing lies.
             
            That's nonsense. You can fail to acknowledge that there are any other sentients out there to lie to you and still be intelligent and self-aware. Dogs don't even understand our language, they clearly cannot tell when we are lying, yet they have intelligence. Humans raised wild are another example of the same.
            • by Tomy ( 34647 )

              Yes, but is that the kind of intelligence you want to model? Furthermore, dogs learn, so they're not just relying on a database of facts, they can add items and update items on their own. My dogs know the word "walk" so I would have to spell it out to my wife, "Do you want to take the dogs on a W-A-L-K?" They eventually learned that this also meant "walk."

              "Joe has a degree in CS" may be false today and true at a later time. The ability to update your own database or "opinions" over time may exclude large po
            • by account_deleted ( 4530225 ) on Sunday January 20, 2008 @03:17PM (#22118636)
              Comment removed based on user account deletion
              • by ShieldW0lf ( 601553 ) on Sunday January 20, 2008 @03:26PM (#22118736) Journal
                My dog eats his own shit. You call that intelligence?

                Yes. He can't pick it up and take it away because he has no hands, and if he leaves it in the wrong place, he knows predators will find his regular haunts, so he buries it when he can or eats it when he can't. Same thing as cats who eat hairballs. It's an example of him recognizing that the shit piles are long term risks to his survival and taking steps to preserve himself.
            • by tomhudson ( 43916 ) <barbara.hudson@b ... m ['son' in gap]> on Sunday January 20, 2008 @03:33PM (#22118804) Journal

              "Dogs don't even understand our language, they clearly cannot tell when we are lying"

              You clearly don't have enough experience with dogs. They can tell. Eventually, they can even figure out the word "bath" if we spell it instead of saying it. They understand the difference between "do you want to go outside" and "you're not going outside", and between "come get a treat" and "come get a cookie". Bear doesn't like the treats, but he likes chocolate chip cookies. He knows the difference between "treat" and "cookie". Toby clearly understands "don't go in the garbage", but he still sneaks into it when he thinks he can get away with it, and he pretends nothing's wrong up to the moment of discovery, at which point he KNOWS he's been busted, even before I say anything.

              There was a cat that temporarily had a limp. It got more attention when it was limping, so if anyone was watching, it limped. As soon as it thought nobody was watching, it walked perfectly normally. Even cats know how to lie, and can do it intentionally.

          • The idea that a database of answers could in any way be intelligent is fundamentally flawed.

            The database of Cyc and other AI systems does not contain just answers. It contains the basic "understanding" so that it can read and comprehend other materials, such as encyclopedias, that contain the answers. Cyc has had the ability to sit and ponder over what is in its knowledge base and ask questions to get clarification and further understanding. It still is a long way from strong AI, though.
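
            The "sit and ponder" part amounts to running inference rules over the knowledge base until nothing new follows. A generic forward-chaining sketch (this is the shape of the idea, not Cyc's actual engine or its CycL language; the rules and facts are invented):

                # Generic forward chaining: apply if-then rules to the fact
                # base until it stops growing.
                facts = {("socrates", "is_a", "human")}
                rules = [("human", "mortal"), ("mortal", "thing_that_dies")]

                def forward_chain(facts, rules):
                    changed = True
                    while changed:
                        changed = False
                        for premise, conclusion in rules:
                            for (x, rel, cat) in list(facts):
                                new = (x, "is_a", conclusion)
                                if rel == "is_a" and cat == premise and new not in facts:
                                    facts.add(new)
                                    changed = True
                    return facts

                print(sorted(forward_chain(facts, rules)))
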

      • Classic example of a question that can't be properly answered by a yes or no: "Do you still beat your wife?"
        Bad example [metacafe.com].
      • You misunderstood the criterion. The test is meant to take the "appearance" of the bot out of the test. Take the set of all yes/no questions which a human being can answer. If the bot can answer all these questions too, it is intelligent. The test does not involve other questions, such as questions which don't have a clear answer or questions which are not boolean. The idea behind that modification is that cognition, not articulation, is intelligence and that making a person believe that a bot is a human (l
    • by freeweed ( 309734 ) on Sunday January 20, 2008 @02:05PM (#22117930)
      Couldn't have said it better myself. I knew Chris for a few years back in the day (even stayed at his house on Maryland a few times), and you nailed it. He was a drug abusing paranoid kook who videotaped CNN 24 hours a day and watched it on fast-forward to see if anything the US government was doing might be affecting him. He was your stereotypical geek who never got past his teenage pathos of "the MAN is trying to get me" - and as such, pretty much refused to get any real sort of work after a while. He just moved on to scamming people. Leaving behind debt is an understatement.

      He did have access to some pretty potent LSD, though. Before knowing him, I always thought LSD was pretty harmless, but with the quantities that man could ingest, I now wonder if permanent brain damage kicks in. And he loved to combine it with a little coke - or whatever other easily accessible drug was around.

      Funny, the last I had heard about him was his mindpixel scam. Which made me chuckle a lot, because very few people seemed to catch on that the entire project was just the ravings of a drug-addled lunatic.

      I didn't realize he finally offed himself. I say finally because everyone who knew him expected it "any day now" - since at least the early 90s. I'm rather astounded he held on so long.
      • Re: (Score:3, Interesting)

        by MousePotato ( 124958 ) *
        He used to post here a lot too. He did some interesting stuff even while the Mindpixel project was going on. One of the last jobs he had was driving the VLT in Paranal, Chile, and working on the databases there. I always thought his posts here were interesting. When I learned he offed himself, though, it was not a surprise. He had, in the past, posted many times about earlier attempts at suicide and bouts of depression throughout his life.
        • Re: (Score:3, Interesting)

          by freeweed ( 309734 )
          Yeah, as further indication of his paranoia... when I had commented about knowing him in a previous Slashdot story a few years back, he got, shall we say, VERY interested in finding out who I was. To the level of hounding me about it. I think he suspected me of being a CIA plant or something. It REALLY bothered him to not be able to connect some random Slashdot UID to an IRL name. :P
    • Hell I remember his bogus "OxyLock" protection scheme which, like any protection scheme, utterly failed.

      It must have failed incredibly hard, because the only relevant hit on Google for "oxylock protection scheme" is the parent post. Just googling for "oxylock" brings up loads of pages about quick-release couplings for oxygen cylinders, nothing about any kind of protection scheme.

      Just sayin'...
  • From TFA:

    All you have to do is try to [imagine] Slashdot without the moderation system to see what's going to happen to your database.
  • All that intelligence. All that education. Lifetimes spent in an unceasing uphill struggle to help mankind take the next great technological leap forward...ended in an instant to provide fodder for a /. joke.

    Gotta love it.

  • One always had suicidal thoughts. The other had excruciating back pain.
  • by Otter ( 3800 )
    The link in the story isn't working for me; this [wired.com] does.

    Previewing ... now that one doesn't work either but this [wired.com] does.

  • by krnpimpsta ( 906084 ) on Sunday January 20, 2008 @01:39PM (#22117738)
    I can't remember the name, but there was this one sci-fi story about the human race being grown by a superior species. In the same way that we would grow bacteria in a petri dish and put a ring of penicillin around it to kill all bacteria that try to leave that specific area, we were also being confined. But we were confined intellectually - our penicillin was "the discovery of an invisible nuclear shield" that could protect against a nuclear blast. In the story, every scientist who came close to this discovery would commit suicide. The story follows one particularly brilliant scientist who easily solved the problem, but was consumed by an irresistible urge to kill himself once he figured it out.

    Anyone remember the name of that story? Or was it a book? I don't remember.. but it's pretty interesting to think about - especially if AI researchers begin to have a statistically higher probability of suicide.

    Maybe this is our penicillin?
  • Sounds familiar; maybe the AI was setting them up, like in the "Kill Switch" episode of The X-Files.

  • by TheLink ( 130905 ) on Sunday January 20, 2008 @01:46PM (#22117786) Journal
    The problem with the "emergent intelligence" from lots of "neural networks" approach is even if it works you often don't really know why it works (or whether it's really working the way you want) - it's more a probability thing.

    The idea that a neural network given a "large enough corpus" can resemble a human being might be true. But a "long enough dead end" could look like a highway. Then again we are probably dead ends too, and so it's more a matter of which one goes on for longer ;).

    My other objection to such approaches is, if you wanted a nonhuman intelligence from neural networks that you don't really understand (the workings of), you can always go get one from the pet store.

    As it is, the biotech people probably have a better chance of making smarter AI than the computer scientists working on AI - who appear to be still stuck at a primitive level. But both may still not understand why :).

    Without a leap in the science of Intelligence/Consciousness, it would then be something like the field of Alchemy in the old days.

    I am not an AI researcher, but I believe things like "building a huge corpus" are wrong approaches.

    It has long been my opinion that what you need is something that automatically creates models of stuff - simulations. Once you get it trying to recursively model itself (consciousness) and the observed world at the same time AND predict "what might be the best thing to do" then you might start to get somewhere.

    Sure pattern recognition is important, but it's just a way for the Modeller to create a better model of the observed world. It is naturally advantageous for an entity to be able to model and predict other entities, and if the other entities are doing the same, you have a need to self model.

    So my question is how do you set stuff up so that it automatically starts modelling and predicting what it observes (including self observations)? ;)
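
    One stripped-down reading of that "Modeller" idea: keep a running prediction of what comes next, and treat prediction error as the thing to minimize. A toy sketch (everything below is invented for illustration; a single scalar estimate stands in for a real simulator):

        # Toy "modeller": predict the next observation in a stream and
        # track how surprised the model is. A real version would model
        # the world, other agents, and itself with something far richer.
        def run_modeller(observations, lr=0.1):
            prediction, errors = 0.0, []
            for obs in observations:
                errors.append(abs(obs - prediction))   # surprise = model error
                prediction += lr * (obs - prediction)  # nudge model toward obs
            return prediction, errors

        stream = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3]  # some observed signal
        final, errs = run_modeller(stream)
        print(f"final prediction {final:.2f}, mean error {sum(errs) / len(errs):.2f}")
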
    • Caveat: I'm not an AI researcher either, and what I do know of AI is enough to convince me never to go into the area in any serious way.

      The problem with the "emergent intelligence" from lots of "neural networks" approach is even if it works you often don't really know why it works (or whether it's really working the way you want) - it's more a probability thing.

      The idea is that the probability thing _is_ the reason why it works: intelligence goes way beyond the abilities of reductive reasoning to figure

    • The problem with the "emergent intelligence" from lots of "neural networks" approach is even if it works you often don't really know why it works (or whether it's really working the way you want) - it's more a probability thing.

      The idea that a neural network given a "large enough corpus" can resemble a human being might be true. But a "long enough dead end" could look like a highway. Then again we are probably dead ends too, and so it's more a matter of which one goes on for longer ;).

      That was kind of my thought too. I saw

      huge fact databases from which AI agents could feed, hoping to eventually have something that could reason at a human level or better

      and said, insensitively, "Okay, so he thought of an idea that sounds like crap to begin with, hasn't produced any AI-level results beyond 'neat', and probably won't ever produce any results."

      I don't want to trivialize their deaths, but let's not equate respect for the dead with the merit of their ideas.

  • by xC0000005 ( 715810 ) on Sunday January 20, 2008 @01:46PM (#22117788) Homepage
    Chris was best remembered on K5 for his article on how exciting it was to see what a cat sees by chopping the eye out and wiring it up. I suggested that he perform a simpler test - fill the cat's bowl with food and set the bowl down. If the cat sees the bowl and comes, we know what the cat can see - its food bowl. No cats were harmed in the making of my experiment. Despite this, it was still informative.
  • It's discouraging (Score:5, Informative)

    by Animats ( 122034 ) on Sunday January 20, 2008 @01:54PM (#22117846) Homepage

    It's discouraging reading this. Especially since I knew some of the Cyc [cyc.com] people back in the 1980s, when they were pursuing the same idea. They're still at it. You can even train their system [cyc.com] if you like. But after twenty years of their claiming "Strong AI, Real Soon Now", it's probably not happening.

    I went through Stanford CS back when it was just becoming clear that "expert systems" were really rather dumb and weren't going to get smarter. Most of the AI faculty was in denial about that. Very discouraging. The "AI Winter" followed; all the startups went bust, most of the research projects ended, and there was a big empty room of cubicles labeled "Knowledge Systems Laboratory" on the second floor of the Gates Building. I still wonder what happened to the people who got degrees in "Knowledge Engineering". "Do you want fries with that?"

    MIT went into a phase where Rod Brooks took over the AI Lab and put everybody on little dumb robots, at roughly the Lego Mindstorms level. Minsky bitched that all the students were soldering instead of learning theory. After a decade or so, it became clear that reactive robot AI could get you to insect level, but no further. Brooks went into the floor-cleaning business (Roomba, Scooba, Dirt Dog, etc.) with the technology, with some success.

    Then came the DARPA Grand Challenge. Dr. Tony Tether, the head of DARPA, decided that AI robotics needed a serious kick in the butt. That's what the DARPA Grand Challenge was really all about. It was made clear to the universities receiving DARPA money that if they didn't do well in that game, the money supply would be turned off. It worked. Levels of effort not before seen on a single AI project produced some good results. Stanford had to replace many of the old faculty, but that worked out well in the end.

    This is, at last, encouraging. The top-down strong AI problem was just too hard. Insect-level AI, with no world model, was too dumb. But robot vehicle AI, with world models updated by sensors, is now real. So there's progress. The robot vehicle problem is nice because it's so unforgiving. The thing actually has to work; you can't hand-wave around the problems.
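
    A "world model updated by sensors" can be caricatured in a few lines: beliefs about the world as state, each sensor return as a Bayesian update. A hypothetical one-dimensional sketch (Grand Challenge vehicles did the same thing over 2-D/3-D grids with lidar and radar; all numbers here are illustrative):

        # Bare-bones world model: occupancy beliefs over a strip of cells,
        # updated from noisy sensor returns. Assumes P(return|occupied)=0.9
        # and P(return|free)=0.1.
        cells = [0.5] * 10  # prior: each cell 50% likely occupied

        def update(cells, hit, p_hit=0.9, p_false=0.1):
            prior = cells[hit]
            posterior = (p_hit * prior) / (p_hit * prior + p_false * (1 - prior))
            cells[hit] = posterior

        for reading in [3, 3, 7]:  # repeated returns from cells 3 and 7
            update(cells, reading)

        print([round(p, 2) for p in cells])  # cell 3 ~0.99, cell 7 0.9
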

    The classic bit of hubris in AI, by the way, is to have a good idea and then think it's generally applicable. AI has been through this too many times - the General Problem Solver, inference by theorem proving, neural nets, expert systems, neural nets again, and behavior-based AI. Each of those ideas has a ceiling which has been reached.

    It's possible to get too deep into some of these ideas. The people there are brilliant, but narrow, and the culture supports this. MIT has "Nerd Pride" buttons. As someone recruiting me for the Media Lab once said, "There are fewer distractions out here." (It was sleeting.) It sounds like that's what happened to these two young people.

    • ... I knew some of the Cyc people back in the 1980s, when they were pursuing the same idea. They're still at it. ... But after twenty years of their claiming "Strong AI, Real Soon Now", it's probably not happening.

      I don't know whether they're actively still trying to get "true AI" or just milking what they've got; but, assuming the former, some things in science take a really long [aps.org] time [nobelprize.org]. It seems pretty obvious that any intelligence requires a vast amount of knowledge to be useful and that takes a lot of t

      • Re: (Score:3, Informative)

        by bunratty ( 545641 )

        It would be nice if that knowledge and representation were open-sourced.
        It is. It's called OpenCyc [wikipedia.org].
  • by Anonymous Coward
    Both of them were self-aggrandizing self-proclaimed geniuses more interested in science fiction than science. They were in the field because of their emotional problems. AI attracts these kinds of people. Minsky himself has these qualities. The saddest thing is that they were ever taken seriously.
  • A "fact" database.. where ever did they get the idea of storing knowledge as a resource for intelligence?

    That is totally out of left field.
    I feel like a child by the ocean, dwarfed next to such massively innovative thinking.
  • by vorpal22 ( 114901 ) on Sunday January 20, 2008 @02:28PM (#22118138) Homepage Journal
    It isn't really surprising that one of them killed himself due to chronic pain. I myself suffer from it due to complications of Crohn's Disease, and after several months of this, I was pursuing euthanasia as a serious option, much to the horrible upset of the very few loved ones that I told. Note that this wasn't an emotional response to the problem, in my opinion: I had considered my options coolly and calmly and it felt like the best course of action and the most effective solution to the problem.

    Having to live your life in constant pain is worse than you can imagine if you've never had to go through it: you wake up in the morning (provided you could sleep), and you spend the entire day cranky and miserable because you feel horrid. All you do is look forward to the night, because again - if you're able to fall asleep - you'll have several hours of some respite from the pain. You rarely feel social or productive because you can't focus your attention or get over your irritability. You're wracked with guilt because you're unable to treat your loved ones with the kindness that they deserve, particularly for putting up with you. You feel alienated from everyone, because few people know what you're going through, and you frequently cannot tell them the thoughts that go through your head, as they often involve suicide or euthanasia. And psychiatric institutionalization - which is what you worry might be forced upon you - simply isn't going to help, since it won't fix the core issue and the problem isn't psychological.

    Now extend this to months or years with no end in sight and see how you feel.

    Fortunately for me, I was finally able to find a doctor who was willing to prescribe me opioid pain medication and help me get involved with a pain management clinic that teaches mindfulness-based meditation, and now I'm doing much better: I'm able to function, I'm looking for a job, I want to see my family and friends on a regular basis, I'm much more pleasant to be around, I can exercise daily, and I'm no longer interested in euthanasia. However, most pain sufferers are *not* as lucky as I am, because doctors are not willing to prescribe long-term use of opioids, due to the horrible rules and regulations surrounding these drugs that have been introduced because of their addictive nature. The difficulty in obtaining them is why some people turn to heroin; Kurt Cobain is a good example of such a person: he suffered from severe abdominal pain until he found some respite in heroin.

    If anything, people need to fight for their right to quality of life. Yes, opioid abuse can be a serious problem in society, but the people who need these drugs often do not have the strength to put up the huge fight to get them, and they must have regular access to them. Perhaps if Singh had been prescribed some relief for his problem, he might still be with us today.
    • To me, the surprising thing is that doctors routinely dismiss chronic pain as imaginary, and far too many pdocs are more interested in a fast buck (some to pay off the horrible expenses of getting qualified) and a quick DSM-IV diagnosis than in actual clinical analysis. (These days, 9 Tesla MRIs can track individual neurons firing. Combine that with data from fMRI and PET scans, and maybe add some radioactive tracers to standard medicines, and you could build up all the information you could possibly ne
  • Push ... so sad (Score:5, Interesting)

    by FlunkedFlank ( 737955 ) on Sunday January 20, 2008 @02:28PM (#22118144)
    Wow, Push was my TA in Minsky's class in '96. He was an incredibly thoughtful and brilliant soul. He had the Sisyphean task of grading several hundred long AI papers all by himself, and the papers all miraculously came back with voluminous, detailed and insightful comments. I am just learning of this now. To see that he had achieved such great heights in his career only to end it the way he did ... will we ever be able to find any meaning in this, or is it just one of those inexplicable twists of human behavior?

    This whole story reminds me of the poem Richard Cory (http://www.bartleby.com/104/45.html):

    Whenever Richard Cory went down town,
    We people on the pavement looked at him:
    He was a gentleman from sole to crown,
    Clean favored, and imperially slim.

    And he was always quietly arrayed,
    And he was always human when he talked;
    But still he fluttered pulses when he said,
    "Good-morning," and he glittered when he walked.

    And he was rich--yes, richer than a king,
    And admirably schooled in every grace:
    In fine, we thought that he was everything
    To make us wish that we were in his place.

    So on we worked, and waited for the light,
    And went without the meat, and cursed the bread;
    And Richard Cory, one calm summer night,
    Went home and put a bullet through his head.
    • Re: (Score:3, Interesting)

      by Chapter80 ( 926879 )
      http://www.azchords.com/w/wings-4764/richardcory-244477.html [azchords.com]

      He really gave to the charity, had the common touch,
      And they were thankful for his patronage... so they thank you very much.
      So my mind was filled with wonder when the evening headlines read:
      "Richard Cory went home last night and put a bullet through his head."


      But I work in his factory
      And I curse the life I'm living
      I curse my poverty
      I wish that I could be,
      I wish that I could be,
      Oh, I wish that I could be,
      Richard Cory.

  • ... I suppose the story is somewhat interesting.

    The real kicker is that artificial intelligence is really just a by-product illusion of automating information enough that the illusion presents itself.
    Even these two, as well as the Cyc team, were trying to do just that: first collecting up information, then automating its use, the gears and bearings of which are pretty simple.

    Those interested in the A.I. by-product might find this of some interest. [abstractionphysics.net]
  • by Dr. Spork ( 142693 ) on Sunday January 20, 2008 @02:55PM (#22118426)
    I think it's very odd that these two smart people thought that input from volunteers could create a better database than what could be obtained if you just uploaded a good dictionary plus the Wikipedia.

    I mean, seriously, with facts like "Britney Spears is not good at solid-state physics" or whatever, it seems like their database really is a joke, and that they have to introduce a program to cull all that information.

    Programs for parsing semantic content are quickly becoming much better. The reason why Google is not interested in the "Semantic Web" is because they think that their smart bots will be able to mine semantic information from websites, emails and books without any help from human interpreters. That seems to me like the proper start of machine intelligence. What those bots will "learn" will be the right basis for a common-sense database, not the input of some pimply teenagers writing about Britney.
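
    That kind of mining can be caricatured in a few lines: a naive "X is a Y" extractor (a toy of my own, nothing like Google's actual systems, which lean on real parsing and statistics):

        import re

        # Caricature of semantic mining: pull "X is a Y" assertions out of
        # free text with one regex. Note how naive it is: it happily keeps
        # the article in "A dog".
        text = ("A dog is a mammal. Paris is a city. "
                "Britney Spears is a singer, not a physicist.")

        pattern = re.compile(r"([A-Z][\w ]*?) is an? ([a-z]+)")
        for subject, category in pattern.findall(text):
            print((subject.strip(), "is_a", category))
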

  • Suicide and LSD (Score:5, Insightful)

    by Anonymous Coward on Sunday January 20, 2008 @03:00PM (#22118470)
    As someone who has attempted suicide, I think I might have a unique perspective on the matter. The reasons vary widely from person to person, and I won't discount the possibility that maybe sometimes it's a justifiable act, but for most people it's not the only solution. It's just usually one of the easiest ones. I can only speak about my own experiences, but after struggling with a lot of hard problems--many things that no one should ever be subjected to--something uncomplicated and easy looked increasingly like a good idea. You're getting beat up from all sides of your life and some people break, some sooner than others. I know what it's like to have something you worked so long for yanked out from under you. What are you to do after that happens? You had one thing in life that you could do and now it's gone.

    When you reach that kind of despair it's hard to find your way back to the world. How many great minds and potential contributors to science, art and human culture are lost to suicide before their potential is reached? It was certainly a waste for these two scientists to die. It's a waste, and there's almost always something that could have been done to save them. And it is in society's best interest to help these people any way we can.

    What saved me was, sometime after my attempted suicide I tried the drug LSD for the first time. I've never been the same since that day, for the better I mean. I came to understand things about the nature of consciousness, and how the soul and experiences of all things are connected on such a basic level. Up until that point I felt alone and isolated, physically and emotionally, but I saw and felt how that just is not true at all. The feelings of fear and anger and hopelessness were gone. I now use LSD about 5 or 6 times a year; all have been wonderful experiences so far. It is a crime against humanity that this drug is illegal. It should be given to anyone (in a safe environment and under supervision) who is suicidal. In fact, it should be given to anyone who wants it. It literally saved me. I would likely be dead if I had not experienced that permanent personality changing event. This drug is not addictive. It is not deadly in moderation. It is not corrosive to the fabric of civilization. It is, however, a threat to the established authorities that want us to remain numb to each other and scared. If everyone could experience it once, we could all feel that universal connection, and there would be no reason to feel alone or worthless or end your own life for so many people who think that's their only way to escape.

    I'm sorry that this got so off course (mod it as such if you will), but the topic of suicide is so important to me now, and I want people to have the same chance that I had.

    I thank Albert Hofmann for my life and my enlightenment, and for giving this gift to all humanity. Perhaps one day we will be more inclined to accept it.

    "I think that in human evolution it has never been as necessary to have this substance LSD. It is just a tool to turn us into what we are supposed to be." -Albert Hofmann
    • Re: (Score:3, Interesting)

      by Anonymous Coward
      I understand where you're coming from. I also took a psychedelic when I was feeling suicidal, and it cured my depression to the extent that I found meaning in being alive.

      The medicine I took was Ayahuasca [ayahuasca.com], from plants purchased from certain Internet sites. Western tourists travel to South America to ingest this drug in the presence of a Shaman to cure any mental illnesses or emotional problems. Partakers call ayahuasca a Medicine rather than a drug because of its beneficial healing effects.

      I wish more suic

  • I learned most of my *nix skills on a NeXTstation I bought from someone named Pushpinder Singh in 1993. If I remember right he was at MIT. So I think it was the same guy. That's... really weird.
  • Video Trace, as discussed on Slashdot earlier, is what I've been waiting for since 2002 to make AI. FOSS AI [fossai.com]
