Two AI Pioneers, Two Bizarre Suicides
BotnetZombie writes "Wired tells the quite sad but very interesting stories of Chris McKinstry and Pushpinder Singh. Initially self-educated, both had the idea of creating huge fact databases from which AI agents could feed, hoping eventually to have something that could reason at a human level or better. McKinstry leveraged the dotcom era to grow his database. Singh had the backing of MIT, where he eventually got his PhD and was offered a position as a professor alongside his mentor, Marvin Minsky. Sadly, personal life was more troublesome for both of them, and the story ends in a tragic way."
Re:They just wanted... (Score:5, Interesting)
I read this part
I mean... that's inspiring.
And then, years later, he falls apart and kills himself on the web, abandoning his dream because of a fundamental flaw: he was a geek, but he didn't have business sense.
That's about as close to Greek Tragedy as you can get.
reminds me of this one sci-fi story (Score:5, Interesting)
Anyone remember the name of that story? Or was it a book? I don't remember.. but it's pretty interesting to think about - especially if AI researchers begin to have a statistically higher probability of suicide.
Maybe this is our penicillin?
AI field barely in the "Alchemy" stage (Score:4, Interesting)
The idea that a neural network given a "large enough corpus" can come to resemble a human being might be true. But a "long enough dead end" could look like a highway. Then again, we are probably dead ends too, so it's more a matter of which one goes on for longer.
My other objection to such approaches is that if you want a nonhuman intelligence built from neural networks whose workings you don't really understand, you can always go get one from the pet store.
As it is, the biotech people probably have a better chance of making smarter AI than the computer scientists working on AI, who appear to be stuck at a primitive level. But both may still not understand why it works.
Without a leap in the science of Intelligence/Consciousness, it would then be something like the field of Alchemy in the old days.
I am not an AI researcher, but I believe things like "building a huge corpus" are wrong approaches.
It has long been my opinion that what you need is something that automatically creates models of stuff - simulations. Once you get it trying to recursively model itself (consciousness) and the observed world at the same time AND predict "what might be the best thing to do" then you might start to get somewhere.
Sure pattern recognition is important, but it's just a way for the Modeller to create a better model of the observed world. It is naturally advantageous for an entity to be able to model and predict other entities, and if the other entities are doing the same, you have a need to self model.
So my question is how do you set stuff up so that it automatically starts modelling and predicting what it observes (including self observations)?
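One way to make the "Modeller" idea concrete is a toy predict-observe-update loop. This is only a sketch of the general shape (the class name, learning rule, and numbers are all invented for illustration): the agent keeps a running model of its observations, predicts the next one, and treats prediction error as "surprise" to learn from.

```python
# Toy sketch of the "Modeller" idea: an agent that keeps a running
# model of what it observes and scores itself on prediction error.
# All names and numbers are invented for illustration.

class Modeller:
    def __init__(self, learning_rate=0.3):
        self.estimate = 0.0          # the agent's current model of the world
        self.learning_rate = learning_rate

    def predict(self):
        # The model's best guess for the next observation.
        return self.estimate

    def observe(self, value):
        # Compare prediction with reality, then nudge the model toward it.
        error = value - self.predict()
        self.estimate += self.learning_rate * error
        return abs(error)            # "surprise": how wrong the model was

agent = Modeller()
world = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]   # a world that suddenly changes
surprises = [agent.observe(v) for v in world]

# Surprise spikes when the world changes, then decays as the model adapts.
print(surprises[0] > surprises[2])   # error shrinks while the world is stable
print(surprises[3] > surprises[5])   # and shrinks again after the jump
```

Self-modelling would then mean feeding some of the agent's own state back in as part of the observation stream; the recursive part of the question is exactly what this sketch leaves open.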
I knew him back in those days (Score:5, Interesting)
He did have access to some pretty potent LSD, though. Before knowing him, I always thought LSD was pretty harmless, but with the quantities that man could ingest, I now wonder if permanent brain damage kicks in. And he loved to combine it with a little coke - or whatever other easily accessible drug was around.
Funny, the last I had heard about him was his mindpixel scam. Which made me chuckle a lot, because very few people seemed to catch on that the entire project was just the ravings of a drug-addled lunatic.
I didn't realize he finally offed himself. I say finally because everyone who knew him expected it "any day now" - since at least the early 90s. I'm rather astounded he held on so long.
Re:I'd kill myself, too... (Score:4, Interesting)
It shouldn't matter (to you) if I say something that is offensive, what matters is how you deal with it. You have choices in how you react to it. One of those choices is to ignore it and write it off as "oh, that's just some asshole on the Internet." Another is to become upset about what some anonymous asshole on the Internet who didn't know your friend has said. It is your choice.
Who am I to you? Nobody. Why should anything I say at all have any impact on you if you don't want it to?
For example, you may consider my stance of "you can only control yourself" as lame-ass, and attempt to insult me by insinuating that I live in the past, but I can choose to react negatively to that (i.e.: "waaah, my fewwings are hurted") or I can read between the lines and see that you're just angry about someone making a joke about your departed friend and not take offense -- just like I would do "in real life."
Re:McKinstry was a kook (Score:3, Interesting)
That's nonsense. You can fail to acknowledge that there are any other sentients out there to lie to you and still be intelligent and self aware. Dogs don't even understand our language, they clearly cannot tell when we are lying, yet they have intelligence. Humans raised wild are another example of the same.
Push ... so sad (Score:5, Interesting)
This whole story reminds me of the poem Richard Cory (http://www.bartleby.com/104/45.html):
Why not build a crawler bot for common sense data? (Score:3, Interesting)
I mean, seriously, with facts like "Britney Spears is not good at solid-state physics" or whatever, it seems like their database really is a joke, and that they have to introduce a program to cull all that information.
Programs for parsing semantic content are quickly becoming much better. The reason Google is not interested in the "Semantic Web" is that they think their smart bots will be able to mine semantic information from websites, emails and books without any help from human interpreters. That seems to me like the proper start of machine intelligence. What those bots will "learn" will be the right basis for a common-sense database, not the input of some pimply teenagers writing about Britney.
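For illustration, a toy version of that kind of fact-mining (emphatically not Google's actual method; the pattern and corpus here are invented) might pull subject-relation-object triples out of raw sentences instead of having volunteers type facts in by hand:

```python
import re

# Toy illustration of mining "common sense" triples from raw sentences
# with one simple pattern. A real semantic parser is far more involved;
# this only shows the shape of the idea.

PATTERN = re.compile(r"^(?P<subj>[A-Z][\w ]+?) (?P<rel>is|are|has|have) (?P<obj>.+?)\.$")

def extract_triples(sentences):
    triples = []
    for s in sentences:
        m = PATTERN.match(s.strip())
        if m:
            triples.append((m.group("subj"), m.group("rel"), m.group("obj")))
    return triples

corpus = [
    "Dogs have four legs.",
    "Paris is the capital of France.",
    "whatever, lol",   # the kind of noise a crowd-sourced DB collects
]
print(extract_triples(corpus))
# [('Dogs', 'have', 'four legs'), ('Paris', 'is', 'the capital of France')]
```

The culling the parent asks for falls out for free: anything that doesn't parse into a clean triple simply never enters the database.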
I think I had Push's old NeXT (Score:2, Interesting)
Re:McKinstry was a kook (Score:5, Interesting)
"Dogs don't even understand our language, they clearly cannot tell when we are lying"
You clearly don't have enough experience with dogs. They can tell. Eventually, they can even figure out the word "bath" if we spell it instead of saying it. They understand the difference between "do you want to go outside" and "you're not going outside", and "come get a treat" and "come get a cookie". Bear doesn't like the treats, but he likes chocolate chip cookies. He knows the difference between "treat" and "cookie". Toby clearly understands "don't go in the garbage", but he still sneaks into it when he thinks he can get away with it, and he pretends nothing's wrong up to the moment of discovery, at which point he KNOWS he's been busted, even before I say anything.
There was a cat that temporarily had a limp. It got more attention when it was limping, so if anyone was watching, it limped. As soon as it thought nobody was watching, it walked perfectly normally. Even cats know how to lie, and can do it intentionally.
Re:Chronic pain and suicide (Score:1, Interesting)
Re:Push ... so sad (Score:3, Interesting)
Re:I knew him back in those days (Score:3, Interesting)
Re:Suicide and LSD (Score:3, Interesting)
The medicine I took was Ayahuasca [ayahuasca.com], from plants purchased from certain Internet sites. Western tourists travel to South America to ingest this drug in the presence of a shaman to cure mental illnesses or emotional problems. Partakers call ayahuasca a Medicine rather than a drug because of its beneficial healing effects.
I wish more suicidal/depressed people knew about this other option for a cure. Modern western medicine tries to 'dull' depression by sedating the patient with SSRIs. Ayahuasca helps a person overcome depression by making them confront their innermost fears.
It's unfortunate that psychedelics have a bad press in the west, due to a history of abuse of the drugs. In the right context psychedelics can be a powerful tool for spiritual insight and healing of the mind.
Douglas Adams solved this one (Score:3, Interesting)
Because if a robot had feelings, it could determine its own behavior. The great DA solved this puppy long, long ago:
The scientists at the Institute thus discovered the driving force behind all change, development and innovation in life, which was this: herring sandwiches. They published a paper to this effect, which was widely criticized as being extremely stupid. They checked their figures and realized that what they had actually discovered was `boredom', or rather, the practical function of boredom. In a fever of excitement they then went on to discover other emotions, like `irritability', `depression', `reluctance', `ickiness' and so on. The next big breakthrough came when they stopped using herring sandwiches, whereupon a whole welter of new emotions became suddenly available to them for study, such as `relief', `joy', `friskiness', `appetite', `satisfaction', and most important of all, the desire for `happiness'.
This was the biggest breakthrough of all.
Vast wodges of complex computer code governing robot behaviour in all possible contingencies could be replaced very simply. All that robots needed was the capacity to be either bored or happy, and a few conditions that needed to be satisfied in order to bring those states about. They would then work the rest out for themselves.
And that's why Eddie, the shipboard computer is always happy to help the humans - that's his "happy" goal. That's why the doors sigh with pleasure when they open and say "thank you for making a simple door very happy". They're happiest when they do door stuff - let people through them, open and close efficiently, etc.
It's comedy, I know. But it's also really amazingly good thinking.
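For what it's worth, the Adams scheme sketches naturally as a drive-maximizing agent. Here's a toy version (all actions and values are invented) where the only "code" a door needs is one happiness function:

```python
# Toy take on the Adams scheme: instead of coding behaviour for every
# contingency, give the agent one drive ("happiness") and let it pick
# whichever action maximizes it.

def choose_action(actions, happiness_of):
    # The entire behaviour policy is one line: do what makes you happiest.
    return max(actions, key=happiness_of)

# A door that is happiest when it is doing door stuff.
door_actions = ["stay shut", "open for person", "jam"]
door_joy = {"stay shut": 0.2, "open for person": 1.0, "jam": -1.0}

print(choose_action(door_actions, door_joy.get))
# open for person
```

Swap in a different happiness function and the same one-line policy drives a different machine; that's the "they work the rest out for themselves" part.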
Re:They just wanted... (Score:5, Interesting)
It's a rather brute force way of gaining knowledge (well, in this case, for a computer system to gain knowledge). One may not necessarily gain more understanding of intelligence by doing this (much like one will not necessarily gain a better understanding of how to fight cancer just because one knows the DNA structure of a blood cell, for example). It is however a tool. If this "common sense" knowledge could be combined with neural networks (combining the knowledge with a mechanism to learn), then perhaps something useful may be had of this. All AI systems (as far as I know) require the input of knowledge, like typing in the quality and quantity of weapons in a war game simulator for example. The difference being that their efforts were more grandiose than these more limited forms of AI.
"Knowledge" itself is not the product of intelligence as you propose (although it can be). This knowledge already exists without human intervention. The phrase "Dogs have four legs" does not require a human brain for this fact to be true. The crux is having a computer system with this knowledge, and then developing a system to use this knowledge in an intelligent, human-like fashion.
Re:I'd pull the trigger, and sleep well at night. (Score:3, Interesting)
"So, where does self-depreciating humour fit into your system?"
Self-deprecating humour fits in very well - it's a defensive posture to aggression in others. Poke fun at yourself, and you're less likely to look harmful to other aggressive humans.
"What about parody?" Parody makes fun of the thing being parodied - also cruel. For example, "This land was your land, this land's now my land, I've got a big gun, and you ain't got one" makes fun of people who have to give in to "might makes right" bullying. Any really good parody is a cruel send-up of someone. Look at all the celebrity roasts, the Tom Cruise jokes, etc.
"What about when you see two children behaving in startling, unexpected ways and have that "They're so cute, look at what they're doing." laugh?"
Bears will cuff their young on the head when they misbehave. We laugh. If we were to cuff our young on the head, because of the inordinate weakness of the rest of the body (particularly the neck), our young wouldn't survive. We've naturally selected for that response - but too many parents still resort to physical violence at the drop of a hat to make me believe that aggression in humans isn't the normal state of affairs.
"There are lots of instances of humour that do not come from a dark, aggressive and violent place in the psyche.
The fact that you need someone to point this out makes me feel sorry for you, honestly.
Humans ARE dark, aggressive, and violent. How do you think we became the top predator, by being all sugar and sweet? That ANYONE can justify waterboarding shows just how dark, aggressive and violent we really are. There is no excuse for that.
People have developed humour as a form of self-defense (the kid who gets picked on in school, so he gets people to laugh instead), as a way to deal with loss (gallows humour), and as a way to dehumanize others (racist humour, gender-biased humour); these are all ways to direct aggression towards others or defend ourselves. It's not funny - it's serious.
Think of it - every time you tell a dumb blonde joke, what are you REALLY saying? Every time you tell a gay joke, what does it say about YOU? Every time you tell a racist joke, what message are YOU really sending?
Yeah, it's fun making people laugh; I do it all the time. But at least I'm aware of why we as humans evolved humour; it's a necessary "social lubricant" because we're by nature too aggressive for our own good. So I'll tell the jokes, and while everyone is laughing, there's a part of me that is saying "you know, it's not funny" to the jokes that get the biggest laughs.
The ideal world wouldn't have people deriving fun from each other's problems. Then again, in the ideal world, we probably wouldn't exist. So I'll keep making jokes, keep making people laugh, and keep saying "darn - I wish we were all better than that."
As I've pointed out elsewhere in this thread, if we ever come across alien intelligences (biological or mechanical) that have succeeded to the point of first contact, they'll likely be even more aggressive than us. They simply can't afford not to be. And their sense of what's funny will probably be even sharper than ours.
Re:They just wanted... (Score:3, Interesting)
When we really think about it, we don't recognize intelligence unless the system is sufficiently close to what we feel emotionally. In a functional sense, all systems that we wish to evaluate for intelligence take some "input" and produce some "output". Obviously we don't classify as "intelligent" every complex system that we don't understand: the weather is hard enough to predict, but we don't call it intelligent. There must be something that chimes with our thoughts and reasoning, something complex but subtly resonating with our own senses and emotions. I can't quite put it into words (it's almost 6am here and I haven't slept yet), but to say that an "intelligent" machine doesn't have any apparent feeling seems to me an (almost) logical impossibility.
Re:I knew him back in those days (Score:3, Interesting)
Re:reminds me of this one sci-fi story (Score:2, Interesting)
Hmm.. I was just thinking.. if we are to suspend our disbelief for a moment and consider that the premise of Asimov's story is true: The advent of true AI would be a pretty logical advance to stop - to be our "penicillin." Once AI can be >= Human Intelligence, that AI can produce a greater AI, and so on - causing the technological singularity [wikipedia.org]. That singularity could give almost instantaneous rise to all the technologies that we're not "supposed" to have..
(Sorry, I don't understand how to use the tag.)
Re:They just wanted... (Score:2, Interesting)
One day we may be forced to build a different framework of measurement, in which we do not judge the state of a student's learning but simply judge the quality of the student by how much he can absorb in ever shorter time periods. Disallowing extra study, and requiring music or athletics to fill most of every day, might build a better academic community.
Re:They just wanted... (Score:3, Interesting)
I disagree. Both are the real world because they affect each other. In a sense the world of abstractions, symbols, logic and ideas affects what you can see, hear, taste, etc and experience directly. Or better yet gives you control of what you experience... Like reading music notation (symbols) and pushing piano keys (logic) to get the sounds.
As much as I am pro-meditation, I really doubt it would have solved their situations. However much I meditate, it never brings about a direct change in the world itself, only in how I see it. And even for a ten-year veteran of meditation, chronic pain is not something one can will away with ease.
Meditation is good for compulsions and understanding what you do. But if you are clinically depressed, most instructors will still tell you to get professional help.
Re:They just wanted... (Score:3, Interesting)
I see no logical connection between building a mega-database of basic facts and creating AI. Access to information is neither a prerequisite for intelligence nor a source of it. You may succeed in creating something complex and convoluted enough to make someone think, for a minute, that they are dealing with intelligence.
The unfortunate reality of AI research is that we don't understand where we are going. Instead we concentrate on how to get there. Hey, what if we build a really big neural net, or an exact functional electronic copy of the human brain, or a huge database of everyday information - maybe then we will somehow stumble upon artificial intelligence. Not exactly a scientific approach.
Re:They just wanted... (Score:3, Interesting)
Intelligence is NOT a means of turning knowledge into information. Intelligence is the ability to learn (to put it simply; there are in fact different forms of intelligence, see http://en.wikipedia.org/wiki/Intelligence [wikipedia.org]). One cannot learn without knowledge. If you take a child and deprive them of information, they will grow up with severe learning disabilities (as in the extreme deprivation cases, i.e. the child abuse cases where children were locked in a closet for most of their existence). Children need knowledge to utilize their intelligence (hence we have schools to fill their brains with knowledge). You can't have one without the other. They are complementary.
At the crux of the argument is how one decides to DEFINE intelligence. It seems unreasonable to presuppose that a computer system can be imbued with human intelligence. It is rather a matter of approximating that intelligence as closely as possible with that of the human experience.
Re:They just wanted... (Score:3, Interesting)