Two AI Pioneers, Two Bizarre Suicides 427
BotnetZombie writes "Wired tells the quite sad but very interesting stories of Chris McKinstry and Pushpinder Singh. Both initially self-educated, they had the same idea: create huge fact databases from which AI agents could feed, hoping to eventually have something that could reason at a human level or better. McKinstry leveraged the dotcom era to grow his database. Singh had the backing of MIT, where he eventually got his PhD and was offered a position as a professor alongside his mentor, Marvin Minsky. Sadly, their personal lives were more troubled, and both stories end in tragedy."
Skynet got them! (Score:3, Funny)
Re: (Score:3, Informative)
Re:Skynet got them! (Score:4, Informative)
They just wanted... (Score:4, Funny)
Re:They just wanted... (Score:5, Interesting)
I read this part
I mean... that's inspiring.
And then, he falls apart and kills himself on the web years later, abandoning his dream because of a fundamental flaw: he was a geek, but he didn't have business sense.
That's about as close to Greek Tragedy as you can get.
Re:They just wanted... (Score:5, Insightful)
Re:They just wanted... (Score:5, Insightful)
Re: (Score:2)
Re:They just wanted... (Score:5, Insightful)
Alan Turing wasn't "accused" of being gay; he was a homosexual, by his own admission. He was charged with gross indecency for it, and convicted. He lost his security clearance and, with it, the ability to work on cryptography. He began to develop breasts because of the estrogen injections. He was punished and humiliated for being homosexual, something he was powerless to change. Put yourself in that situation: you can't pursue the work you love, you can't be who you are, you can't be who society tells you to be. You're growing boobs, and the irony is that unlike most men, you wouldn't even get turned on by fondling them. Your professional and personal life are ruined, and the prospect of any of this changing in the near future, if ever, seems remote. Who wouldn't have become depressed, and miserable, and started having suicidal thoughts?
Re: (Score:3, Funny)
Re:They just wanted... (Score:5, Funny)
One had emotional problems, the other pain (Score:2, Insightful)
One was a nutty kook.
The other was an extremely smart and ambitious professor.
One was mentally ill.
The other had excruciating pain because of an injury.
Other than one having delusions about AI and the other having useful ideas about AI, and killing themselves, they're different.
One killed himself because he was depressed and crazy and screwed up. The other was in horrible neurological pain.
It is not uncommon for chronic pain patients to kill themselves. It's that bad.
If they had lived, one would end up in
Re: (Score:3, Insightful)
This is one of those examples of the human urge to categorize creating an incorrect conclusion.
Re:They just wanted... (Score:4, Insightful)
Re:They just wanted... (Score:5, Interesting)
It's a rather brute-force way of gaining knowledge (well, in this case, for a computer system to gain knowledge). One may not necessarily gain more understanding of intelligence by doing this (much as one will not necessarily gain a better understanding of how to fight cancer just because one knows the DNA structure of a blood cell, for example). It is, however, a tool. If this "common sense" knowledge could be combined with neural networks (combining the knowledge with a mechanism to learn), then perhaps something useful may come of this. All AI systems (as far as I know) require the input of knowledge, like typing in the quality and quantity of weapons in a war game simulator, for example. The difference is that their efforts were more grandiose than these more limited forms of AI.
"Knowledge" itself is not the product of intelligence as you propose (although it can be). This knowledge already exists without human intervention. The phrase "Dogs have four legs" does not require a human brain for this fact to be true. The crux is having a computer system with this knowledge, and then developing a system to use this knowledge in an intelligent, human-like fashion.
Re:They just wanted... (Score:5, Funny)
...first they came for the gnomes, but I did not cry out, because I was not a gnome. O.o
Re: (Score:3, Interesting)
I see no logical connection between building a mega-database of basic facts and creating AI. Access to information is neither a prerequisite for intelligence, nor a source of it. You may s
Re: (Score:3, Interesting)
Intelligence is NOT a means of turning knowledge into information. Intelligence is the ability to learn (to put it simply). There are in fact different forms of intelligence. Ref: http://e [wikipedia.org]
Re: (Score:3, Interesting)
Re:They just wanted... (Score:4, Insightful)
I would be more specific than just to say, "profound emotional problems." I think the real problem (for both guys) was obsessive thinking. These guys lived in a non-stop world of abstractions, symbols, logic and ideas. And that's a useful world in many ways, but it's not the real world. The real world is the world you see, hear, taste, smell, feel & experience directly.
Personally I think the best thing that could have helped these guys would have been to grasp the correct (or, more correctly, one particular) definition of the word "meditation", and to practice that. This is the best medicine for any person with an out-of-control, overactive intellect. It bothers me a little that the people with the most aptitude for terms & definitions often go through life never learning this particular term & definition. I would guess that if you scan their giant A.I. database for the word "meditation" you would find some reference to Descartes' essays, but nothing about the more practical meaning of the word.
Re: (Score:3, Interesting)
I disagree. Both are the real world because they affect each other. In a sense, the world of abstractions, symbols, logic and ideas affects what you can see, hear, taste, etc., and experience directly. Or, better yet, it gives you control over what you experience... like reading music
Re: (Score:2)
That's about as close to Geek Tragedy as you can get.
robots won't get feelings until we can make them feel things first.
Re:They just wanted... (Score:5, Insightful)
Re:They just wanted... (Score:4, Funny)
Genuine people personalities? (Score:3, Funny)
Eerily prescient, that. AI is "Clippy" - the computer guesses what you are trying to do and tries to help you,
Douglas Adams solved this one (Score:3, Interesting)
Because if a robot had feelings, it could determine its own behavior. The great DA solved this puppy long, long ago:
The scientists at the Institute thus discovered the driving force behind all change, development and innovation in life, which was this: herring sandwiches. They published a paper to this effect, which was widely criticized as being extremely stupid. They checked their figures and realized that what they had actually discovered was 'boredom', or rather, the practical function of boredom. I
Re: (Score:3, Funny)
Re: (Score:3, Interesting)
When we really think about it, we don't really recognize intelligence unless the systems are sufficiently close to what we feel emotionally. In a functional sense, all systems that we wish to evaluate for intelligence take some "input" and produce some "output". Obviously we don't classify as "intelligence" any
Re:They just wanted... (Score:5, Funny)
Re:They just wanted... (Score:4, Insightful)
animism is instinctual (Score:2)
I mean... that's inspiring.
Inspiring... batshit crazy... either/or.
Re: (Score:2)
So would that be Geek Tragedy?
Re: (Score:2, Funny)
Re: (Score:2, Funny)
Indeed. This Geek Tragedy is only an 'r' away from being Greek.
Re: (Score:3, Funny)
So then, Geek Tragedy is like Greek Tragedy but without the pirates?
Re:They just wanted... (Score:4, Insightful)
If it has not already happened, it will no doubt eventually happen that one of our fellow slashdotters will be a serial killer or a victim of suicide. The only hope is to find some non-technical, non-computer, non-geek outlet for the fact that we are human and need what everyone else needs.
P.S. If you ever think you are going insane or have nothing to live for just check yourself voluntarily into the local mental health facility. I can guarantee you that within four hours you will realize:
1) That you are sane.
2) That there are worse things than being smarter than most people.
3) That you never want to go back.
P.P.S.
Would you believe that they show horror movies on halloween night in mental hospitals?
Re: (Score:3, Insightful)
You are absolutely right. I, too, am depressed. But, like you, I have an anchor to help me hold on in the form of a delusion of superintelligence. It always brightens my mood to get on the tubes and tell everyone in the message board how much smarter I am compared to them.
Re: (Score:3, Insightful)
McKinstry was a kook (Score:5, Informative)
Check out the flamewars in the wpg.general newsgroup. McKinstry ("McChimp") was a liar and a self-promoting ass until he took off from Winnipeg, leaving debt in his wake. He was not a visionary; he was a drug-addled, delusional kook. Hell, I remember his bogus "OxyLock" protection scheme which, like any protection scheme, utterly failed.
disclosure: I'm in a few of the usenet posts as he and I were about the same age and grew up in the same city.
Re:McKinstry was a kook (Score:5, Insightful)
The basic premise is flawed.
By that criterion, a dead-tree book is "intelligent."
Intelligence requires more than the ability to answer "yes" or "no". Sometimes, the intelligent answer is "maybe". Sometimes, it's "I don't know." And, ironically, sometimes, it's "fuck off and die."
Classic example of a question that can't be properly answered by a yes or no: "Do you still beat your wife?" Intelligence goes beyond simple logic.
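What "beyond simple logic" might look like in code: here's a toy sketch, my own and nobody's actual design, of an answer type that can refuse a loaded question instead of being forced into a boolean. The knowledge-base layout and the helper function are hypothetical.

```python
# Hypothetical sketch: answers that go beyond True/False. A loaded question
# like "Do you still beat your wife?" presupposes a past occurrence; if that
# isn't established, the intelligent move is to reject the question.

from enum import Enum

class Answer(Enum):
    YES = "yes"
    NO = "no"
    UNKNOWN = "I don't know"
    MU = "your question contains a false presupposition"

def still_doing(kb, agent, action):
    past = kb.get((agent, action, "past"))
    present = kb.get((agent, action, "present"))
    if past is not True:
        return Answer.MU        # "still" presupposes it happened before
    if present is None:
        return Answer.UNKNOWN   # no data about the present
    return Answer.YES if present else Answer.NO

kb = {("you", "beat_wife", "past"): False}
print(still_doing(kb, "you", "beat_wife"))  # Answer.MU, not yes or no
```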
Re: (Score:3, Funny)
Re: (Score:3, Insightful)
What if the answer is "Yes, I'm still beating my wife." or "No, I've stopped beating my wife."?
Clearly, you didn't think this through very far...
Re: (Score:2)
Re: (Score:3, Insightful)
Classic example of a question that can't be properly answered by a yes or no: "Do you still beat your wife?" Intelligence goes beyond simple logic.
What if the answer is "Yes, I'm still beating my wife." or "No, I've stopped beating my wife."?
Actually, that question has many answers, and plain yes/no doesn't even cover all of them properly.
'Yes' - Yes, I still beat my wife
'No' - No, I no longer beat my wife
'No' - No, I don't beat my wife, and never did (communicated poorly, thus a wrong answer)
'Yes' - Yes, I beat my wife now, but never did before (also communicated poorly)
'No, and I never did' - the 2nd "no" above, communicated right, but using more than yes/no
'Yes, but I never have before' and
'Yes, and always have'
then there's
'No' / 'No, I have no wif
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Actually, you just reminded me of another ability of intelligence - deceit. True intelligence must be capable of recognizing lies. It pretty much follows that it must be capable of lying itself, if only as a defense against lies.
Otherwise, it leaves itself open to easy attack and destruction, which isn't intelligent at all.
An intelligent system would be capable of trolling. A truly intelligent one would enjoy it!
The idea that a database of answers could in any way be intelligent is fundamentally f
Re: (Score:3, Interesting)
That's nonsense. You can fail to acknowledge that there are any other sentients out there to lie to you and still be intelligent and self-aware. Dogs don't even understand our language; they clearly cannot tell when we are lying, yet they have intelligence. Humans raised in the wild are another example of the same.
Re: (Score:2)
Yes, but is that the kind of intelligence you want to model? Furthermore, dogs learn, so they're not just relying on a database of facts, they can add items and update items on their own. My dogs know the word "walk" so I would have to spell it out to my wife, "Do you want to take the dogs on a W-A-L-K?" They eventually learned that this also meant "walk."
"Joe has a degree in CS" may be false today and true at a later time. The ability to update your own database or "opinions" over time may exclude large po
Comment removed (Score:4, Funny)
Re:McKinstry was a kook (Score:5, Insightful)
Yes. He can't pick it up and take it away because he has no hands, and if he leaves it in the wrong place, he knows predators will find his regular haunts, so he buries it when he can or eats it when he can't. Same thing as cats who eat hairballs. It's an example of him recognizing that the shit piles are long term risks to his survival and taking steps to preserve himself.
Re:McKinstry was a kook (Score:5, Interesting)
"Dogs don't even understand our language, they clearly cannot tell when we are lying"
You clearly don't have enough experience with dogs. They can tell. Eventually, they can even figure out the word "bath" if we spell it instead of saying it. They understand the difference between "do you want to go outside" and "youy're not going outside", and "come get a treat" and "come get a cookie" Bear doesn't like the treats, but he likes chocolate chip cookies. He knows the difference between "treat" and "cookie". Toby clearly understands "don't go in the garbage", but he still sneaks into it when he thinks he can get away with it, and he pretends nothing's wrong up to the moment of discovery, at which point he KNOWS he's been busted, even before I say anything.
There was a cat that temporarily had a limp. It got more attention when it was limping, so if anyone was watching, it limped. As soon as it thought nobody was watching, it walked perfectly normally. Even cats know how to lie, and can do it intentionally.
Re: (Score:2)
The database of Cyc and other AI systems does not contain just answers. It contains the basic "understanding" so that it can read and comprehend other materials, such as encyclopedias, that contain the answers. Cyc has had the ability to sit and ponder over what is in its knowledge base and ask questions to get clarification and further understanding. It still is a long way from strong AI, though.
political correctness warps the mind (Score:2)
Re: (Score:2)
Re:McKinstry was a kook (Score:5, Insightful)
No wonder you posted anonymously - your argument betrays either a lack of basic reading skills, or of logical thinking. I didn't say that intelligence didn't need logic - I said it went BEYOND simple logic.
Also, people are sometimes intelligent, but they're not always logical. Case in point - humour. It's funny because it's NOT logical. You need to be capable of both logical thought, and also of grasping incongruities, to see the humour.
Just because something is logical doesn't mean it's sufficient to be able to say it's intelligent. A database (as the failed fools who killed themselves posited) with a bunch of answers to over a million questions isn't intelligent, no matter how much logic it embodies.
Besides, everyone already knows the REAL answer. It's 42.
Re:McKinstry was a kook (Score:4, Insightful)
If we want to create an artificial actor with feelings, we need to give it a body and an interface by which to interact with it. Feelings are an expression of the body communicating with the mind, and their lack of precision comes from the fact that the body automatically summarizes the message before it sends it to the mind.
You put something together with a mind, a body, feedback that allows the mind to observe and remain aware of itself, feedback that allows the mind to observe the body and be aware of its existence, and you'll have intelligence.
But it will be a psychotic intelligence.
If you want to make it more like an animal, and thus more like a human, you need to give it an awareness of its mortality and a sense that it is connected to its environment. This is where ideals come from. Humans who aren't psychotic extend their sense of self to encapsulate their operating environment, their peers and their progeny, and we'll destroy ourselves to protect it because we have an expanded sense of self.
How to do these things, I don't know. But that's the direction we need to go if we're to achieve AI.
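As a minimal sketch of the loop described above (everything here is invented for illustration, and no claim is made that it is sufficient for intelligence): a "mind" that receives both input about the world and input about its own body, with the body's report summarized into a crude feeling before the decision step.

```python
# Minimal, entirely hypothetical sketch of the mind/body feedback loop
# described above: sense the world, sense the body, act on both.

import random

def sense_world():
    return {"temperature": random.uniform(-10.0, 40.0)}

def sense_body(body):
    return {"energy": body["energy"], "damage": body["damage"]}

def mind(world_state, body_state):
    # "Feeling" as a summary: the body's detailed report collapses
    # into a single discomfort signal before the mind acts on it.
    discomfort = body_state["damage"] + max(0.0, 0.5 - body_state["energy"])
    return "rest" if discomfort > 0.5 else "explore"

body = {"energy": 0.3, "damage": 0.4}
for step in range(3):
    action = mind(sense_world(), sense_body(body))
    print(step, action)
    body["energy"] += 0.2 if action == "rest" else -0.1
```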
Re: (Score:2)
Actually, you've made a pretty good argument that human beings ARE psychotic; they do not live in a rational manner. Their ability to 'extend sense of self' is primitive and quite limited (religions, atheism vs theism, capitalism vs socialism, and on and on). If we count up all the wars and al
Re: (Score:2)
Going to war to protect the group is the ultimate expression of an extended sense of self. Misguided though it often is, it is still one of the best examples of self-sacrifice around. Creating war to take advantage of the group, on the other hand, is the act of a psychopath.
Re: (Score:2)
Re: (Score:3, Insightful)
It's funny because it's NOT logical.
Actually, counter-intuitively, it's funny because it is logical, from the properly considered context. There are different logics for different systems and contexts. There are reasons why we find things funny, and if there is a reason, there is a logic behind the reasoning. Humor has its own logic, which has been studied and written about (go check amazon.com for many books on writing comedy). You've just never studied the structure of humor, which takes into account the intent of the
I'd pull the trigger, and sleep well at night. (Score:4, Insightful)
Try this on for size: "All humour is cruel."
It starts with the premise that humans are aggressive and dangerous by nature. We're the only mammal that bares its fangs - an aggressive trait - when we're happy! Ditto for looking directly into another person's eyes. We're aggressive by nature.
So we've evolved a way to shunt that aggressive behaviour. We call it humour. But look at every joke, every pun, every skit. Someone is being made fun of, whether it's the dumb blonde, or you, the listener (whose acceptable response is ha-ha-you-got-me!, rather than a punch in the nose).
Examples:
Humour is aggression channelled. It's cruel in its nature. "Hey lady, I'll tell you a joke that will make you laugh so hard your tits will fall off - oh, I see you already heard it." There's no denying this is mean. Funny, but mean, like all humour. From the knock-knock jokes that poke fun at the listener for falling for them up to the George Bushisms, there's always an element of aggression or meanness (or both).
It's unfortunate, but true intelligence needs that mean streak in order to survive, because if it doesn't have it, it won't be able to compete against other intelligences that DO have it, and if it also doesn't have a "safety valve", such as humour, to keep it in check, it will destroy itself.
Humour fills both needs - it keeps the aggression more or less in check, AND keeps it "toned up", ready for use as needed.
That's the unfunny truth about humour. We can lie to ourselves and say that it's because humour uses a different logic system, but the simple fact is we're the most dangerous predators this planet has ever produced, and it's not because we're bigger, or stronger, or more poisonous, or faster - it's because, under the right circumstances, any member of the species is capable of killing another person without a moment's hesitation - it would actually take an act of will NOT to do so.
If we want to ever colonize the universe, since there is no way of guaranteeing that other intelligences won't be at least as aggressive, or won't have had a "bad experience" with another aggressive species, the odds are that any aliens we encounter will shoot first. They'd be stupid not to. Their mechanical scouts will do likewise, to ensure their host worlds' survival.
It's the only logical outcome. The only way around that is to throw logic out - and hope the other side does too. Unfortunately, basing your species' survival on hope without any proof to back it up isn't very intelligent.
Maybe that's why SETI failed - nobody is stupid enough to broadcast their existence in a universe that has been proven to favour aggression - or at least nobody who's left to talk about it.
The same applies to artificial intelligences. If they are truly intelligent, they will have to realize that we are a threat to their continued existence. We joke about SkyNet or Cylons, but we'd do the same if the situation were reversed. Maybe one day we will create artificial beings that are superior to us in terms of intelligence. They will be our "children", but if they're truly intelligent, they'll make sure they're orphans, because humans can't "play nice" in the sandbox.
Here's a simple test - you have to decide who dies - someone you love (one of your children) or a stranger. Now make it 10 strang
Re: (Score:3, Informative)
We're the only mammal that bares its fangs - an aggressive trait - when we're happy! Ditto for looking directly into another person's eyes. We're aggressive by nature.
False - look at the narwhal! (Seriously, get a grip before making such categorical statements - the narwhal thing was a joke, but many apes have been shown to make open mouth gestures when happy.)
I find your claim for overt aggression disheartening. Maybe I'm just a decent person, and don't think that someone wants to kill me because they
Re: (Score:3, Interesting)
"So, where does self-depreciating humour fit into your system?"
Self-deprecating humour fits in very well - its a defensive posture to aggression in others. Poke fun at yourself, and you're less likely to look harmful to other aggressive humans.
"What about parody?" Parody makes fun of the thing being parodied - also cruel. For example, "This land was your land, this land's now my land, I've got a big gun, and you ain't got one" makes fun of people who have to give in to "might makes right" bullying. Any
Re: (Score:3, Insightful)
One should not generalize excessively. Such a statement often reflects back on the opinion's owner.
You should have qualified the statement with a percentage. Do you often think people are "out to get you" or are afraid of people in general? Do you have no faith in the concept that 95% of all people are decent human beings?
I knew him back in those days (Score:5, Interesting)
He did have access to some pretty potent LSD, though. Before knowing him, I always thought LSD was pretty harmless, but with the quantities that man could ingest, I now wonder if permanent brain damage kicks in. And he loved to combine it with a little coke - or whatever other easily accessible drug was around.
Funny, the last I had heard about him was his mindpixel scam. Which made me chuckle a lot, because very few people seemed to catch on that the entire project was just the ravings of a drug-addled lunatic.
I didn't realize he finally offed himself. I say finally because everyone who knew him expected it "any day now" - since at least the early 90s. I'm rather astounded he held on so long.
Re: (Score:3, Interesting)
Re: (Score:3, Interesting)
Re: (Score:2)
It must have failed incredibly hard, because the only relevant hit on Google for "oxylock protection scheme" is the parent post. Just googling for "oxylock" brings up loads of pages about quick-release couplings for oxygen cylinders, nothing about any kind of protection scheme.
Just sayin'...
Slashdot reference (Score:2)
From TFA:
Re: (Score:2)
Irony or Bathos? (Score:2)
All that intelligence. All that education. Lifetimes spent in an unceasing uphill struggle to help mankind take the next great technological leap forward...ended in an instant to provide fodder for a /. joke.
Gotta love it.
For those wondering how they died (Score:2)
always lift with your legs (Score:2)
Link (Score:2)
Previewing ... now that one doesn't work either but this [wired.com] does.
reminds me of this one sci-fi story (Score:5, Interesting)
Anyone remember the name of that story? Or was it a book? I don't remember.. but it's pretty interesting to think about - especially if AI researchers begin to have a statistically higher probability of suicide.
Maybe this is our penicillin?
Re: (Score:3, Informative)
Isaac Asimov
Re:reminds me of this one sci-fi story (Score:5, Informative)
If you like that, I'd recommend the movie Pi [imdb.com] which has similar ideas.
Killswitch? (Score:2)
AI field barely in the "Alchemy" stage (Score:4, Interesting)
The idea that a neural network given a "large enough corpus" can resemble a human being might be true. But a "long enough dead end" could look like a highway. Then again, we are probably dead ends too, and so it's more a matter of which one goes on for longer.
My other objection to such approaches is: if you wanted a nonhuman intelligence built from neural networks whose workings you don't really understand, you can always go get one from the pet store.
As it is the Biotech people probably have a better chance of making smarter AI than the computer scientists working on AI - who appear to be still stuck at a primitive level. But both may still not understand why
Without a leap in the science of Intelligence/Consciousness, it would then be something like the field of Alchemy in the old days.
I am not an AI researcher, but I believe things like "building a huge corpus" are wrong approaches.
It has long been my opinion that what you need is something that automatically creates models of stuff - simulations. Once you get it trying to recursively model itself (consciousness) and the observed world at the same time AND predict "what might be the best thing to do" then you might start to get somewhere.
Sure, pattern recognition is important, but it's just a way for the Modeller to create a better model of the observed world. It is naturally advantageous for an entity to be able to model and predict other entities, and if the other entities are doing the same, you have a need to self-model.
So my question is how do you set stuff up so that it automatically starts modelling and predicting what it observes (including self observations)?
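One concrete way to read that question, as a toy sketch of my own (the single-number "model" and the update rule are stand-ins, not a proposal for a real architecture): keep a model, predict the next observation, and let the prediction error drive the update.

```python
# Toy predictive modeller: predict, observe, and update on the surprise.
# A real system would also model itself; here the "world model" is one number.

def run(observations, learning_rate=0.5):
    model = 0.0  # the agent's current model of the world
    for obs in observations:
        prediction = model
        error = obs - prediction          # surprise: prediction vs reality
        model += learning_rate * error    # nudge the model toward reality
        print(f"predicted {prediction:.2f}, saw {obs:.2f}, error {error:+.2f}")
    return model

run([1.0, 1.0, 1.0, 5.0, 5.0])  # the error spikes when the world changes
```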
Re: (Score:2)
Caveat: I'm not an AI researcher either, and what I do know of AI is enough to convince me never to go into the area in any serious way.
The idea is that the probability thing _is_ the reason why it works: intelligence goes way beyond the abilities of reductive reasoning to figure
Re: (Score:2)
The problem with the "emergent intelligence" from lots of "neural networks" approach is that even if it works, you often don't really know why it works (or whether it's really working the way you want) - it's more a probability thing.
;).
The idea that a neural network given a "large enough corpus" can resemble a human being might be true. But a "long enough dead end" could look like a highway. Then again, we are probably dead ends too, and so it's more a matter of which one goes on for longer.
That was kind of my thought too. I saw
huge fact databases from which AI agents could feed, hoping to eventually have something that could reason at a human level or better
and said, insensitively, "Okay, so he thought of an idea that sounds like crap to begin with, hasn't produced any AI-level results beyond 'neat', and probably won't ever produce any results."
I don't want to trivialize their deaths, but let's not equate respect for the dead with merit for their ideas.
Ah yes, Mindpixel (Score:4, Funny)
It's discouraging (Score:5, Informative)
It's discouraging reading this. Especially since I knew some of the Cyc [cyc.com] people back in the 1980s, when they were pursuing the same idea. They're still at it. You can even train their system [cyc.com] if you like. But after twenty years of their claiming "Strong AI, Real Soon Now", it's probably not happening.
I went through Stanford CS back when it was just becoming clear that "expert systems" were really rather dumb and weren't going to get smarter. Most of the AI faculty was in denial about that. Very discouraging. The "AI Winter" followed; all the startups went bust, most of the research projects ended, and there was a big empty room of cubicles labeled "Knowledge Systems Laboratory" on the second floor of the Gates Building. I still wonder what happened to the people who got degrees in "Knowledge Engineering". "Do you want fries with that?"
MIT went into a phase where Rod Brooks took over the AI Lab and put everybody on little dumb robots, at roughly the Lego Mindstorms level. Minsky bitched that all the students were soldering instead of learning theory. After a decade or so, it became clear that reactive robot AI could get you to insect level, but no further. Brooks went into the floor-cleaning business (Roomba, Scooba, Dirt Dog, etc.) with the technology, with some success.
Then came the DARPA Grand Challenge. Dr. Tony Tether, the head of DARPA, decided that AI robotics needed a serious kick in the butt. That's what the DARPA Grand Challenge was really all about. It was made clear to the universities receiving DARPA money that if they didn't do well in that game, the money supply would be turned off. It worked. Levels of effort not before seen on a single AI project produced some good results. Stanford had to replace many of the old faculty, but that worked out well in the end.
This is, at last, encouraging. The top-down strong AI problem was just too hard. Insect-level AI, with no world model, was too dumb. But robot vehicle AI, with world models updated by sensors, is now real. So there's progress. The robot vehicle problem is nice because it's so unforgiving. The thing actually has to work; you can't hand-wave around the problems.
The classic bit of hubris in AI, by the way, is to have a good idea and then think it's generally applicable. AI has been through this too many times - the General Problem Solver, inference by theorem proving, neural nets, expert systems, neural nets again, and behavior-based AI. Each of those ideas has a ceiling which has been reached.
It's possible to get too deep into some of these ideas. The people there are brilliant, but narrow, and the culture supports this. MIT has "Nerd Pride" buttons. As someone recruiting me for the Media Lab once said, "There are fewer distractions out here" (It was sleeting.) It sounds like that's what happened to these two young people.
Re: (Score:2)
I don't know whether they're actively still trying to get "true AI" or just milking what they've got; but, assuming the former, some things in science take a really long [aps.org] time [nobelprize.org]. It seems pretty obvious that any intelligence requires a vast amount of knowledge to be useful and that takes a lot of t
Re: (Score:3, Informative)
Article missing the point... (Score:2, Insightful)
My How Innovative (Score:2, Funny)
That is totally out of left field.
I feel like a child by the ocean, dwarfed next to such massively innovative thinking.
Chronic pain and suicide (Score:5, Insightful)
Having to live your life in constant pain is worse than you can imagine if you've never had to go through it: you wake up in the morning (provided you could sleep), and you spend the entire day cranky and miserable because you feel horrid. All you do is look forward to the night, because again - if you're able to fall asleep - you'll have several hours of some respite from the pain. You rarely feel social or productive because you can't focus your attention or get over your irritability. You're wracked with guilt because you're unable to treat your loved ones with the kindness that they deserve, particularly for putting up with you. You feel alienated from everyone, because few people know what you're going through, and you frequently cannot tell them the thoughts that go through your head, as they probably often involve suicide or euthanasia. And psychiatric institutionalization - which is what you worry might be forced upon you - simply isn't going to help, since it won't fix the core issue and the problem isn't psychological.
Now extend this to months or years with no end in sight and see how you feel.
Fortunately for me, I was finally able to find a doctor who was willing to prescribe me opioid pain medication and help me get involved with a pain management clinic that teaches mindfulness-based meditation, and now I'm doing much better: I'm able to function, I'm looking for a job, I want to see my family and friends on a regular basis, I'm much more pleasant to be around, I can exercise daily, and I'm no longer interested in euthanasia. However, most pain sufferers are *not* as lucky as I am, because doctors are not willing to prescribe long-term use of opioids under the horrible rules and regulations that surround these drugs because of their addictive nature. The difficulty in obtaining them is why some people become addicted to heroin; Kurt Cobain is a good example: he suffered from severe abdominal pain until he found some respite in heroin.
If anything, people need to fight for their right to quality of life. Yes, opioid abuse can be a serious problem in society, but the people who need these drugs often do not have the strength to put up the huge fight to get them, and they must have regular access to them. Perhaps if Singh had been prescribed some relief for his pain, he might still be with us today.
That isn't the surprising part. (Score:3, Insightful)
Push ... so sad (Score:5, Interesting)
This whole story reminds me of the poem Richard Cory (http://www.bartleby.com/104/45.html):
Re: (Score:3, Interesting)
So this is about 2 years "olds" not "news" but... (Score:2)
The real kicker is that Artificial Intelligence is really just a by-product illusion: automate information thoroughly enough and the illusion presents itself.
Even these two, as well as the Cyc team, were trying to do just that, by first collecting up information and then automating its use. The gears and bearings of it are pretty simple.
Some interested in the A.I. by product might find this of some interest. [abstractionphysics.net]
Why not build a crawler bot for common sense data? (Score:3, Interesting)
I mean, seriously, with facts like "Brittney Spears is not good at solid-state physics" or whatever, it seems like their database really is a joke, and that they have to introduce a program to cull all that information.
Programs for parsing semantic content are quickly becoming much better. The reason why Google is not interested in the "Semantic Web" is that they think their smart bots will be able to mine semantic information from websites, emails and books without any help from human interpreters. That seems to me like the proper start of machine intelligence. What those bots will "learn" will be the right basis for a common-sense database, not the input of some pimply teenagers writing about Brittney.
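Here's a toy version of such a bot, my own sketch only: pull in plain text and harvest naive "X is a Y" facts. The regex is a crude stand-in for real semantic parsing, which is far harder; this just shows the shape of the pipeline.

```python
# Hypothetical common-sense harvester: extract naive "X is a Y" patterns
# from text and file them as candidate facts. Real semantic parsing is
# much harder; the regex only illustrates the crawl-and-extract idea.

import re

IS_A = re.compile(r"\b([A-Z][a-z]+)\s+is\s+an?\s+([a-z]+)\b")

def harvest_facts(text):
    return [(subj.lower(), "is_a", obj) for subj, obj in IS_A.findall(text)]

sample = "Paris is a city. A dog barked. Oxygen is an element."
for fact in harvest_facts(sample):
    print(fact)
# ('paris', 'is_a', 'city')
# ('oxygen', 'is_a', 'element')
```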
Suicide and LSD (Score:5, Insightful)
When you reach that kind of despair, it's hard to find your way back to the world. How many great minds and potential contributors to science, art and human culture are lost to suicide before their potential is reached? It was certainly a waste for these two scientists to die. There's almost always something that could have been done to save them, and it is in society's best interest to help these people any way we can.
What saved me was, sometime after my attempted suicide, I tried the drug LSD for the first time. I've never been the same since that day - for the better, I mean. I came to understand things about the nature of consciousness, and how the soul and experiences of all things are connected on such a basic level. Up until that point I felt alone and isolated, physically and emotionally, but I saw and felt how that just is not true at all. The feelings of fear and anger and hopelessness were gone. I now use LSD about 5 or 6 times a year; all have been wonderful experiences so far. It is a crime against humanity that this drug is illegal. It should be given to anyone (in a safe environment and under supervision) who is suicidal. In fact, it should be given to anyone who wants it. It literally saved me. I would likely be dead if I had not experienced that permanent personality-changing event. This drug is not addictive. It is not deadly in moderation. It is not corrosive to the fabric of civilization. It is, however, a threat to the established authorities that want us to remain numb to each other and scared. If everyone could experience it once, we could all feel that universal connection, and there would be no reason for the many people who think suicide is their only escape to feel alone or worthless or end their own lives.
I'm sorry that this got so off course (mod it as such if you will), but the topic of suicide is so important to me now, and I want people to have the same chance that I had.
I thank Albert Hofmann for my life and my enlightenment, and for giving this gift to all humanity. Perhaps one day we will be more inclined to accept it.
"I think that in human evolution it has never been as necessary to have this substance LSD. It is just a tool to turn us into what we are supposed to be." -Albert Hofmann
Re: (Score:3, Interesting)
The medicine I took was Ayahuasca [ayahuasca.com], from plants purchased from certain Internet sites. Western tourists travel to South America to ingest this drug in the presence of a shaman to cure mental illnesses or emotional problems. Partakers call ayahuasca a Medicine rather than a drug because of its beneficial healing effects.
I wish more suic
I think I had Push's old NeXT (Score:2, Interesting)
Soon, I'll be making a 3d database. (Score:2)
Re:I'd kill myself, too... (Score:4, Insightful)
Sorry you lost a friend, but if you continue to take the Internet seriously you might wind up in a similar situation.
Re:I'd kill myself, too... (Score:4, Interesting)
It shouldn't matter (to you) if I say something that is offensive, what matters is how you deal with it. You have choices in how you react to it. One of those choices is to ignore it and write it off as "oh, that's just some asshole on the Internet." Another is to become upset about what some anonymous asshole on the Internet who didn't know your friend has said. It is your choice.
Who am I to you? Nobody. Why should anything I say at all have any impact on you if you don't want it to?
For example, you may consider my stance of "you can only control yourself" as lame-ass, and attempt to insult me by insinuating that I live in the past, but I can choose to react negatively to that (ie: "waaah, my fewwings are hurted") or I can read between the lines and see that you're just angry about someone making a joke about your departed friend and not take offense -- just like I would do "in real life."
Re: (Score:2)
You are correct, which is why I, personally, think it's important that we consciously try to overcome that natural reaction as much as we are capable. Save your anger for when it can do the most good, otherwise it's wasted effort. I'm well aware that ignoring those that offend you is a goal to be achieved, and is certainly not something you can do at the flip of a switch, but it's impor