Cutting-Edge AI Projects?
Xeth writes "I'm a consultant with DARPA, and I'm working on an initiative to push the boundaries of neuromorphic computing (i.e. artificial intelligence). The project is designed to advance ideas on all fronts, including measuring and understanding biological brains, creating AI systems, and investigating the fundamental nature of intelligence. I'm conducting a wide search of these fields, but I wanted to know if anyone in this community knows of neat projects along those lines that I might overlook. Maybe you're working on a project like that and want to talk it up? No promises (seriously), but interesting work will be brought to the attention of the project manager I'm working with. If you want to start up a dialog, send me an email, and we'll see where it goes. I'll also be reading the comments for the story."
Cyberdyne Systems (Score:5, Funny)
Just a small company, I'm sure no-one's noticed it.
Cyberdyne Systems [wikipedia.org]
Fundamental research? (Score:4, Interesting)
Hello,
I'm studying theoretical computer science, which means it's often just called math (things like complexity theory, lambda calculus, even linear logic...).
I've always loved AI, but I was often told that there is no AI research that is that theoretical; that it's more a collection of applied domains, like learning with neural networks or computer vision.
So, what is the most theoretical aspect of AI research that you know of? Or, put otherwise, is there a branch of AI research where you prove theorems rather than writing code?
I know it's slightly off topic, but people working on that kind of thing are probably wondering whether they should mention it here (i.e. whether it interests DARPA or not).
Re:Fundamental research? (Score:5, Informative)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
What about cognitive science [wikipedia.org]? Using logic isn't cognition: "you're not thinking, you're using logic."
Re: (Score:2, Interesting)
This is an interesting post. As far as I knew, things like logic are in some way part of AI. One of my PhD supervisors is a monster in logics, and he has published in the AI Journal [Elsev.] (one of the most prestigious journals for AI).
I think a lot of research on agents is related to logic, including but not limited to CTL, ATL, and first-order logic, combined with game theory.
What I would suggest is looking at academia, mostly in Europe (UK, Netherlands [they are good at theoretical research], Germany). If yo
I think he's a buzzword consultant (Score:2)
Re: (Score:2)
Re:I think he's a buzzword consultant (Score:5, Interesting)
Possibly: what exactly the research on self-awareness/sentient learning systems comprises for you guys. The reason I bring this up is that I left AI as a main research field because even the most flexible research goals set by academia today are just tiny steps of applicative advances in statistical inference and mathematical techniques in general. Not that these things are not useful (machine learning is quite amazing, actually), but the initial goals have little to do with all this. We have surpassed brains in many ways, but the massively parallel brain does not even "learn", let alone think (which is what we want), in the manner on which any AI system today is based. And yes, that includes adaptive ones that change their own set of heuristics.
I spent a lot of my time thinking about neuroscience and reading psychology, and while I slowly moved towards rationalizing certain things, the main obstacles to what I needed to know were deeply biological. How exactly does the mind "light up" certain areas of memory while recalling things (sounds, sights, etc.) stored nearby? How "randomized" is this? And how can it be represented mathematically (on a von Neumann architecture)? Is there ANYTHING we do that is not entirely memory-based (even arithmetic seems to stem from adaptive memory)? Why do we need to sleep, and what part of the whole "algorithm", if there is one, is dependent on this sleeping business? What exactly does it change in our reasoning?
If we had precise, good answers to these, rather than the guesswork in literally all major textbooks, we could begin to model something useful and perhaps introduce much more powerful intelligence with all we have developed in NNs, probabilistic systems, etc. I think once sentience is conquered, human beings are going to be immediately made inferior. It's just this abstraction business that is so damn complicated.
Re: (Score:2)
If you want to have an argument about Integrity in the Sciences, that is to be found next door at http://science.slashdot.org/article.pl?sid=08/06/23/2157214 [slashdot.org]
Now begone, Anodized Cowherd!
It it just me? (Score:4, Insightful)
Why is it that the first application I can think of for such a project developed by DARPA is to use it against the citizens?
Re:It it just me? (Score:5, Interesting)
Give me a break (Score:5, Interesting)
Why is it that the first application I can think of for such a project developed by DARPA is to use it against the citizens?
Like it or lump it, you are in this boat with everyone else. If AI is solved, it will be used for good and evil. If your country does not use it for evil (extremely doubtful), somebody else's country will. Better yours than theirs. What I mean is that true AI will be an extremely powerful thing; if any country other than yours gets an early monopoly on AI, you can bet they are going to use it to kick your country's ass. I don't think you'd like that very much.
Having said that (and to get back on topic), I have been working on a general AI project called Animal [rebelscience.org] for some time. Animal is biologically inspired. It attempts to use a multi-layer spiking neural network to learn how to play chess from scratch using sensors, effectors, and a motivation mechanism based on reward and punishment. It is based on the premise that intelligence is essentially a temporal signal-processing phenomenon [rebelscience.org]. I just need some funding. The caveat is that my ideas are out there big time, and there is a bunch of people in cyberspace who think I am a kook. LOL. But hey, send me some money anyway. You never know. :-D
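Not the Animal code itself, but for readers wondering what a "spiking neural network with reward and punishment" boils down to, here is a minimal sketch of a leaky integrate-and-fire neuron with crude reward-modulated plasticity (all parameter names and values are illustrative assumptions, not the project's actual design):

import numpy as np

def simulate_lif(inputs, weights, reward=0.0, tau=20.0, v_thresh=1.0,
                 v_reset=0.0, dt=1.0, learn_rate=0.01):
    """Leaky integrate-and-fire neuron driven by binary input spike trains.

    inputs  -- array of shape (timesteps, n_synapses), 0/1 spikes
    weights -- array of shape (n_synapses,), synaptic strengths
    reward  -- scalar reward/punishment signal that scales weight updates
    """
    v = v_reset
    spikes = []
    for x in inputs:
        # Leak toward rest, then integrate the weighted input spikes.
        v += dt * (-v / tau) + np.dot(weights, x)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
            # Crude reward-modulated plasticity: strengthen synapses that
            # were active when the neuron fired, scaled by the reward signal.
            weights += learn_rate * reward * x
        else:
            spikes.append(0)
    return np.array(spikes), weights

# Example: 100 timesteps, 5 noisy input channels, mild positive reward.
rng = np.random.default_rng(0)
inp = (rng.random((100, 5)) < 0.2).astype(float)
out, w = simulate_lif(inp, weights=np.full(5, 0.3), reward=1.0)
print(out.sum(), "spikes;", w)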
Humanity's Problem (Score:2, Insightful)
Yes, that is the precise kind of thinking that demonstrates why mankind deserves to be wiped off the planet. Man, what a wonderful place the universe would be without this single species.
Re:Humanity's Problem (Score:5, Insightful)
Re: (Score:2)
I can think of one country [wikipedia.org] that wouldn't kick the US's ass (in a military confrontation) or anyone else's if its citizens solved the Strong AI problem.
Re: (Score:2)
I see this line of thinking as very dangerous. Many European countries, including the one I live in, are certainly able to obtain nuclear weapons, but simply refuse to do so. So, what if "bad guys" (say, terrorists) set off a nuclear bomb here? First, society is reasonably certain our military is not *humiliating* civilians in other countries by randomly killing them (especially children) or
Re: (Score:2)
Well, good luck. But I can see why they think you're a kook. This AI stuff is a bit too close to "free energy" and "cold fusion" for my liking; decades of research and no progress to speak of, wild promises of what *might* happen if all the problems were magically solved, and an army of AI believers who won't let go of the dream. That's not to say you can't do it, just that lots of smart people have already tried and failed, which is generally a bad sign.
But AI has already succeeded in some ways. It's contr
Re: (Score:2)
Louis, could you send me an email please, j@ww.com
thanks !
Re: (Score:2)
Re: (Score:3, Insightful)
Where did you get the idea that humans will veto unjust orders? You might want to read up about the Milgram experiment, or maybe just consider how the Holocaust happened.
I assure you, the Nazis didn't manage to put together an army of thoroughly evil people -- the vast majority of the Nazis were perfectly ordinary human beings receiving evil orders. We like to think we're different, but that's an incredibly dangerous opinion. It's much better to accept the fact that we are human, and that humans are over
Yea, lots (Score:5, Funny)
Yea, I have lots of ideas and things I've been working on.
Fund me! :-)
DARPA turn to Slashdot? (Score:5, Funny)
Re:DARPA turn to Slashdot? (Score:4, Funny)
"If DARPA is now so desperate as to seek out totally random and unknown readers of slashdot...my god the US is screwed."
Don't be an idiot; genius and smarts reside in unexpected places. I'm sure many slashdotters have come across some very smart people.
Re: (Score:2)
I'm sure many slashdotters have come across some very smart people.
We have, but not on slashdot.
Re: (Score:2)
-Stor
Re: (Score:2)
Yes. Next thing you know they will be running competitions with million dollar prizes.
DUDE! Sorry... (Score:4, Funny)
Games (Score:2, Insightful)
Re: (Score:2)
Games? It is the best scratch pad for AI experiments.
Bollocks. Very few games have AI that's even approximately interesting. The most advanced stuff that's commonly used is stuff like algorithms for navigating around a map and obstacle avoidance that were basically mastered by the robotics community in the late 80s and early 90s.
Show me a game that does something truly novel in terms of AI, and I'll be impressed. I don't see any, though.
Re: (Score:2)
The AI in games is not as free-thinking as you might think.
AIs, for instance, cannot figure out how a map works in a first-person shooter entirely on their own. Bots in older games (or on maps without waypoints) will often walk into a wall, stop, get their bearings, and then move in another direction. I loved watching Foxbots in TFC just stand around on a CTF map, walking in circles around the flag.
In Half Life 2, the enemy AI runs on paths. There are multiple plotted paths, and it follows them and
Whatever you do... (Score:2, Funny)
Re: (Score:2, Interesting)
An obvious one. (Score:5, Informative)
Numenta [numenta.com]
It's mainly a teaching-and-learning framework for a system with input and output. I don't see anything built with it answering rational questions or coming up with new ideas anytime soon, but if you do AI and don't know about them, you'd better catch up.
Re:An obvious one. (Score:5, Informative)
I think the Deep Belief Networks of Hinton et al. are way ahead of Numenta, in that they are real science with measurable results that have been reproduced by multiple implementations. The 2006 paper that started it all and Hinton's presentation on Google Video:
http://www.gatsby.ucl.ac.uk/~ywteh/research/ebm/nc2006.pdf [ucl.ac.uk]
http://video.google.com.au/videoplay?docid=228784531481853811 [google.com.au]
A formal analysis:
http://www.cs.utoronto.ca/~ilya/pubs/2007/inf_deep_net_utml.pdf [utoronto.ca]
Application to natural language processing:
http://www.cs.swarthmore.edu/~meeden/cs81/s08/DahlLaTouche.pdf [swarthmore.edu]
http://www.machinelearning.org/proceedings/icml2007/papers/425.pdf [machinelearning.org]
Reproducing Hinton and extension to and evaluation in other domains:
http://www.machinelearning.org/proceedings/icml2007/papers/331.pdf [machinelearning.org]
Use in Computer animation of facial expressions:
http://aclab.ca/users/josh/downloads/pubs/23_Susskind_Hinton_Movellan_Anderson.pdf [aclab.ca]
Most impressive:
http://www.cs.utoronto.ca/~ilya/pubs/2007/aistats_multilayered.pdf [utoronto.ca]
A C++ implementation (although it has much Python love):
http://plearn.berlios.de/ [berlios.de]
So yeah, there are some pretty good demonstrations of how powerful DBNs are. Numenta is lagging behind.
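For anyone who wants the flavour of what the 2006 paper actually trains: here is a bare-bones sketch of one contrastive-divergence (CD-1) update for a single binary RBM layer, the building block that gets stacked greedily into a DBN. This is an illustration, not the reference code from the papers above.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 update for a binary RBM. v0 is a batch of visible vectors."""
    # Up pass: hidden probabilities and a sample given the data.
    h_prob0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h_prob0.shape) < h_prob0).astype(float)
    # One step of Gibbs sampling: reconstruct the visibles, re-infer hiddens.
    v_prob1 = sigmoid(h0 @ W.T + b_vis)
    h_prob1 = sigmoid(v_prob1 @ W + b_hid)
    # Contrastive-divergence gradient: data statistics minus model statistics.
    W += lr * (v0.T @ h_prob0 - v_prob1.T @ h_prob1) / len(v0)
    b_vis += lr * (v0 - v_prob1).mean(axis=0)
    b_hid += lr * (h_prob0 - h_prob1).mean(axis=0)
    return W, b_vis, b_hid

# Tiny example: 784 visible units (e.g. MNIST pixels), 256 hidden units.
W = rng.normal(0.0, 0.01, (784, 256))
b_v, b_h = np.zeros(784), np.zeros(256)
batch = (rng.random((32, 784)) < 0.1).astype(float)   # fake binary data
W, b_v, b_h = cd1_step(batch, W, b_v, b_h)

Training a DBN then amounts to running many such updates on one layer, freezing it, and using its hidden activations as the "data" for the next layer up.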
Re: (Score:2)
TRBMs have been used in DBNs; look at Learning Multilevel Distributed Representations for High-Dimensional Sequences [utoronto.ca],
Ilya Sutskever and Geoffrey Hinton, AISTATS 2007. ;) But yeah, if you're looking for a job, Numenta is a good place to look. Of course, once you join the company you'll think Numenta's technology is the "one true path" and not even bother looking at the rest of the field
OpenCog (Score:5, Informative)
http://www.opencog.org/wiki/Main_Page [opencog.org]
http://www.agiri.org/OpenCog_AGI-08.pdf [agiri.org]
http://justingibbs.com/how-to-make-singularity-bearable-in-its-infancy [justingibbs.com]
http://www.innergybv.biz/blog/?p=175 [innergybv.biz]
http://ieet.org/index.php/IEET/more/goertzel20080620/#When:22:49:00Z [ieet.org]
http://xlaurent.blogspot.com/2008/06/opensim-for-opencog.html [blogspot.com]
There's a number of GSoC projects for OpenCog currently underway also:
http://code.google.com/soc/2008/siai/about.html [google.com]
So the first release should be very interesting.
Re: (Score:3, Informative)
All I want is some proof. Is that so hard?
No, and you're right to ask for it. Skepticism is healthy.
There are three main components to OpenCog:
* The architecture
* The Probabilistic Logic Network algorithm
* The competent genetic algorithms system (MOSES)
and it is openly admitted that this isn't enough; some sub-symbolic algorithms will be needed to complement all this "neat" (although probabilistically weighted) work.
And then hanging on the side is all the Natural Language Processing stuff, and the embodied simulation stuff,
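For a taste of what the PLN component does, here is an illustrative sketch of a probabilistic deduction rule over truth-value strengths, in the spirit of PLN but not taken from OpenCog's code; the formula and names are simplified assumptions (real PLN also tracks confidence, not just strength):

def deduce(s_ab, s_bc, s_b, s_c):
    """Simplified PLN-style deduction: given P(B|A), P(C|B), P(B), P(C),
    estimate P(C|A) under an independence assumption."""
    if s_b >= 1.0:
        return s_c  # degenerate case: B is always true
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

# "Birds fly" (0.9), "flying things travel far" (0.8), with base rates for
# "is a bird" (0.2) and "travels far" (0.3):
print(deduce(s_ab=0.9, s_bc=0.8, s_b=0.2, s_c=0.3))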
Re: (Score:2)
Hi (Score:4, Funny)
Re:Hi (Score:4, Funny)
And here I was, thinking that everyone from the Anderson fallout already had a new job...
Applications more than Research.... (Score:2)
The 'Semantic Web' companies that are springing up all over, like Twine, AdaptiveBlue, etc., are the best examples. They seem to be using some basic NLP, classifiers, and statistical models to provide various services on the web. This may not be cutting-edge artificial intelligence research, but
Experience Based Language Acquisition (Score:2, Informative)
http://sourceforge.net/projects/ebla [sourceforge.net]
http://acl.ldc.upenn.edu/W/W03/W03-0607.pdf [upenn.edu]
Funding (Score:2)
It looks like DARPA is trying new methods [slashdot.org] to get some more funding.
Blue Brain (Score:3, Informative)
Take a look at the project http://bluebrain.epfl.ch/ [bluebrain.epfl.ch]
A problem, divided (Score:4, Interesting)
You've got to quit trying to advance on separate fronts. People have been exploring and reinventing the same old niches for sixty years. Little has changed except for the availability of powerful hardware with which to realize these disconnected bits and pieces. What is needed is a way to bring the many different segments of the AI and robotic communities together, because the solution is not to find the "winning approach", but to realize the value of the various perspectives and combine efforts. This is not a new idea, it is an old one which apparently just doesn't fit into the established research environments. Go to the library and read some old books on AI if you really want an appreciation of how pathetic the progress of ideas (not hardware) has been. To whet your appetite try some of Marvin Minsky's old papers - http://web.media.mit.edu/~minsky [mit.edu] He recognized this situation nearly 40 years ago.
Basics..... (Score:2)
Why doesn't the Government start working on making Congress work?
Oh, wait..... They already are robots.
Re: (Score:2)
Why do you think he needs better AI? They aren't currently acting very human; a lot of us don't believe they are.
Dear Friend (Score:5, Funny)
Dear Friend,
Compliment of the day to you and your entire family how are you today? Hope all is well with you I hope this email meets you in a perfect condition. I am using this opportunity to thank you inform you that I have come upon a large repository of AI source code left to me by my brother, Prince Abdullah of Nigeria.
It is my desire to transfer this source of of my home country to a place where it will be safe, and I wish your association in this business matter. I've been recommended to you by Mr. Smith of New York. I would like to transfer the source to your FTP server as an escrow service. In recompense, I will offer you 10% of the code, which is LoC 150,000,000.
To complete this transaction which will be beneficial to both of us, please contact my secretary with the following information:
The name and contact address of MY SECRETARY is as follows below.
MR.Brwon Adebayor
14 Island Street Lagos Nigeria
E-MAIL brwonadebayor@yahoo.com
TEL +2348083322221
In the moment, I am very busy here in Paraguay because of the investment projects which myself and my new partner are having at hand IN PARAGUAY.Finally, remember that I have forwarded instruction to my SECRETARY MR.Brwon Adebayor, his E-mail, (brwonadebayor@yahoo.com) to assist you on your behalf to send the source code to you as soon as you contact him.
Please I will like you to accept this grant offer with good faith as this is from the bottom of my heart. You should contact my secretary for the claim of you'r 10% which i willingly offer to you immediately you receive this mail, Presently I am in Paraguay.
pls make sure that you inform me as soon as you collect the bank draft so that we can share the joy together. Thanks and God bless you and your family.
Best Regards,
MR. RICHARD WANG
PRESENTLY IN PARAGUAY
AI should fix mistakes, not make them. (Score:3, Interesting)
For many decades, there has been a push to have an AI that acts just like a human. In other words, it makes rash decisions, based on bad anecdotes and stereotypes, full of mistakes, and then tries to rationalize that everything was planned with intelligence.
AI should understand the failings of human intelligence and fix them. For example, I have the sad job of normalizing health data. Every day, I dread coming into work and going through another million or so prescriptions. Doctors and nurses seem to continually find new ways to screw up what should be a very simple job: What is the name of the medication? What is the dosage? How often should it be taken? When should the prescription start? When should it end? How many refills/extensions on the prescription are allowed before a new prescription must be written? Instead of something reasonable like: "Coreg 20mg. Every evening. 2008-06-10 to 2008-07-10. 5 Refills." -- I get: "Correk 20qd. 10/6/08x5." It seems to me that some form of AI could learn how stupid humans are and easily make sense of the garbage. Of course, there's no reason the AI couldn't replace the doctor and write the prescriptions itself in a very nice normalized form.
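As a concrete illustration (hypothetical drug list and field names, nowhere near production health-data code), even simple fuzzy matching against a formulary plus a table of Latin sig abbreviations recovers a lot from garbage like the example above:

import difflib
import re

DRUGS = ["Coreg", "Coumadin", "Cozaar"]          # tiny illustrative formulary
SIG = {"qd": "every day", "bid": "twice a day",  # common Latin sig codes
       "tid": "three times a day", "qhs": "every evening"}

def normalize(rx):
    """Best-effort normalization of a messy prescription string."""
    # Pull out a dose like "20mg" or the bare "20" in "20qd".
    dose = re.search(r"(\d+)\s*(mg)?", rx)
    # Pull out a frequency abbreviation.
    freq = next((v for k, v in SIG.items() if re.search(k, rx, re.I)), None)
    # Fuzzy-match the leading token against the drug dictionary.
    name_token = re.split(r"[\s\d]", rx.strip())[0]
    name = difflib.get_close_matches(name_token, DRUGS, n=1, cutoff=0.6)
    return {
        "drug": name[0] if name else None,
        "dose_mg": int(dose.group(1)) if dose else None,
        "frequency": freq,
    }

print(normalize("Correk 20qd. 10/6/08x5."))
# {'drug': 'Coreg', 'dose_mg': 20, 'frequency': 'every day'}

A real system would obviously need a full drug database, date parsing, and a human in the loop, but the point stands: most of this "AI" is pattern cleanup.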
Re: (Score:2)
That being said, standardizing human behavior is possible. The easiest way is to set up a standard form for prescriptions with nice fields for name, type, dosage, etc., and then add a nice 'other' field just in case. You know, just li
Re: (Score:2)
Sure, it'll be cool describing your symptoms to a robot Doc one day when you get a sore throat, and having it drum up some perfectly scripted scrip that actually fixes your throat, but what happens when the robot Doc gets a sore throat?
And it will.
I know we're only talking about the kind of AI that can answer phone calls
Re: (Score:2)
A system that performs set tasks isn't necessarily intelligent nor does it necessarily require intelligence. Or to put it another way, just because humans perform some tasks doesn't mean that task requires intelligence to perform.
Intelligence is defined by the likes of free thought, the ability to learn from mistakes, come up with ideas, adapt fluidly to changing situations and so on. If it's not capable of making mistakes it's not capable of learning how to deal with the unpredictable.
What you're after isn
No. (Score:3, Insightful)
Re: (Score:2)
I agree. But at the same time I understand why they're asking. The current administration seems to have depleted its supply of natural intelligence.
Re: (Score:2)
Not so much depleted as driven away. However, this administration is coming to an end soon. Maybe then.
You need better computer vision (Score:5, Interesting)
Basically, I say that the better the computer vision you make, the better the software you can write for advanced bots leading up to AI. I see AI as something we'll naturally get to even if no one makes an effort at it: our 3D cards are getting better, video games are making better 3D worlds, memory is getting bigger, and computer speeds are getting faster. Even if you couldn't hold an AI in a current computer's memory, you have wireless internet that links up with a supercomputer to make thin-client bots. So there really isn't anything in current technology that is holding us back except computer vision.
Now I am not so good in the computer vision field, but as I see it (excuse the pun), there are two ways to do vision.
1) Exact matching. You model an object in 3D via CAD, Pixar-style modelling, or Video Trace [acvt.com.au]. First you database all the objects that your AI will see in its environment, then you make a program that identifies objects it "sees" with computer cameras and laser range-finding devices. So then the AI can reconstruct its environment in its head. Then the AI can perceive doing actions on the objects.
I'm currently not in the loop here. I can't talk to anyone at Video Trace because I'm just a person, and they don't want to let me in on their software. So I can't database my desk. So I can't make the program that would identify things.
2) Even better than exact matching is similar matching. No two people look alike besides twins, so you can't really just database a person and say that is a human. And as humans go, there are different categories, such as male and female, and some are androgynous, so we can't tell their sex. Similar matching has a lot of potential in its ability to detect things like trees and rocks. Similar matching is good for an environment that is tougher to put into exact-matching situations. So just from this information alone, I wouldn't start on similar matching unless you had exact matching working in a closed environment. I'm not saying that some smart individual couldn't come up with similar matching before exact matching. I'm just saying that for myself, I'd start with exact matching and then extend it with similar matching. There are a lot of clues you can pick up on if you know the exact locations of things.
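To make the two approaches concrete, here is roughly how they look with OpenCV's Python bindings (a sketch; the image filenames are placeholders, and you would already need reference images of the objects you databased):

import cv2

# 1) "Exact" matching: slide a known template over the scene and look for a
#    strong correlation peak. Works only if the object appears at roughly the
#    same scale and orientation as the databased template.
scene = cv2.imread("desk_scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("stapler_template.png", cv2.IMREAD_GRAYSCALE)
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
if max_val > 0.8:
    print("exact match at", max_loc)

# 2) "Similar" matching: detect local features and match them, which tolerates
#    changes in viewpoint, scale and lighting (e.g. different chairs still
#    share chair-like feature constellations).
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(scene, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "feature matches; best distance:", matches[0].distance)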
And then once you have single-viewpoint vision working, you can add multi-point vision. Multi-point vision would mean that if you had more robotic eyes on a scene, you'd gain more detail about it. You could even get as advanced as conflict resolution when one robotic eye thinks it sees something, but another thinks it is something different. The easiest way to think of a good application for this would be if you had a robotic car driving behind a normal semi truck and another robotic car in front of the semi. The robotic car in the back can't see past the semi to guess traffic conditions or when the semi will slow down, but the car in front of the truck can see well, so they can signal information to each other that would let the car behind the semi truck follow closer. If you get enough eyes out there, you could really start to put together a big virtual map of the world to track people.
I wouldn't say AI that learns like humans is desirable. After all, you'd have to code in trusting algorithms to know who to listen to. I'd say AI that downloads its knowledge from a reliable source is the way to go. It is easy to see: Sit in class for years until you learn a skill, or download it all at once like Neo on training seat.
Anyway, you can do a lot with robots that have good computer vision. The thing that has to be done next is natural language understanding. So far we've discussed the AI viewing a snapshot of a scene and being able to identify the objects. Next you'll have to introduce verbs and movement.
Re: (Score:2)
You wouldn't happen to be a student of the late Prof. Sheldon Klein, would you?
http://pages.cs.wisc.edu/~sklein/sklein.html [wisc.edu]
[In one of his classes, we tried (in a very rudimentary way) to give computers a "3D imagination space" by extracting spatial information from natural language and displaying it in a virtual reality environment. (We could visualize sentences such as "the chair is behind the table"). There was also much discussion of visual/spatial metaphors that humans use to understand abstract sen
Re: (Score:2)
Better visual recognition and understanding of natural language would certainly aid us in producing better systems, but I'm not convinced they'll allow us to simply create intelligent robots.
The search for an intelligent machine requires more than this; it's not just about sensing your environment, of which vision is just one facet (blind people, for example, are still intelligent).
I've had a quick read of your website, and whilst interesting, I'm not sure that it's entirely correct; I think it overs
Implement the research! (Score:4, Interesting)
I recently threw together a prototype for my company using OpenCV. That OpenCV exists for this sort of thing is a godsend. One of our interns recently completed a UI research project that also relied on OpenCV.
But one of the problems I had while doing it was that whenever I searched for more documentation about the algorithms I was trying to write, all I could find were either papers describing how some researcher's system was better than mine, or some magic MATLAB code that worked on a small set of test images. There were no solid implementations written in C for any of these systems.
I would love to dick around for weeks implementing all these research papers and then evaluating their results and real-world performance, but I don't think my boss or my company's shareholders would enjoy that. As at every company, resources are limited for anything that isn't making money.
With that said, the best way to further AI research, particularly in the highly marketable fields of machine learning and computer vision (but probably others as well), is to add implementations of cutting edge research to existing BSD-licensed libraries like OpenCV for companies to evaluate. If products that use that research become profitable, private companies are likely to throw a lot more money and researchers at the problem, all competing to one-up the other.
If you think I'm being unrealistic, you should check out the realtime face detection that recent Canon cameras use for autofocus. Once upon a time, object recognition was considered a cutting-edge AI problem.
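That in-camera face detection is essentially the Viola-Jones cascade approach, and a pretrained cascade already ships with OpenCV; a minimal sketch using the modern Python bindings (the input filename is a placeholder):

import cv2

# Load the pretrained frontal-face Haar cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                          # draw a box per detection
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_marked.jpg", img)
print(len(faces), "faces found")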
I'm working on my Ph.D. in AI (Score:4, Interesting)
Let's see.... what I'm working on....
Pure pareto multiobjective genetic algorithms (just submitted a paper to IEEE TEVC)
Hinge-loss function discriminative training of neural nets as classifiers
Computer vision as a KNOWLEDGE problem (i.e. not just mostly signal processing and statistics)
Persistent surveillance (entity tracking)
Sensor asset allocation (using a GA)
Various things involving abductive inference
http://www.cse.ohio-state.edu/~millerti/ [ohio-state.edu]
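For anyone unfamiliar with the jargon in that list: the core primitive of a pure Pareto multiobjective GA is Pareto dominance and extraction of the non-dominated front, roughly like this toy sketch (illustrative only, not the submitted TEVC work):

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Two objectives to minimize, e.g. (cost, error):
pop = [(1, 9), (2, 7), (3, 8), (4, 3), (5, 4), (6, 1)]
print(pareto_front(pop))   # [(1, 9), (2, 7), (4, 3), (6, 1)]

A full algorithm like NSGA-II adds non-dominated sorting into ranked fronts plus a crowding-distance measure to keep the front spread out, but dominance is the piece that makes it "pure Pareto" rather than a weighted sum of objectives.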
Re: (Score:3)
Damn it! And I thought I was clever because I'm automating creation of Xen instances for a MySQL cluster. Oh well, at least it pays better than graduate work.
Do it in hardware (Score:2)
I hear that AI is only about 15 years away, so you could try just waiting until then. Unfortunately, that estimate hasn't changed for 30 years.
Given the slow progress in AI research, I think a radical approach is in order. I doubt that we'll see any breakthroughs from a small crew of programmers with quad cores and c++.
The human brain is a massively parallel, self-reconfiguring network of nodes. How far have we come in building any sort of scalable technology that can operate in such a manner? I know there
Matrix Logic (Score:4, Interesting)
Instance created October 12, 2007 (Score:2)
Xeth,
What truth do you know of the following statement?
CENNS stands for Core Engine Neural Network System, and started as a research consolidation project under DARPA's Intelligent Systems and Software program in 1995. It was a joint effort with the RAND institute to leverage all A.I. research in the past 50 years under a single initiative.
Project SUR paved the way for systems HARPY and HEARSAY-I, then abandoned until 1984, under the Strategic Computing Program. HEARSAY-II introduced the concept of a comm
Re: (Score:3, Informative)
I doubt he'll comment. But I will: it sounds like bullshit to me. Unless you can propose how exactly somebody might interface a neural network to a knowledge-based system. That's substantially more advanced than any ANN system I've encountered so far, and I've looked at some fairly esoteric ANN designs.
Cutting Edge AI?!? (Score:5, Interesting)
The smart knife. Cutting-edge AI at its finest. Prospective investors, feel free to contact me @ bill_AT_ultimatesalsaparty_DOT_COM
Platas (Score:2)
A neural processor. (Score:2, Interesting)
I am actually working on a neural processor. It is primarily a platform for developing neural applications, as opposed to an application itself, similar to how a database provides middleware functionality. It is temporarily coined Neurox.
Neurox is subdivided into two parts:
Firstly, a database where neurons have position and are allowed to move or create new connections (plasticity) in a more permanent manner; this can be a slower process. And secondly, a processing node, or cluster of nodes, where a slice o
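Not Neurox itself, just a guess at what the "database" half described above might look like: neurons with spatial positions whose connections are rewired slowly over time. Every class and field name here is made up for illustration.

import math
import random

class Neuron:
    def __init__(self, x, y, z):
        self.pos = (x, y, z)
        self.out = {}          # target neuron id -> synaptic weight

class NeuronStore:
    """Toy 'slow' store: holds neuron positions and rewires connections."""
    def __init__(self):
        self.neurons = {}

    def add(self, nid, x, y, z):
        self.neurons[nid] = Neuron(x, y, z)

    def rewire(self, radius=1.0, p_new=0.1):
        """Occasionally grow a connection to a random nearby neuron."""
        for nid, n in self.neurons.items():
            for mid, m in self.neurons.items():
                if mid == nid or mid in n.out:
                    continue
                if math.dist(n.pos, m.pos) < radius and random.random() < p_new:
                    n.out[mid] = random.uniform(0.0, 0.5)

store = NeuronStore()
for i in range(20):
    store.add(i, random.random(), random.random(), random.random())
store.rewire()
print(sum(len(n.out) for n in store.neurons.values()), "connections grown")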
Experimental test of Ockham's Razor (Score:2)
Peter Turney (whose programs have achieved human level performance on the SAT verbal analogy test) and I have been discussing an experimental test of Ockham's Razor in AI. This is a question that is both fundamentally important and experimentally tractable.
I recommend you read our discussion [wordpress.com] of an experiment to test Ockham's Razor (and related theories such as MDL, algorithmic probability...).
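For the curious, the MDL flavour of Ockham's Razor is easy to operationalize crudely: prefer the hypothesis whose model description plus encoded residuals is shortest. A toy sketch, with zlib standing in for a proper codelength measure (this is an illustration, not the experiment from the linked discussion):

import pickle
import zlib

import numpy as np

def description_length(coeffs, residuals):
    """Two-part code: bits for the model plus bits for the data given the model."""
    model_bits = 8 * len(zlib.compress(pickle.dumps(np.round(coeffs, 3))))
    data_bits = 8 * len(zlib.compress(np.round(residuals, 3).tobytes()))
    return model_bits + data_bits

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, x.size)   # truly linear data

for degree in (1, 9):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    print(degree, description_length(coeffs, residuals))
# The degree-1 fit should win: the degree-9 model buys almost no residual
# compression but pays for a longer model description.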
Cyc corp (Score:2)
Also, dear DARPA official, don't you think that an AI researcher could have ethical reservations about working with the US Army? I'm not trying to troll here; this story is already tagged 'skynet'. Don't you think that many AI researchers are very worried about the mix of milit
My own AI project (Score:2)
A few resources I haven't seen mentioned (Score:3, Interesting)
I have had an interest in AI over the years and have found Gerald Edelman's books particularly insightful.
See:
_Neural Darwinism_ (ISBN 0-19-286089-5)
_Bright Air Brilliant Fire: On the Matter of the Mind_ (ISBN 0-465-00764-3)
The ideas in these books might be outdated by now, but I doubt it. I think the works of Norbert Wiener are still relevant.
I particularly liked the NEAT project, however crude it may be. I like the changing neural topology via genetic evolution concept and think this is consistent with what Edelman tells us really happens in biology.
See: http://www.cs.ucf.edu/~kstanley/neat.html
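For readers who haven't run into it: NEAT's distinctive trick is mutating the network topology itself, not just the weights. Here is a toy version of its two structural mutations; this is not Stanley's actual code, and real NEAT also tags each gene with an innovation number to make crossover work.

import random

# A genome is a list of connection genes: (src, dst, weight, enabled).
genome = [(0, 2, 0.5, True), (1, 2, -0.3, True)]
next_node = 3   # ids 0-1 are inputs, 2 is the output in this toy genome

def add_connection(genome, nodes):
    """Structural mutation 1: connect two previously unconnected nodes."""
    src, dst = random.sample(nodes, 2)
    if not any(c[0] == src and c[1] == dst for c in genome):
        genome.append((src, dst, random.uniform(-1, 1), True))

def add_node(genome):
    """Structural mutation 2: split an existing connection with a new node,
    disabling the old gene and adding two new ones (as NEAT does)."""
    global next_node
    i = random.randrange(len(genome))
    src, dst, w, _ = genome[i]
    genome[i] = (src, dst, w, False)            # disable the split connection
    genome.append((src, next_node, 1.0, True))  # in-link gets weight 1.0
    genome.append((next_node, dst, w, True))    # out-link keeps old weight
    next_node += 1

add_node(genome)
add_connection(genome, nodes=[0, 1, 2, 3])
print(genome)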
My other suggestion is to define the many different scopes of the AI. For some, it seems the bar has been placed at natural language processing and full-on human cognition. Without the frame of reference and body of experience of a human though, this seems to be an unrealistic goal. I just don't think we can "program" a computer to do it. To pull it off, this would seem to require duplicating the nervous system of a human to enough of a degree that the AI can experience sensory input compatible with our shared human experience. Think about how many years it takes for a human to reach the level of intelligence we are seeking in AI. I don't think there are any overnight solutions here. We need to teach it like a baby, child, adolescent, and adult. While we may be able to speed train an AI, it may be that there is something to the lack of interesting input that enables us to reflect and refine our mental models of the world. The AI must also continue to interact with the human world in order to stay current.
But AI doesn't have to match a human. There are much simpler organisms we can model as a start that may pay off in other ways. Nature seems to excel at reusing novel patterns, and we should exploit that code/model library. The AI produced from this research may not be able to hold a conversation, but it can probably keep an autonomous robot alive and on its mission, whatever that may be. And I think it's a better foundation for the eventual human equivalent and beyond.
For some possible hardware platforms, see:
http://www.neurotechnology.neu.edu/
http://biorobots.cwru.edu/
http://birg.epfl.ch/
DARPA ? (Score:2)
Umm, this guy works for DARPA and he's asking for help on here? Something is amiss.
Been Working on Just Such a Project for years (Score:3, Interesting)
Awfully broad goals: (Score:2)
"understanding biological brains, creating AI systems, and investigating the fundamental nature of intelligence."
Maybe begin with a bit of background on the complexity of shotgunning the task: some Hofstadter, maybe some Dennett, maybe something like John Pollock's "How to Build a Person: A Prolegomenon".
Then define, in the sense of a formal systems analysis, the #1 task DARPA would have an AI system perform in 5-10 years and then specialize and concentrate and specialize and concentrate some more in resea
Re:Dear Slashdot... (Score:5, Interesting)
As if I didn't see that coming? I think my UID says I've been here awhile.
It's not that I'm asking Slashdot to do my work for me; I've already got some very strong leads to work on. However, Slashdot occasionally surprises me with people that are thoughtful and working in interesting fields, so I figured I'd give it a shot. Most of the changes in my life have come from sudden and unexpected directions; I wanted to see what serendipity might bring me that deliberation would not.
Re:Dear Slashdot... (Score:4, Funny)
Re: (Score:3, Funny)
Re: (Score:2, Interesting)
Re:Dear Slashdot... (Score:4, Funny)
Get off my lawn ?
Re:Dear Slashdot... (Score:4, Funny)
Get off my lawn ?
Re: (Score:2)
You're going "old timer" with a 6-digit UID?
Won't be long now until a real old timer comes along and tells you to get off their lawn.
Re:Dear Slashdot... (Score:5, Funny)
Why back in my day we had to post questions to the legs of carrier pigeons. Gosh darn it! We liked it that way!
Re:Dear Slashdot... (Score:5, Funny)
Paper and pigeons! What'll be next? A magical tablet that translates your handwaving into images of the Wonders of the World? Pah!
In my day, we had to write the questions using cuneiform script on a damp clay tablet, pack it in an envelope of clay, and then deliver it personally to the priesthood.
/Crafack
Re: (Score:3, Funny)
I'll just post this and wait for one of the 3 digits that only stick around for these threads to show up.
Re: (Score:2)
There are already plenty of nonhuman intelligences around (see your local pet store). And how we handle them is not that great.
I personally am not sure if creation of AI will be a big benefit to humans in the long term. Perhaps augmentation of humans or animals would be more useful.
Given it's DARPA, examples of augmentation
Re: (Score:2)
One of my theories is brains predict possible futures (by modelling reality in parallel), and consciousness is what happens when a brain recursively tries to simulate and predict itself.
Wouldn't that be an application of Bayesian Estimation [wikipedia.org]?
You may enjoy this book [mit.edu] if you haven't already.
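In that spirit, the predict/update loop of a simple discrete Bayes filter is a compact way to see "prediction by modelling" in code (an illustration of the linked idea, not a model of consciousness; the states and numbers are made up):

import numpy as np

# Two hidden states, e.g. "prey nearby" vs "no prey".
transition = np.array([[0.8, 0.2],    # P(next state | current state)
                       [0.3, 0.7]])
likelihood = np.array([[0.9, 0.2],    # P(observation "rustle" | state)
                       [0.1, 0.8]])   # P(observation "quiet"  | state)

belief = np.array([0.5, 0.5])         # prior over hidden states

for obs in [0, 0, 1, 0]:              # 0 = rustle, 1 = quiet
    belief = transition.T @ belief            # predict: push belief forward
    belief = likelihood[obs] * belief         # update: weight by evidence
    belief /= belief.sum()                    # renormalize
    print(obs, belief.round(3))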
Re: (Score:2)
Re:Bad grammar day (Score:5, Funny)
Hopefully whatever your researching ...
Re: (Score:2)
"Also, who reads the comments?"
Well... you, apparently. And here I thought nobody ever RTFA, let alone RTFC.
Re: (Score:2)
Re: (Score:3, Funny)
But should I help out DARPA? I don't think so. Someone else can help you kill people, poison the environment, and support the growing neo-con empire.
Yeah, they can go to Hell and take their fucking internet with them.
Re: (Score:2)
Re: (Score:2)
Shit, forgot to log in before posting. There. Oh, and for all of you young folk that think your ID is low, no, it isn't. Mine isn't even low. I forgot the login credentials for my first acct (in the 100s). Damn.
Re: (Score:2)
Er, my post was the "True AI won't happen until...". Seems I'm not too intelligent, myself, sometimes.
Re:True AI won't happen until... (Score:4, Informative)
As far as the requirement for "free will" in computer systems, you've put the cart before the horse and assumed that free will must exist for a system to simulate the mind, without ever proving that the mind is anything other than a deterministic system of unbelievable complexity. To presume that it is nondeterministic because you cannot adequately predict its behavior is pretty obviously bad logic.
The human brain does not take advantage of any known large-scale quantum effects, and, so far as we know, does not exploit any of them to produce random behavior. Once again, the inability to demonstrate a pattern is not evidence that a pattern does not exist.
Asynchronous computing does not produce or take advantage of quantum uncertainty. The levels of quantum uncertainty involved are swallowed by the impact of the deterministic systems they are filtered through, and drowned out by the impact of chaotic but deterministic variations in process scheduling, resource locking, and timing conflicts. The same goes for parallel computing for the same reasons- network latency is a chaotic, not random, phenomenon.
In terms of the use of quantum uncertainty for intelligent systems, there is no doubt that quantum computing holds tremendous promise, but also that its applications are hugely misunderstood. It is not a cure-all for general computing problems, and it particularly does not solve the problem of being insufficiently able to describe your problem.
Bottom line is that chaos != randomness, and unpredicted != unpredictable. What you've got is good philosophy, but it does not accurately depict the state of AI or what we know about the systems you are describing.
Re: (Score:2)
Chaos == deterministic but not predictable. I claim that determinism inherently rules out free will. I assume the human mind has free will, and that free will is a prerequisite for true intelligence (not merely simulating intelligence). That assumption is a leap, and I fully understand and accept that. But, if you accept that assumption, a chaotic system cannot have true intelligence.
Asynchronous computing, using current models, does not produce or take advantage of quantum uncertainty. But I posit that they
Re: (Score:2)
Re: (Score:2)