Cutting-Edge AI Projects?
Xeth writes "I'm a consultant with DARPA, and I'm working on an initiative to push the boundaries of neuromorphic computing (i.e. artificial intelligence). The project is designed to advance ideas on all fronts, including measuring and understanding biological brains, creating AI systems, and investigating the fundamental nature of intelligence. I'm conducting a wide search of these fields, but I wanted to know if anyone in this community knows of neat projects along those lines that I might otherwise overlook. Maybe you're working on a project like that and want to talk it up? No promises (seriously), but interesting work will be brought to the attention of the project manager I'm working with. If you want to start up a dialog, send me an email, and we'll see where it goes. I'll also be reading the comments for the story."
An obvious one. (Score:5, Informative)
Numenta [numenta.com]
It's mainly a learning system that maps inputs to outputs. I don't see anything built with it answering rational questions or coming up with new ideas anytime soon, but if you do AI and don't know about them, you'd better catch up.
OpenCog (Score:5, Informative)
http://www.opencog.org/wiki/Main_Page [opencog.org]
http://www.agiri.org/OpenCog_AGI-08.pdf [agiri.org]
http://justingibbs.com/how-to-make-singularity-bearable-in-its-infancy [justingibbs.com]
http://www.innergybv.biz/blog/?p=175 [innergybv.biz]
http://ieet.org/index.php/IEET/more/goertzel20080620/#When:22:49:00Z [ieet.org]
http://xlaurent.blogspot.com/2008/06/opensim-for-opencog.html [blogspot.com]
There are also a number of GSoC projects for OpenCog currently underway:
http://code.google.com/soc/2008/siai/about.html [google.com]
So the first release should be very interesting.
Experience Based Language Acquisition (Score:2, Informative)
http://sourceforge.net/projects/ebla [sourceforge.net]
http://acl.ldc.upenn.edu/W/W03/W03-0607.pdf [upenn.edu]
Blue Brain (Score:3, Informative)
Take a look at the project http://bluebrain.epfl.ch/ [bluebrain.epfl.ch]
Re:An obvious one. (Score:5, Informative)
I think the Deep Belief Networks of Hinton et al. are way ahead of Numenta, in that they are real science with measurable results that have been reproduced by multiple implementations. The 2006 paper that started it all, and Hinton's presentation on Google Video:
http://www.gatsby.ucl.ac.uk/~ywteh/research/ebm/nc2006.pdf [ucl.ac.uk]
http://video.google.com.au/videoplay?docid=228784531481853811 [google.com.au]
A formal analysis:
http://www.cs.utoronto.ca/~ilya/pubs/2007/inf_deep_net_utml.pdf [utoronto.ca]
Application to natural language processing:
http://www.cs.swarthmore.edu/~meeden/cs81/s08/DahlLaTouche.pdf [swarthmore.edu]
http://www.machinelearning.org/proceedings/icml2007/papers/425.pdf [machinelearning.org]
Reproducing Hinton and extension to and evaluation in other domains:
http://www.machinelearning.org/proceedings/icml2007/papers/331.pdf [machinelearning.org]
Use in Computer animation of facial expressions:
http://aclab.ca/users/josh/downloads/pubs/23_Susskind_Hinton_Movellan_Anderson.pdf [aclab.ca]
Most impressive:
http://www.cs.utoronto.ca/~ilya/pubs/2007/aistats_multilayered.pdf [utoronto.ca]
A C++ implementation (although it has much Python love):
http://plearn.berlios.de/ [berlios.de]
So yeah, there are some pretty good demonstrations of how powerful DBNs are... Numenta is lagging behind.
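For anyone who hasn't seen the mechanics, the greedy layer-wise training in Hinton's 2006 paper bottoms out in training individual RBMs with contrastive divergence. Here's a toy numpy sketch of CD-1 on a single RBM; the layer sizes, learning rate, and epoch count are my own illustrative choices, not values from the paper:

```python
# Toy restricted Boltzmann machine trained with one-step contrastive
# divergence (CD-1), the building block of a Deep Belief Network.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden=8, lr=0.2, epochs=500):
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        # Positive phase: hidden activations driven by the data.
        p_h = sigmoid(data @ W + b_h)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # Negative phase: one Gibbs step back to a reconstruction.
        p_v = sigmoid(h @ W.T + b_v)
        p_h_recon = sigmoid(p_v @ W + b_h)
        # CD-1 gradient approximation: data stats minus model stats.
        W += lr * (data.T @ p_h - p_v.T @ p_h_recon) / len(data)
        b_v += lr * (data - p_v).mean(axis=0)
        b_h += lr * (p_h - p_h_recon).mean(axis=0)
    return W, b_v, b_h

# Two repeated binary patterns; the RBM should reconstruct them well.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 10, dtype=float)
W, b_v, b_h = train_rbm(data)
recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
print(np.round(recon[:2], 1))
```

A full DBN then stacks several of these, using each trained layer's hidden activations as the next layer's "data".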
A few leading groups (Score:1, Informative)
This is an area with lots of crackpots, but also lots of really interesting stuff.
How do you tell the good stuff from the crackpot?
The good ones are published in top machine learning, computer vision, robotics, and AI conferences and journals. The crackpot stuff doesn't survive peer review.
Here are a few good examples:
- Geoff Hinton (U. Toronto): http://www.cs.toronto.edu/~hinton/ [toronto.edu]
- Yoshua Bengio (U. Montreal): http://www.iro.umontreal.ca/~bengioy/ [umontreal.ca]
- Yann LeCun (NYU): http://www.cs.nyu.edu/~yann/index.html [nyu.edu]
- Andrew Ng (Stanford): http://ai.stanford.edu/~ang/ [stanford.edu]
- Sebastian Seung (MIT): http://hebb.mit.edu/people/seung/ [mit.edu]
- David Lowe (U British Columbia): http://www.cs.ubc.ca/~lowe/ [cs.ubc.ca]
Re:Fundamental research? (Score:3, Informative)
What about cognitive science [wikipedia.org]? Using logic isn't cognition - "you're not thinking, you're using logic"
Re:OpenCog (Score:3, Informative)
There are three main components to OpenCog:
* The architecture
* The Probabilistic Logic Network algorithm
* The competent genetic algorithms system (MOSES)
and it is openly admitted that this isn't enough... some sub-symbolic algorithms will be needed to complement all this "neat" (though probabilistically weighted) work.
And then hanging on the side is all the Natural Language Processing stuff, and the embodied simulation stuff, both of which are meant to generate interesting input for OpenCog to process.
To date, the PLN algorithm hasn't been made public, although that could change in the next month. The MOSES stuff is public, but not integrated yet. The Natural Language Processing stuff is written in Java (the rest of the framework is in C++), and this won't change, as the XML interfaces between the two are sufficiently powerful that there is just no need.
Probably not until September 2008 or later will all these parts start coming together, and it'll likely be well into next year before something "impressive" can be demoed without outright cheating. Then, hopefully, things will take off. But who knows; we'll have to wait and see, and optimistic predictions do more harm than good.
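Since the PLN code itself isn't public, here's only an illustrative sketch: the independence-based deduction rule described in Goertzel's PLN writings, which estimates the strength of A->C from the strengths of A->B and B->C plus the term probabilities. The function name and the example numbers are mine, not anything from OpenCog's actual API:

```python
# Sketch of a PLN-style deduction rule under an independence
# assumption: estimate P(C|A) from P(B|A), P(C|B), P(B), P(C).
def deduction(s_ab, s_bc, s_b, s_c):
    """Strength of A->C given A->B (s_ab), B->C (s_bc),
    and the node probabilities P(B)=s_b, P(C)=s_c."""
    if s_b >= 1.0:
        # Degenerate case: B is certain, so A->C inherits B->C.
        return s_bc
    # First term: the path through B. Second term: the chance of
    # reaching C without going through B, assuming independence.
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

# e.g. A->B at 0.9, B->C at 0.95, with P(B)=0.2 and P(C)=0.3:
print(deduction(0.9, 0.95, 0.2, 0.3))
```

Real PLN also tracks confidence (how much evidence backs each strength), which this one-number sketch ignores entirely.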
Re:True AI won't happen until... (Score:4, Informative)
As far as the requirement for "free will" in computer systems goes, you've put the cart before the horse: you've assumed that free will must exist for a system to simulate the mind, without ever showing that the mind is anything other than a deterministic system of unbelievable complexity. To presume that it is nondeterministic because you cannot adequately predict its behavior is pretty obviously bad logic.
The human brain does not take advantage of any known large-scale quantum effects, and, so far as we know, does not exploit any of them to produce random behavior. Once again, the inability to demonstrate a pattern is not evidence that a pattern does not exist.
Asynchronous computing does not produce or take advantage of quantum uncertainty. The levels of quantum uncertainty involved are swallowed by the impact of the deterministic systems they are filtered through, and drowned out by the impact of chaotic but deterministic variations in process scheduling, resource locking, and timing conflicts. The same goes for parallel computing, for the same reasons: network latency is a chaotic, not random, phenomenon.
In terms of the use of quantum uncertainty for intelligent systems, there is no doubt that quantum computing holds tremendous promise, but also that its applications are hugely misunderstood. It is not a cure-all for general computing problems, and it particularly does not solve the problem of being insufficiently able to describe your problem.
Bottom line is that chaos != randomness, and unpredicted != unpredictable. What you've got is good philosophy, but does not accurately depict the state of AI or what we know about the systems you are describing.
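The chaos != randomness point is easy to demonstrate. The logistic map at r=4 is a one-line deterministic rule, perfectly reproducible from the same starting point, yet two starting points a millionth apart diverge until their trajectories look unrelated. A minimal sketch (the parameters are my own illustrative choices):

```python
# Deterministic chaos vs. randomness: the logistic map x' = r*x*(1-x)
# at r=4 is fully deterministic, yet sensitive to initial conditions.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # starting point perturbed by 1e-6

# Rerunning from the same start reproduces the trajectory exactly:
assert a == logistic_trajectory(0.200000)
# ...yet the perturbed trajectory wanders far away from the original:
print(max(abs(x - y) for x, y in zip(a, b)))
```

Unpredictable in practice, but not random: exactly the distinction being drawn above.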
Re:Instance created October 12, 2007 (Score:3, Informative)
I doubt he'll comment, but I will: it sounds like bullshit to me, unless you can propose exactly how somebody might interface a neural network to a knowledge-based system. That's substantially more advanced than any ANN system I've encountered so far, and I've looked at some fairly esoteric ANN designs.
Regarding the fundamental nature of intelligence.. (Score:1, Informative)
As has been mentioned above, AI has historically been focused on solving small problems, and has generally had the problem of missing the forest for the trees. Two groups that have been working on full-blown general artificial intelligence are Novamente and the Singularity Institute for Artificial Intelligence.
http://www.novamente.net/
http://www.singinst.org/
Both are working towards an architecture that would allow true sentience to emerge once the system has gained enough experience. While SIAI is focused on theoretical research, raising AI awareness, and fundraising, Novamente is focused on actual implementation, and has been working largely in the Second Life realm to avoid all those nasty robotics and computer vision issues.
Another promising company is Cyc.
http://www.cyc.com/
They started hard-coding facts about life and the universe into a database over 20 years ago, and in recent years have begun to reach the critical threshold where the system knows enough to learn more on its own, and to reason about the knowledge it already has in order to infer new information. It then checks its ideas for accuracy by searching Google.
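To make "reason about the knowledge it already has" concrete, here's a minimal forward-chaining sketch in the spirit of what Cyc does: hand-coded facts, closed under a couple of inference rules until no new facts appear. Cyc's actual engine (CycL, with a far richer logic) is vastly more capable; the predicates and rules here are illustrative only:

```python
# Minimal forward-chaining inference over a hand-coded fact base.
facts = {("isa", "Socrates", "Human"),
         ("subclass", "Human", "Mammal"),
         ("subclass", "Mammal", "Animal")}

def forward_chain(facts):
    """Derive new facts until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (p1, a, b) in facts:
            for (p2, c, d) in facts:
                # Rule 1: subclass is transitive.
                if p1 == p2 == "subclass" and b == c:
                    new.add(("subclass", a, d))
                # Rule 2: isa propagates up the subclass hierarchy.
                if p1 == "isa" and p2 == "subclass" and b == c:
                    new.add(("isa", a, d))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

closure = forward_chain(facts)
print(("isa", "Socrates", "Animal") in closure)  # True
```

The hard part Cyc tackles isn't this loop; it's the twenty-plus years of encoding enough common-sense facts for the inferences to be worth making.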