MIT & Harvard On Brain-Inspired A.I. Vision
An anonymous reader writes with this excerpt from TGDaily:
"Researchers from Harvard and MIT have demonstrated a way to build better artificial visual systems with the help of low-cost, high-performance gaming hardware. [A video describing their research is available.] 'Reverse engineering a biological visual system — a system with hundreds of millions of processing units — and building an artificial system that works the same way is a daunting task,' says David Cox, Principal Investigator of the Visual Neuroscience Group at the Rowland Institute at Harvard. 'It is not enough to simply assemble together a huge amount of computing power. We have to figure out how to put all the parts together so that they can do what our brains can do.' The team drew inspiration from screening techniques in molecular biology, where a multitude of candidate organisms or compounds are screened in parallel to find those that have a particular property of interest. Rather than building a single model and seeing how well it could recognize visual objects, the team constructed thousands of candidate models, and screened for those that performed best on an object recognition task. The resulting models outperformed a crop of state-of-the-art computer vision systems across a range of test sets, more accurately identifying a range of objects on random natural backgrounds with variation in position, scale, and rotation. Using ordinary CPUs, the effort would have required either years or millions of dollars of computing hardware. Instead, by harnessing modern graphics hardware, the analysis was done in just one week, and at a small fraction of the cost."
Inconsiderate (Score:4, Funny)
Instead, by harnessing modern graphics hardware, the analysis was done in just one week, and at a small fraction of the cost.
How inconsiderate. Think about all the potential engineers, administrators, janitors, etc., who would have been needed to do all that work the slow way, thus creating jobs for many for years to come. With one fell swoop all that potential future effort was made redundant; once again "researchers" have proven that they are unable to see the big picture!
Real soon now (Score:1)
One of these guys will read about GAs and realize this has been done before in other problem spaces and already has a name.
Try not to get stuck on a local maximum!
Not Really a GA (Score:1)
They didn't really use a GA. They had a genome that described the structure of the neural net they wanted to test, but they didn't "evolve" the population through any process of mutation or crossover. They just kept generating new random individuals until they had a good one.
It's like only doing the first step of a GA, but you keep generating random starting points until you find one whose fitness is fairly high (although they did a uniform sampling over all parameter values for their starting point, not qu
Low hardware (Score:3, Interesting)
How much power does that pattern recognition require? With standard approaches, probably a lot, but the approach they seem to use (comparing how well thousands of candidate models fit the task) could require less, and it works far better if you run it on hardware better suited to that task.
Re: (Score:1)
What I think we're going to find is that one type of system won't be sufficient for us to create an AI. Early work was done on symbolic systems. Those eventually worked pretty well in idealized domains. They fell apart when they tried to interact with the real world. Neural nets are starting to handle simple real-world tasks, but can't handle complex domains.
My thought is that we'll see something like this:
Audio/Visual/Etc. Input --> Neural Net-based symbol extractor --> Symbolic Planning and Decision S
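Sketched as code, that kind of hybrid pipeline would look roughly like this (purely illustrative stubs, not an existing system):

```python
from typing import Any, List

def extract_symbols(raw_input: Any) -> List[str]:
    """Neural-net-based front end: turns raw audio/visual input into
    discrete symbols, e.g. ['cup', 'on', 'table']. Stub for illustration."""
    raise NotImplementedError

def plan_and_decide(symbols: List[str]) -> List[str]:
    """Symbolic back end: planning and decision-making over the extracted
    symbols, returning a sequence of actions. Stub for illustration."""
    raise NotImplementedError

def agent_step(raw_input: Any) -> List[str]:
    """The pipeline described above: perception feeds symbols into a planner."""
    return plan_and_decide(extract_symbols(raw_input))
```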
Genetic algorithms? (Score:3, Insightful)
The team drew inspiration from screening techniques in molecular biology, where a multitude of candidate organisms or compounds are screened in parallel to find those that have a particular property of interest. Rather than building a single model and seeing how well it could recognize visual objects, the team constructed thousands of candidate models, and screened for those that performed best on an object recognition task.
Without reading the article, because that would be silly, this sounds a lot like using genetic algorithms. Not actually a new technique.
Not really a GA (Score:1)
As I posted in a reply above, they didn't really use a GA. There were no mutations or crossover. They just kept generating new random individuals until they had a good one.
Still not a new idea (Score:2)
Re: (Score:1)
First of all, the authors have chosen the wrong metaphor. HTS is about sending a large number of different organisms through the same assay, looking for candidate targets that produce a positive response to that one test. The folks at Rowland are doing the inverse. They're sending a large number of assays at a single target (many candidate algorithm variations (assays) seeking to recognize an image or a specific pattern therein).
Second, they're generating candidate algorithms, not just algorithm paramete
Re: (Score:1)
Third, the kind of algorithms they describe sounds more like neural nets with a large combinatoric variation of units, layers, feedback components, and sundry other possible variants. These seem to me to be both a biologically plausible mechanism for a biologically inspired vision model and a viable candidate for pattern recognition across a wide range of vision targets. A GA-based mechanism would be too symbolic and boolean to be biologically plausible, and too rigid in matching specific input patterns.
Actually, using GAs (an optimization method) to train a neural network (an optimization problem) is a fairly common technique [google.com].
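For the curious, a toy sketch of that idea: a GA with crossover and mutation optimizing the weight vector of some fixed small network. All names and parameters below are invented; `fitness` would score a network built from the genome, e.g. its accuracy on a validation set.

```python
import random

def make_genome(n_weights):
    """A genome here is just a flat list of network weights."""
    return [random.uniform(-1.0, 1.0) for _ in range(n_weights)]

def crossover(a, b):
    """Single-point crossover of two weight vectors."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.05, scale=0.1):
    """Perturb a few weights with small Gaussian noise."""
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in genome]

def evolve(fitness, n_weights, pop_size=50, generations=100):
    """Evolve a population of weight vectors toward higher fitness."""
    population = [make_genome(n_weights) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]          # selection
        offspring = [mutate(crossover(random.choice(survivors),
                                      random.choice(survivors)))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)
```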
But does it work? (Score:1, Insightful)
It seems to me that they are just using random functions to see which works best. But what they neglect to say is how well their best functions actually do, i.e. what percentage of identifications is correct. The assumption is that the brain uses some mathematical function for its processing, which may not be the case.
Combinatorial Chemistry (Score:1)
huh (Score:1)
That part about magic cost reduction sounds a little funny.
Also, I think it should be said that a decent single-core graphics card might cost $150, versus one of those fully featured $300 PS3s with the Cell processor (not to mention a stream-processor-carrying Nvidia/ATI video card).