MIT & Harvard On Brain-Inspired A.I. Vision 27

An anonymous reader writes with this excerpt from TGDaily: "Researchers from Harvard and MIT have demonstrated a way to build better artificial visual systems with the help of low-cost, high-performance gaming hardware. [A video describing their research is available.] 'Reverse engineering a biological visual system — a system with hundreds of millions of processing units — and building an artificial system that works the same way is a daunting task,' says David Cox, Principal Investigator of the Visual Neuroscience Group at the Rowland Institute at Harvard. 'It is not enough to simply assemble together a huge amount of computing power. We have to figure out how to put all the parts together so that they can do what our brains can do.' The team drew inspiration from screening techniques in molecular biology, where a multitude of candidate organisms or compounds are screened in parallel to find those that have a particular property of interest. Rather than building a single model and seeing how well it could recognize visual objects, the team constructed thousands of candidate models, and screened for those that performed best on an object recognition task. The resulting models outperformed a crop of state-of-the-art computer vision systems across a range of test sets, more accurately identifying a range of objects on random natural backgrounds with variation in position, scale, and rotation. Using ordinary CPUs, the effort would have required either years or millions of dollars of computing hardware. Instead, by harnessing modern graphics hardware, the analysis was done in just one week, and at a small fraction of the cost."
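The screening strategy described in the summary can be sketched in a few lines; the parameter names and the scoring function below are purely illustrative stand-ins, not the authors' actual models or code:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def random_model_params():
    # Hypothetical: draw one candidate model's parameters at random
    return {
        "n_filters": random.choice([16, 32, 64]),
        "pool_size": random.choice([2, 3, 5]),
        "threshold": random.uniform(0.0, 1.0),
    }

def recognition_accuracy(params):
    # Stand-in for scoring a candidate on an object recognition task;
    # the real work evaluates each model on labeled natural images.
    return 1.0 - abs(params["threshold"] - 0.5)  # toy fitness surface

# Screen thousands of candidates and keep the best performer,
# as the article describes.
candidates = [random_model_params() for _ in range(5000)]
best = max(candidates, key=recognition_accuracy)
```

On graphics hardware, it is the per-candidate evaluation that gets parallelized; the screening logic itself stays this simple.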
This discussion has been archived. No new comments can be posted.
Comments Filter:
  • by Narpak ( 961733 ) on Saturday December 05, 2009 @11:50AM (#30335314)

    Instead, by harnessing modern graphics hardware, the analysis was done in just one week, and at a small fraction of the cost.

    How inconsiderate. Think about all the potential engineers, administrators, janitors and etc, that would have been needed to do all that work the slow way; thus creating jobs for many for years to come. With one swoop all that potential future effort was made redundant, once again "researchers" have proven that they are unable to see the big picture!

  • One of these guys will read about GAs, realize this has been done before in other problem spaces, and find that it already has a name.

    Try not to get stuck on a local maximum!

    • They didn't really use a GA. They had a genome that described the structure of the neural net they wanted to test, but they didn't "evolve" the population through any process of mutation or crossover. They just kept generating new random individuals until they had a good one.

      It's like only doing the first step of a GA, but you keep generating random starting points until you find one whose fitness is fairly high (although they did a uniform sampling over all parameter values for their starting point, not qu

      • Re: (Score:3, Informative)

        by linhares ( 1241614 )
        Ok, just skimmed the paper [ploscompbiol.org]. First impressions: it's a good idea. The problem I see is that, after finding a great model, they have absolutely no clue as to why that one works. That is, a functional theory isn't improved by this kind of work; though it is indeed promising and the theory can be improved if someone can understand what the heck that model is doing.
  • Low hardware (Score:3, Interesting)

    by gmuslera ( 3436 ) on Saturday December 05, 2009 @12:43PM (#30335756) Homepage Journal
    Eyes and raw brain power could be considered somewhat "low" technology, but you need to be smart to implement a pattern recognition engine (and its integration with existing data) like the one the brain has. Consider that you can have "vision" with something far less precise than eyes (e.g. this [seeingwithsound.com] and similar low-res devices).

    How much power does that pattern recognition require? With standard approaches, probably a lot, but the approach they seem to use here (comparing how well thousands of candidate models fit the task) could require less, and far less still if you use hardware better suited to that task.

    • What I think we're going to find is that one type of system won't be sufficient for us to create an AI. Early work was done on symbolic systems. Those eventually worked pretty well on idealized domains. They fell apart when they tried to interact with the real world. Neural nets are starting to handle simple real-world tasks, but can't handle complex domains.

      My thought is that we'll see something like this:

      Audio/Visual/Etc. Input --> Neural Net-based symbol extractor --> Symbolic Planning and Decision S

  • by jjh37997 ( 456473 ) on Saturday December 05, 2009 @01:51PM (#30336460) Homepage

    The team drew inspiration from screening techniques in molecular biology, where a multitude of candidate organisms or compounds are screened in parallel to find those that have a particular property of interest. Rather than building a single model and seeing how well it could recognize visual objects, the team constructed thousands of candidate models, and screened for those that performed best on an object recognition task.

    Without reading the article, because that would be silly, this sounds a lot like using genetic algorithms. Not actually a new technique.

    • As I posted in a reply above, they didn't really use a GA. There were no mutations or crossover. They just kept generating new random individuals until they had a good one.

      • I personally have been doing this as well as the same thing with mutations for 6 years in an artificial life/neural net simulation. And I'm just a hobbyist (many researchers have and are doing all kinds of this type of stuff). It's definitely a powerful technique and fun to read about their success, but hardly new.
    • First of all, the authors have chosen the wrong metaphor. HTS is about sending a large number of different organisms through the same assay, looking for candidate targets that produce a positive response to that one test. The folks at Rowland are doing the inverse. They're sending a large number of assays at a single target (many candidate algorithm variations (assays) seeking to recognize an image or a specific pattern therein).

      Second, they're generating candidate algorithms, not just algorithm paramete

      • Third, the kind of algorithms they describe sound more like neural nets with a large combinatoric variation of units, layers, feedback components, and sundry other possible variants. These seem to me to be both a biologically plausible mechanism for a biologically inspired vision model and a viable candidate for pattern recognition across a wide range of vision targets. A GA-based mechanism would be too symbolic and boolean to be biologically plausible, and too rigid in matching specific input patterns.

        Actually, using GAs (an optimization method) to train a neural network (an optimization problem) is a fairly common technique [google.com].
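As a concrete (and heavily simplified) illustration of that common technique, here is a GA evolving the weights of a tiny fixed-topology network on XOR; the network size, mutation scale, and generation count are arbitrary choices for the sketch, not from any cited work:

```python
import math
import random

random.seed(0)

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR table

def forward(w, x):
    # Tiny 2-2-1 network: w holds 9 weights (2x2 hidden weights,
    # 2 hidden biases, 2 output weights, 1 output bias)
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return 1.0 / (1.0 + math.exp(-(w[6] * h0 + w[7] * h1 + w[8])))

def fitness(w):
    # Negative squared error over the XOR table (higher is better)
    return -sum((forward(w, x) - y) ** 2 for x, y in DATA)

def mutate(w):
    return [wi + random.gauss(0.0, 0.3) for wi in w]

pop = [[random.uniform(-2.0, 2.0) for _ in range(9)] for _ in range(100)]
baseline = max(fitness(w) for w in pop)  # best random initial genome

for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(80)]

best = max(pop, key=fitness)
```

Here the GA optimizes the weights while the topology stays fixed; the approach in the article instead varies the architecture itself and screens the candidates.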

  • But does it work? (Score:1, Insightful)

    by Anonymous Coward

    It seems to me that they are just using random functions to see what works best. But what they neglect to say is how well their best functions do. What percentage of identifications is correct? The assumption is that the brain uses some mathematical function for its processing, which may not be the case.

  • Combinatorial chemistry [wikipedia.org] techniques seem to be the inspiration here. Also known as trial and error (albeit in a rapid well-organized fashion). Not exactly a new idea, but this is an interesting new implementation.
  • Who knows if Nvidia OR AMD (think: Fusion) has been funding this research?
    That part about magic cost reduction sounds a little funny.
    Also, I think it should be said that a decent single-core graphics card might cost $150 vs. one of those $300 PS3s that are full-featured, power-wise, with the Cell processor (not to mention a stream-processor-carrying Nvidia/ATI video card).
