
Recognizing Scenes Like the Brain Does

Roland Piquepaille writes "Researchers at the MIT McGovern Institute for Brain Research have used a biological model to train a computer model to recognize objects, such as cars or people, in busy street scenes. Their innovative approach, which combines neuroscience and artificial intelligence with computer science, mimics how the brain functions to recognize objects in the real world. This versatile model could one day be used for automobile driver assistance, visual search engines, biomedical imaging analysis, or robots with realistic vision. Here is the researchers' paper in PDF format."
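For context, the model described in the paper is a feedforward hierarchy that alternates template matching with local max pooling, loosely mirroring the simple and complex cells of early visual cortex. Below is a minimal sketch of those first two stages in Python with NumPy/SciPy; the filter sizes, orientations, and pooling window are illustrative choices, not the parameters from the paper.

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(size=11, wavelength=5.0, theta=0.0, sigma=3.0):
        # Oriented Gabor filter: the standard model of a V1 simple cell.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        carrier = np.cos(2 * np.pi * xr / wavelength)
        return envelope * carrier

    def s1_layer(image, orientations=4):
        # "Simple cell" stage: convolve with a bank of oriented filters.
        thetas = [k * np.pi / orientations for k in range(orientations)]
        return [np.abs(convolve2d(image, gabor_kernel(theta=t), mode='same'))
                for t in thetas]

    def c1_layer(s1_maps, pool=8):
        # "Complex cell" stage: local max pooling buys position tolerance.
        pooled = []
        for m in s1_maps:
            h, w = (m.shape[0] // pool) * pool, (m.shape[1] // pool) * pool
            blocks = m[:h, :w].reshape(h // pool, pool, w // pool, pool)
            pooled.append(blocks.max(axis=(1, 3)))
        return pooled

    image = np.random.rand(64, 64)        # stand-in for a street scene
    features = c1_layer(s1_layer(image))  # 4 orientation maps, 8x8 each

The full model stacks further template-matching and pooling stages on top and feeds the result to a trained classifier.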
  • by Anonymous Coward on Sunday February 11, 2007 @05:40PM (#17975714)

    and seeing the spam for what it is

Oh, and here is the PDF:
    http://cbcl.mit.edu/projects/cbcl/publications/ps/serre-wolf-poggio-PAMI-07.pdf [mit.edu]

Not that Roland would even understand what it says; he just reads press releases via RSS, copies the summary, and hits submit.
    We appreciate that the editor removed his spammy link to ZDNet (no wonder they are losing cash),
    but is Slashdot so short of good stories that they have to choose a known plagiariser's articles and actively edit them over the hundreds of original submissions they get daily?

I would have chosen to read Digg instead, but that is even worse: full of credit-card scams, made-for-AdSense blogs, and millions of MLM bloggers all hawking their referral links and real-estate blogs, hoping people will click on their crappy asbestos and insurance links.

Sheesh, can't a geek get some decent news for a change? Obviously not. Internet2, anybody?

  • by The Living Fractal ( 162153 ) <banantarr@hot m a i l.com> on Sunday February 11, 2007 @05:46PM (#17975748) Homepage
I understand the reasoning behind modeling these systems on our own highly evolved (OK, maybe not in some people) biological systems. What I want to see, however, is something capable of learning and improving its own ability to learn. If our intelligent systems are always evolution-limited by the progress of our own biological systems, then I can't see how A.I. smarter than a human will ever be achieved. But if we are able to give these systems our own abilities as a starting point and then watch them somehow create something more intelligent than we are... then we really have something. Whether or not what we have is good at that point I can't say, though there are many people and communities in the world working on making sure this post-human intelligence doesn't essentially destroy us. Foresight, for example.

    I'm not knocking the MIT research, I think it's amazing. It just seems to me like imitation rather than imagination. Granted, highly evolved and complicated imitation. But does it even have the abilities of a parrot?

    TLF
  • nothing new (Score:4, Insightful)

    by Anonymous Coward on Sunday February 11, 2007 @06:01PM (#17975862)
After scanning this paper: their model does not extend the state of the art in cognitive modeling. Others have produced much more comprehensive and much more biologically accurate models. There's no retinal ganglion contrast enhancement, no opponent color in the LGN (or color at all), no complex cells, no magno/parvocellular pathways, no cortical magnification, no addressing of the aperture problem (they seem to treat the scene as a sequence of snapshots, while the brain... does not), and the object recognition is not biologically inspired. Some visual system processes can be explained with feedforward-only mechanisms, but not all of them can.
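    For readers without the vision-science background: the "retinal ganglion contrast enhancement" mentioned above is conventionally modeled as a center-surround difference of Gaussians. A minimal sketch, with illustrative sigma values:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def center_surround(image, sigma_center=1.0, sigma_surround=3.0):
            # Difference of Gaussians: the textbook receptive field of a
            # retinal ganglion cell. Responds to local contrast and
            # suppresses uniformly lit regions.
            return (gaussian_filter(image, sigma_center)
                    - gaussian_filter(image, sigma_surround))

        image = np.random.rand(64, 64)
        response = center_surround(image)  # ~0 on flat areas, large at edges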
  • by the grace of R'hllor ( 530051 ) on Sunday February 11, 2007 @06:05PM (#17975888)
Of course it's imitation. So are machine learning and machine procreation. What makes you think we're currently limited by our biological capabilities? We're biologically almost identical to cavemen, but where they smeared charcoal-and-spit animal paintings on walls, we now land probes on Mars. We're on a roll.

Give machines our own capabilities? We can't even have them move about in a reliable fashion, so what makes you think we're even *close* to endowing machinery with creativity and abstract thought at human levels? Or even parrot levels, since you mention it? There are many hurdles to be cleared before we can consider creating an AI that has a practical chance of surviving to do anything useful, and machine vision (and the processes involved in making it robust) is critically important.
  • by S3D ( 745318 ) on Sunday February 11, 2007 @06:08PM (#17975908)
Gabor wavelets, neural networks, hierarchical classifiers in some semi-new combination: there are dozens of image-recognition papers like this every month. Why is this particular paper special?
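    For what it's worth, the combination described above usually bottoms out in an off-the-shelf classifier trained on the pooled features. A sketch with scikit-learn, where the features and labels are random placeholders rather than real data:

        import numpy as np
        from sklearn.svm import LinearSVC

        # Placeholder data: each row stands in for a flattened feature map,
        # each label for whether the patch contains the target object.
        rng = np.random.default_rng(0)
        X = rng.random((200, 256))        # 200 patches, 256 features each
        y = rng.integers(0, 2, size=200)  # 1 = object present, 0 = absent

        clf = LinearSVC().fit(X, y)       # the "hierarchical classifier" step
        print(clf.predict(X[:5]))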
  • by suv4x4 ( 956391 ) on Sunday February 11, 2007 @06:42PM (#17976166)
If our intelligent systems are always evolution-limited by the progress of our own biological systems, then I can't see how A.I. smarter than a human will ever be achieved.

You know, this is pretty misleading, so you can't take any blame for thinking so. Lots of people also think that we're "a hundred years smarter" than those living in the 1900s, just because we were lucky to be born into a higher culture.

But think about it: what is our entire culture and science, if not ultra-sped-up evolution? We make mistakes, tons of mistakes, as human beings, but compared to completely random mutations we have a supreme advantage over evolution in the signal-to-noise ratio of the resulting product.

Can we ever surpass our own complexity in what we create? But of course. Take a look at any moderately complex software product. I won't argue it's more complex than our brain, but consider something else: can you grasp and assess the scope of effort and complexity in, say (something trivial to us), Windows running on a PC, as one single product? Not just what's on the surface, but comprehending at once every little detail, from applications, dialogs, controls, drivers, and kernel down to the processor microcode.

I'll tell you what: even the programmers of Windows and the engineers at Intel can't.

Our brain works in an "OOP" fashion, simplifying huge chunks of complexity into a higher-level "overview" so we can think about it at a different scale. In fact, lots of mental disorders, like autism or obsessive-compulsive disorder, revolve around losing the ability to "see the big picture" or to concentrate on a detail of it at will.

    Together, we break immensely complex tasks into much smaller, manageable tasks, and build upon the discoveries and effort we made yesterday. This way, although we still work on tiny pieces of a huge mind-bogglingly complex puzzle, our brain can cope with the task properly. There aren't any limits.

While I'm not sure we'll see completely self-evolving AI in the next 100 years, I know that developing highly complex sentient AI with only elements of self-learning is well within the ability of our scientists. Small step by small step.
  • by Xemu ( 50595 ) on Sunday February 11, 2007 @07:00PM (#17976294) Homepage
    Each of us can almost always look at a scene and determine the difference between a jogger and a purse thief on the run or a businessman late for an appointment.

Actually, we can't; we just base this recognition on stereotypes. A well-known Swedish criminal called "the Laser Man" exploited this in the early '90s when robbing banks. He would rob the bank, change into the clothes of a businessman or a jogger, and then escape the scene. The police would more often than not let him pass because they were looking for an "escaping robber", not a "businessman taking a slow-paced walk".

The police eventually caught on and caught the guy. Computers would of course have even greater difficulty thinking "outside the box".

  • by rm999 ( 775449 ) on Sunday February 11, 2007 @07:24PM (#17976488)
    Creating "biologically inspired" models of AI is by no means a new topic of research. From what I can tell, most of these algorithms work by stringing together specialized algorithms and mathematical functions that are, at best, loosely related to the way the brain works (at a high level). By contrast, the brain is a huge, complicated, connectionist network (neurons connected together).

That isn't my real problem with this algorithm and the hundreds of similar ones that have come before it. What bothers me is that they don't really get at the *way* the brain works. It's a top-down approach, which looks at the *behavior* of the brain and then tries to emulate it. The problem with this technique is that it may miss important details by glossing over anything that isn't immediately obvious in the specific problem being tackled (in this case, vision). This system can analyze images, but can it also do sound? In a real brain, research indicates that you can remap sensory inputs to different parts of the brain and the brain will learn them.

    I'm still interested in this algorithm and would like to play around with the code (if it's available), but I am skeptical of the approach in general.
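    To make the contrast concrete: a connectionist model is just units and learned weights, with nothing vision-specific baked in, which is why remapping it to sound is at least conceivable. A minimal two-layer network in NumPy, with hypothetical dimensions:

        import numpy as np

        rng = np.random.default_rng(0)
        W1 = rng.normal(0, 0.1, (256, 64))  # input -> hidden weights
        W2 = rng.normal(0, 0.1, (64, 2))    # hidden -> output weights

        def forward(x):
            # Nothing here knows about pixels: feed it spectrogram columns
            # instead of image patches and the same machinery applies.
            h = np.maximum(0, x @ W1)       # hidden activations (ReLU)
            return h @ W2                   # class scores

        x = rng.random(256)                 # pixels today, audio tomorrow
        scores = forward(x)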
  • by Maxo-Texas ( 864189 ) on Sunday February 11, 2007 @07:42PM (#17976620)
    It's going to change everything.

    Robotic vision is a tipping point.

    A large number of humans become unemployable shortly after this becomes a reality.

In the first world, any job that a human holds only because they can see is done for.

Why should you pay $7.25 an hour (really $9.25 with benefits and overhead for workers' comp, unemployment tax, etc.) when you can buy a $12,000 machine to do the same job (stocking grocery shelves, cleaning, painting, etc.)?

The leading edge is here already, with things like Roombas.
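    The payback arithmetic behind those numbers, assuming full-time hours and ignoring the machine's maintenance and downtime:

        hourly_cost = 9.25        # wage plus benefits/overhead, as above
        hours_per_year = 40 * 52  # full-time, no vacation: 2080 hours
        machine_price = 12000.0

        annual_labor = hourly_cost * hours_per_year  # $19,240/year
        payback_months = machine_price / annual_labor * 12
        print(f"machine pays for itself in {payback_months:.1f} months")
        # -> roughly 7.5 months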
  • I think you overestimate the human ego. I don't know about you, but I'd be perfectly happy to give up the mundane task of driving to an intelligent machine if it can do it better than I can. That frees me up to read the paper on the drive to work, or countless other more useful things I could be doing if I didn't have to constantly keep my eyes on the road.

    I do agree with you on one point, but not for the reason you do: the problem of control. If there's any reason that an intelligent driving system wouldn't take off it would be because there isn't a human in control, so who gets blamed when something does go wrong? How would insurance companies handle this? Do our rates go down because we now have a machine in control that does a better job than we do? Do our rates go up if somehow there is an accident, even though it wasn't due to human error? Will people even accept an artificially intelligent driving machine if it has a less than completely, 100% reliable and error free record?

My gut reaction tells me probably not, because when something goes wrong, people look for someone to blame. If you can't blame the driver, do we blame the company that makes the IDS? If someone dies in an accident involving one of these systems, do we hold the company liable, even if the system reduces overall auto fatalities by, say, 90%? 95%? What level of imperfection are people prepared to accept? Is there ANY level that would be acceptable once you take control out of the hands of humans, whom we know and accept to be imperfect and therefore don't expect perfection from?
