Facebook Thinks Occlusion Is the Next Great Frontier For Image Recognition
An anonymous reader writes: Researchers at Facebook AI Research (FAIR) have published a paper contending that image recognition research is now advanced enough to consider the problem of occlusion, wherein the objects AI must identify are either partially cropped or partially hidden. Their solution is the predictably labor-expensive route of human annotation of existing image-set databases, in this case 'finishing off' occluded objects with vector outlines and assigning them a z-order. This article looks at the practical and even philosophical problems of getting IR algorithms to 'guess' objects usefully, and asks whether practical IR research might not be currently limited both by the use of over-specific image datasets and — in the field of neural networks — by problems of theory and limited 'local' processing power in critical real-time situations.
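For concreteness, an annotation of the kind the paper describes (a visible outline, an annotator-completed 'amodal' outline, plus a z-order) might be stored roughly like the sketch below. The field names and layout are illustrative guesses, not FAIR's actual schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AmodalAnnotation:
    """One annotated object: the visible outline, the human-completed
    ('finished off') full outline, and a z-order for depth layering.
    Field names are illustrative, not FAIR's actual schema."""
    label: str                                   # e.g. "car"
    visible_polygon: List[Tuple[float, float]]   # outline of the visible pixels
    amodal_polygon: List[Tuple[float, float]]    # annotator's guess at the full outline
    z_order: int                                 # 0 = closest to camera; larger = further back

# A partially hidden car behind a person, as two annotations:
example = [
    AmodalAnnotation("person", [(10, 5), (20, 5), (20, 60), (10, 60)],
                     [(10, 5), (20, 5), (20, 60), (10, 60)], z_order=0),
    AmodalAnnotation("car", [(25, 30), (90, 30), (90, 60), (25, 60)],
                     [(5, 30), (90, 30), (90, 60), (5, 60)], z_order=1),
]
```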
Re: (Score:3)
Facebook does conduct research into AI. They need such technology to more effectively mine their vast database for advertising information.
Occlusion handling is the difference between 'subject identified as Joe Bloggs' and 'subject identified as Joe Bloggs wearing Adidas trainers and posing in front of a Skoda. Increase targeting of well-known fashion brands, decrease targeting for automotive products.'
Re: (Score:2)
But...
Will it be susceptible to optical illusions?
Re: (Score:3)
It will have to be. The ability to figure out what you're looking at with incomplete information is exactly what leads to optical illusions; you can't really have one without the other.
Re: (Score:2)
Thank you, I am starting to work on countermeasures right away so I can keep my private life private. I'll arrange it so it thinks I am some politician, a giraffe, an SUV or something else. There are all kinds of illusionist shows on TV, so it shouldn't be that hard.
Re: (Score:2)
Will it be susceptible to optical illusions?
Vision systems based on artificial neural nets are susceptible to many of the same optical illusions as people, and for mostly the same reasons. The basic vertebrate eye has been around for 530 million years. If optical illusions were easy to avoid, nature would have figured out a way to do it by now.
i spy with my little eye (Score:2)
Re: (Score:1)
> Especially black faces
This! I cook a lot and post pictures to Facebook. It can never find my face, but it thinks my stovetop is a face.
Re: (Score:2)
Thanks for the idea. I'm going to go right now and arrange two fried eggs and a strip of bacon in a smiley face and post it as my Facebook profile picture.
Re: (Score:1)
Funny thing is that virtually all AI vision systems have problems with black faces. It isn't human racism or 'machine' racism that is the cause; it's the physics of cameras, optics, and light itself. At least with modern HDR cameras it is a problem we have some hope of beating.
Time should be used in occlusion problem (Score:2)
If a series of images is available and observer or target or intermediate objects are moving, occlusion will vary image to image and the nature of the delta portions should be highly informative for recognition. This requires an object/region re-identification subsystem.
Also, scene context statistics should be used, much as preceding utterances are used in speech recognition. Given that we've already recognized a situation type with this that and the other object-type in it in this (possibly dynamic) relati
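Roughly the idea in code, as a toy sketch; the detection format (box plus class-score vector per frame) and the IoU matching threshold are made up for illustration, not from the paper.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def accumulate_tracks(frames, iou_thresh=0.5):
    """frames: list of per-frame detections, each a list of (box, class_scores)
    pairs. Re-identify regions across frames by box overlap and average their
    class scores, so frames where the occluder has moved contribute the
    missing evidence."""
    tracks = []  # each track: {"box": last seen box, "scores": running list}
    for detections in frames:
        for box, scores in detections:
            match = next((t for t in tracks if iou(t["box"], box) >= iou_thresh), None)
            if match is None:
                tracks.append({"box": box, "scores": [np.asarray(scores)]})
            else:
                match["box"] = box
                match["scores"].append(np.asarray(scores))
    return [(t["box"], np.mean(t["scores"], axis=0)) for t in tracks]
```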
Re: (Score:1)
Doesn't matter how much research they do; this kind of vision will only work as part of a Strong AI. The keyword is 'dynamic processing', and it's pretty difficult even with Strong AI. I know; it's a field I have worked in directly.
Why supervised? (Score:2)
So if regular object recognition is such a solved problem, why do they need people to manually prepare the images? I'd just take a normal image, recognize the objects, and then partially cover some of them to train their algorithm.
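Something like that is essentially Cutout / random-erasing augmentation. A minimal sketch of the "cover part of it and keep the label" step, assuming images come in as H x W x C uint8 arrays; parameters and the grey fill value are arbitrary choices, not anything from the paper.

```python
import numpy as np

def randomly_occlude(image, max_fraction=0.5, rng=None):
    """Paste a random grey rectangle over part of an image so a classifier
    trained on it must cope with partial occlusion. The label stays the
    same -- that's the point: no extra human annotation needed."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    oh = int(h * rng.uniform(0.1, max_fraction))
    ow = int(w * rng.uniform(0.1, max_fraction))
    top = rng.integers(0, h - oh + 1)
    left = rng.integers(0, w - ow + 1)
    out = image.copy()
    out[top:top + oh, left:left + ow] = 128  # flat grey occluder
    return out

# usage: occluded = randomly_occlude(img); train on (occluded, original_label)
```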
makes sense, but don't tell them (Score:2)
That sure makes sense. Don't tell them, though. The inability of image recognition software to handle cropped pictures is one thing my better replacement for CAPTCHA depends on. CAPTCHA sucks because humans aren't much better than computers at recognizing squiggly letters. We are, however, MUCH better at recognizing certain specific types of images when they are cropped and rotated.
Re: (Score:2)
Yes. One can synthetically create cropped images to train CNNs. Then, if you recognize "person standing" on the left side of an image and "front end of commercially relevant automobile" on the right side, you can likely expect that this is a person standing in front of the automobile, unless the template for "junkyard" is also signalling recognition. Then you zero in on which of your friends is standing there, and try to get that friend to recommend to you that you need a new car just like his. Alm
Seems like lots of work ahead still (Score:2)
The use of vector completion and all is a good idea, but it seems systems like that would work better in conjunction with other techniques, like considering the context of the area you are in. What is behind a tall narrow object varies a lot depending on whether you are in a jungle vs. a parking garage...
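One cheap way to use that context: rescore an ambiguous, occluded object with a scene-conditional prior. The sketch below is a toy illustration; the scene categories, class names, and probabilities are invented numbers, not real co-occurrence statistics.

```python
import numpy as np

# Hypothetical numbers: P(object class | scene), e.g. from co-occurrence statistics.
SCENE_PRIOR = {
    "jungle":         {"tree_trunk": 0.7, "concrete_pillar": 0.05, "lamp_post": 0.25},
    "parking_garage": {"tree_trunk": 0.05, "concrete_pillar": 0.8, "lamp_post": 0.15},
}

def rescore_with_scene(class_scores, scene):
    """Multiply the recognizer's raw scores for an ambiguous (occluded)
    object by the scene-conditional prior and renormalise."""
    prior = SCENE_PRIOR[scene]
    combined = {c: class_scores.get(c, 0.0) * p for c, p in prior.items()}
    total = sum(combined.values()) or 1.0
    return {c: v / total for c, v in combined.items()}

# The same ambiguous "tall narrow object" scores, read differently per scene:
raw = {"tree_trunk": 0.4, "concrete_pillar": 0.35, "lamp_post": 0.25}
print(rescore_with_scene(raw, "jungle"))          # tree trunk wins
print(rescore_with_scene(raw, "parking_garage"))  # concrete pillar wins
```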
Re: (Score:2)
Example:
https://i.ytimg.com/vi/7I95IFw... [ytimg.com]
Enhance 34 to 46. (Score:4, Funny)
Pull back. Wait a minute. Go right. Stop.
Enhance 57 to 19. Track 45 left. Stop.
Enhance 15 to 23.
Gimme a hard copy right there.
Re: (Score:2)
Not only did Deckard have access to a plainly ridiculous level of zoom; when panning around, the perspective of the image changes, and objects that were hidden from the original perspective appear. https://www.youtube.com/watch?... [youtube.com] We're left having to assume either that the "enhance" operation can do wonders on an old snapshot, or that it's something more than an old snapshot, a holographic Polaroid of sorts. It would sure make image occlusion an easier problem to solve.
Re: (Score:2)
I figured the device was "looking around the corner" by extrapolating from visible reflections. A human can easily do that given a properly-placed mirror, even a curved or broken one, but a computer might be able to piece it together from distorted fragments around the room — a shiny doorknob here, a beercan there, a metallic light fixture up above. Sort of reverse raytracing?
GA (Score:2)
What about a kind of genetic algorithm to evolve candidate 3D models, where the model that best matches observations and context "wins"? That is computationally intensive, but it is highly parallelizable.
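A bare-bones sketch of that loop, assuming a candidate model is just a parameter vector and the fitness function (render the candidate, compare with the partially occluded observation) is supplied by the caller; the selection and mutation scheme here is the simplest possible one, not a tuned implementation.

```python
import numpy as np

def evolve(fitness, dim, pop_size=64, generations=200, sigma=0.1, rng=None):
    """Minimal genetic algorithm: keep the fittest half, refill the population
    with mutated copies. 'fitness' scores one candidate parameter vector
    (e.g. pose/shape parameters of a 3D model) against the observed image;
    the fitness calls are independent, so they parallelise trivially."""
    rng = rng or np.random.default_rng()
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])   # embarrassingly parallel step
        elite = pop[np.argsort(scores)[-pop_size // 2:]]
        children = elite + rng.normal(scale=sigma, size=elite.shape)
        pop = np.concatenate([elite, children])
    return pop[np.argmax([fitness(p) for p in pop])]

# Toy stand-in for "render candidate model and compare with the image":
target = np.array([0.3, -1.2, 0.7])
best = evolve(lambda p: -np.sum((p - target) ** 2), dim=3)
```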
The ultimate goal (Score:2)
In fact, this won't stop at merely recognising faces that are partially obscured - in the not so distant future, they will be able to recognise faces that are completely absent!