AI Technology

New Object Recognition Algorithm Learns On the Fly

Posted by samzenpus
from the we'll-do-it-live dept.
Zothecula writes "Scientists at Brigham Young University (BYU) have developed an algorithm that can accurately identify objects in images or videos and can learn to recognize new objects on its own. Although other object recognition systems exist, the Evolution-Constructed Features algorithm is notable in that it decides for itself what features of an object are significant for identifying the object and is able to learn new objects without human intervention."
This discussion has been archived. No new comments can be posted.

  • by barlevg (2111272) on Monday January 20, 2014 @02:49PM (#46016877)
    ...but I don't think an evolutionary algorithm approach to pattern recognition is anything new.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      The big news here is that it's being trained to detect bullshit. It's currently 0-10.

    • by medv4380 (1604309)
      Depends on what they mean by "evolution constructed". I assume that rather than have a human input the best guess at what features to look for, like eye spacing/nose placement, they use a genetic algorithm to narrow down to a good-enough input. It still requires a person to load a bunch of pictures of airplanes and tell it to "learn" these objects. It's not really learning objects on its own, but rather learning which traits are best for identifying the object on its own.

      With our algorithm, we give it a set of images and let the computer decide which features are important.

      Without seeing their code that is my
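If that guess is right, the core loop would look something like a genetic algorithm evolving a mask over candidate features. A toy sketch (the data, fitness function, and all names here are invented for illustration; this is the general GA idea, not the paper's code):

```python
import random

random.seed(0)

# Toy data: six candidate features per example, but only features 0 and 1
# actually determine the label. The GA must discover that on its own.
def make_data(n=200):
    data = []
    for _ in range(n):
        feats = [random.random() for _ in range(6)]
        label = 1 if feats[0] + feats[1] > 1.0 else 0
        data.append((feats, label))
    return data

def fitness(mask, data):
    """Accuracy of a trivial threshold classifier restricted to masked features."""
    if not any(mask):
        return 0.0
    correct = 0
    for feats, label in data:
        score = sum(f for f, m in zip(feats, mask) if m) / sum(mask)
        correct += (1 if score > 0.5 else 0) == label
    return correct / len(data)

def evolve(data, pop_size=20, gens=30, n_feats=6):
    pop = [[random.randint(0, 1) for _ in range(n_feats)] for _ in range(pop_size)]
    best, best_fit, history = None, -1.0, []
    for _ in range(gens):
        scored = sorted(pop, key=lambda m: fitness(m, data), reverse=True)
        top_fit = fitness(scored[0], data)
        if top_fit > best_fit:
            best, best_fit = scored[0][:], top_fit
        history.append(best_fit)
        parents = scored[:pop_size // 2]
        pop = [best[:]]                    # elitism: keep the champion
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_feats)
            child = a[:cut] + b[cut:]      # single-point crossover
            if random.random() < 0.2:      # occasional mutation
                child[random.randrange(n_feats)] ^= 1
            pop.append(child)
    return best, best_fit, history
```

With elitism the best-so-far fitness can only go up, and in this toy setup the winning mask tends to keep the two informative features and drop the noise ones.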

      • by durrr (1316311) on Monday January 20, 2014 @03:30PM (#46017375)

        The way I understand it is that it identifies recurrent features and learns them, meaning that if you gave it a huge image library with no labels, it could recognize, say, roses, but it would call them "object 193131", not "rose".

        • by Anonymous Coward on Monday January 20, 2014 @03:59PM (#46017713)

          Well, you know what they say... "a rose by any other name would possess eigenvalues closely matching those of object 193131."

        • by Tablizer (95088)

          But how is that different from current AI?

        • by timeOday (582209) on Monday January 20, 2014 @04:56PM (#46018285)
          There is no way an image-based unsupervised algorithm can learn to recognize objects. "Shapes that frequently go together," yes. But what we consider "objects" is not an objective reality, it is a mental construct that is largely functionally-determined. It will never figure out all the different forms to which we ascribe the label "chair."

          Sitting on a street, a "bicycle" is an object because it is most likely to be operated on as a unit. But to a bicycle mechanic, a bicycle is a collection of objects, such as a frame, a seat, and so on, because they need to decompose the "bicycle" construct to do their job. To somebody on an assembly line putting together bicycle seats, a seat is (at least initially) several different objects.

          So, truly unsupervised algorithms cannot do useful recognition - that is, classify objects the same way people do. (A robot that could experiment with its environment and learn to use "objects" could come closer).

          • by JanneM (7445)

            So, truly unsupervised algorithms cannot do useful recognition - that is, classify objects the same way people do.

            That's an overly narrow definition of "useful recognition".

            • by fatphil (181876)
              And a human who looks at a bike and says "that's a nice collection of different parts" isn't doing "useful recognition" either, in the eyes of most people.
          • by Lamps (2770487)

            The unsupervised algorithm discussed in the article seems to encode some sort of visual input and, I'd infer, to perform clustering, which permits it to assign labels (let's say 'tree', 'human', etc.) to objects it has encountered. It can use this schema which it has constructed to assign objects it hasn't seen before to a cluster - that is, it labels novel inputs in accordance with its schema. Thus, the algorithm 'recognizes' classes of objects. I'd imagine if you granularize the level of detail
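The "assign novel inputs to a cluster" step described above can be sketched in a few lines. The feature vectors and auto-generated cluster names below are invented for illustration, not taken from the paper:

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# Pretend an unsupervised pass already grouped training images into clusters;
# the system names them itself, e.g. "object 193131" rather than "rose".
clusters = {
    "object 193131": [(0.9, 0.1), (0.8, 0.2), (1.0, 0.15)],   # rose-like inputs
    "object 207442": [(0.1, 0.9), (0.2, 0.85), (0.15, 1.0)],  # tree-like inputs
}
centroids = {name: centroid(pts) for name, pts in clusters.items()}

def recognize(features):
    """Label a novel input with the auto-generated name of its nearest cluster."""
    return min(centroids, key=lambda name: dist(features, centroids[name]))

print(recognize((0.85, 0.12)))  # prints "object 193131"
```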

          • Your logic is flawed.

            By splitting object-recognition into two cases...
            A: Atomic shapes
            B: Collections of shapes
            ...you have not proven that algorithms cannot perform object-recognition. You still need to show that one of the cases cannot be performed by an algorithm. As case A is trivial, and case B is what this algorithm does, your argument falls apart. Collections of shapes can be recognised by probability of occurrence. There is no need for interactivity, simply enough video

            • by timeOday (582209)
              If you used video, rather than images (or frames from images taken in isolation), and the video showed how things are used, then you could get somewhere. For example each grape in a bowl of grapes is an "object" for eating purposes whereas each bump on a raspberry is not, despite how visually similar those are.
        • by medv4380 (1604309)
          I don't see it that way, because in the article and the BYU article the examples clearly label a Tree a Tree, an Airplane an Airplane, and a Human a Human. Either you're right and someone went through and relabeled the object ID labels with human-readable versions, or you've given it more credit than they are trying to take.
      • by MLBs (2637825)
        The features part is really the tricky part. You could supply a set of pictures of aircraft and the algorithm would need to determine what is common to all those examples.
        With regards to training, it is possible to perform this learning task without direct supervised (tagged-data) training.
        Imagine the following:
        Take trillions of images from the web and use unsupervised clustering methods to group images into groups of equivalence, given that you have great features that allow you to do that.
        Then,
        • by fatgraham (307614)

          This can't be far off, I read a paper a while ago (still trying to find it, this post is a bit redundant without it) which would "detect" the capital of a country from how often they were found in text together. (Probably pre-loaded with country names, this would just have the image as the needle)

          I'm sure "Object 1387" and "Nyan cat" will soon be matched :)

      • "What's that mommy?" "Why, Jimmy, that's an airplane".
        I am not a programmer so could someone explain to me how that is different?
        • by medv4380 (1604309)
          Well, the OP was asking about whether or not this is anything new, not about whether or not this was "AI" similar to being human, so your sarcasm is lacking proper context. To answer your sarcastic question with a question: do you honestly believe that when you are told that an object is X, you generate a thousand different possible representations of that object, then generate a "test" to see which set is closest to what you are seeing, then jumble the closest together again and again until
      • by Zalbik (308903)

        Still requires a person to load a bunch of pictures of air planes, and tell it to "learn" these objects. It's not really learning objects on its own

        But that is exactly what people do. Other than a few hard-wired patterns (e.g. faces), we "learn" to recognize objects by being exposed to multiple examples of those objects and told the label to apply to them.

        Though the argument could be made that people don't learn on their own either...
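The "show examples, give the label" loop described above is, at its simplest, a nearest-neighbour learner. A toy sketch (the feature vectors are invented stand-ins for whatever a vision front-end would produce):

```python
# (features, label) pairs supplied by a teacher, like the "that's an airplane" parent
examples = []

def teach(features, label):
    """The 'parent' shows an example and names it."""
    examples.append((features, label))

def recognize(features):
    """Recall the label of the most similar example seen so far (1-NN)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda e: sq_dist(e[0], features))[1]

teach((1.0, 0.1), "airplane")
teach((0.1, 1.0), "face")
print(recognize((0.9, 0.2)))  # prints "airplane"
```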

  • Let it loose at the Adult Entertainment Expo [huffingtonpost.com].

    If it can figure out what half of that stuff is, it's a brilliant algorithm.

    If not, it will probably be hilarious to see the results.

    • If not, it will probably be hilarious to see the results.

      Let me check...BYU...sex toys...yes, it probably would!

  • by Gaygirlie (1657131) <gaygirlie.hotmail@com> on Monday January 20, 2014 @02:59PM (#46016981) Homepage

    I know it's popular for people to immediately start with all the Terminator-claims and whatnot, but that's not the first thing that comes to my mind when reading stuff like this. Personally, I think of coupling this with something like e.g. Google Glass, so that you can tell the system to identify the item in the center of the view and then ask for it to automatically search for instructions on use or repair or whatnot. Even better if you have a device that covers both of your eyes so that the system can overlay things in your whole visual field, identifying things and showing their connections and whatnot.

    • by icebike (68054)

      Like I've said in the past, mankind simply can't seem to stop itself from building Skynet piecemeal. Too dumb and trusting to think all those interesting things could ever be made into weapons or instruments of control.

      • Governments are already building them into weapons. The question is, are we going to have the same arsenal? All this tech is coming, we can't just stick our heads in the sand and hope for sane competent leadership when it does arrive.

        • by icebike (68054)

          But we can demand technology kill switches.
          Security was bolted onto the internet to make up for the lack of it being designed in; we shouldn't find ourselves in the same position of having to bolt security onto our televisions, cars, and robotic servants.

          This particular algorithm has a lot of uses. We'd want that garbage-sorting automaton to stop the entire system dead in its tracks when a human hand came through in the stream of cans and bottles and waste paper being dumped into the maw of a waste sorting

      • by barlevg (2111272)
        Why does everyone assume that our AIs are going to turn into Skynet? Why couldn't we end up with Tachikomas [wikipedia.org]?
        • by MetalOne (564360)
          If they are sentient, require the same resources as humans, and are more intelligent than humans, it seems safe to assume they will want all the resources for themselves. If they are smarter, then it's not hard to imagine them attempting to eliminate humans.
  • by Anonymous Coward

    At least there's one professor at BYU that believes in evolution!

    • Re:Evolution at BYU (Score:5, Informative)

      by ComfortablyAmbiguous (1740854) on Monday January 20, 2014 @03:27PM (#46017325)

      At least there's one professor at BYU that believes in evolution!

      I understand why it might be tempting to put BYU in a basket along with the rest of the evangelical Christian universities. However, on the issue of evolution it could not be more different. I graduated from there with a degree in microbiology, and in my college at least, evolution was the coin of the realm, just like it is in any serious biology department. I did not have a single professor who did not see evolution as you might expect a biologist to see it: as the only serious explanation of the data at hand, the only theory that works with what we know and provides valid predictions of future results. Not once did I hear even the smallest bit of credibility being given to creationism or its variants (intelligent design, etc).

      And yes, my professors were all Mormons. You might ask yourself how they square this. It turns out that while there are certainly Mormons that take a very literal reading of the bible on this issue, that is not the official church position, and there are many members that don't see it that way at all. Basically I had several professors that explained it as religion was about how to live life, science was about how life works, and we really have no idea how the two come together. The bible, while providing a lot of information to believers on a moral life, provides no real information on how the world works in any of the scientific fields.

      Interestingly, many believe this is on purpose: that God has no interest in proving his existence; it's a matter of faith for a reason. Because of this He stays out of offering scientific explanations. I realize that sounds distinctly like a cop-out, but frankly it leads to a fairly rational place where you can function as a scientist and still be a Mormon. And by function I don't mean in some halfway, hands-over-eyes sort of way, but in a real, go-where-the-evidence-takes-you sort of way.

      Take it for what it's worth, but that was my experience

      • I graduated from the Y too - and while most of my professors were not irrational about science, much of the student body was. I had a professor in a 100 level geology class who would start off most of his lectures by saying, "Now I know for some of you, your testimonies may tell you the earth is only such and such many years old. I'm not here to rock your testimonies or shake your faith, but simply to present scientific evidence as we understand it today."

        I laughed every time he had to make a disclaimer
        • Yea, I expect if you signed up for biology you tended to get over that pretty quick, or generally dislike your life. It's easier to have an irrational viewpoint about science as an English major where you don't have to face it to function each day.
        • "testimonies"? ...is that some specialized mormon terminology like being 'sealed' rather than married? i've read endless screeds about how one can make religion and science happily co-exist. but in the final analysis, it can't happen; at least not with standard faith-based religions. science essentially demands that nothing can be taken on faith; and religion essentially demands that anything important (the root of one's philosophical tree, if you will) must be taken on faith. if you're a faithful you
          • by fishybell (516991)
            Indeed. "Testimonies" are very much along the lines of "I testify that I hold X/Y/Z ideas on faith." Every month (usually the first Sunday), there is a special fast and testimony meeting where members of the congregation get up and "bear their testimony." As a kid growing up with a dad teaching at BYU, these Sundays were the worst. They often dragged on longer than normal, not just because church service ran longer, but because the fasting portion of "fast and testimony" meant we were hungry a
            I suppose this depends on a number of things, and perhaps in the end you may be right. But at the moment there really isn't a clear conflict; the conflict is more manufactured than real, especially if you see religion as a road to life happiness and not an explanation of all things. I admit that there is a certain amount of dealing with ambiguity required. Frankly, I tend to be much more of an agnostic or a deist than your average Christian. I tend to believe that my life is mine to live, there i

  • by fatgraham (307614) on Monday January 20, 2014 @03:28PM (#46017339) Homepage

    Does it work in real time? I can't find any more information than marketing buzz in the article (and the BYU article)...

    Is there a paper or anything with a bit more [technical] detail?

  • I would hope they'd get a sensor that would be able to identify new objects. Maybe let a robot pick it up, spin it around, get a 3d digitization model of the object then label it something temporary until someone tells it what it is. A quick look at how AI is probably going to be done when all the techs come together [botcraft.org]
  • by StripedCow (776465) on Monday January 20, 2014 @03:41PM (#46017527)

    Anyone got a link to the actual paper?

    I wonder if this can be used for image compression. Because if you know e.g. what a bicycle looks like, you don't have to compress it.

    • Re:Paper? (Score:4, Informative)

      by wagnerrp (1305589) on Monday January 20, 2014 @03:52PM (#46017643)
      Data compression is often considered to be a key component of artificial intelligence. There is a competition that gives out prizes for compression of a sample of the Wikipedia database.
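The compression-intelligence link is easy to demonstrate: a signal whose structure a compressor can "recognize" shrinks dramatically, while structureless noise barely shrinks at all. A quick illustration with zlib (nothing to do with the paper's algorithm):

```python
import random
import zlib

random.seed(1)
pattern = bytes(range(32)) * 64                             # highly regular, "recognizable"
noise = bytes(random.randrange(256) for _ in range(2048))   # same length, no structure

# The regular signal compresses to a tiny fraction; the noise stays near full size.
print(len(zlib.compress(pattern)), len(zlib.compress(noise)))
```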
    • by Lamps (2770487)

      Is this [sciencedirect.com] the one? It doesn't appear that the researchers have posted a manuscript, and I'm not sure that Elsevier would take kindly to it if they posted the published draft (although many researchers do so anyway). That, along with a lack of public interest in reading articles upon which pop science articles (like the one in the link) are based, probably explains the lack of a link or reference to the original article. If you have access to a library that subscribes to Pattern Recognition, you can get the ar

  • Nothing is performed on the fly. It's just another feature extraction and selection pipeline.
    1) Deep Neural Networks also remove the feature-engineering step (for instance http://media.nips.cc/nipsbooks/nipspapers/paper_files/nips26/1210.pdf [media.nips.cc])
    2) If, as suggested by the title, you are interested in on-the-fly object recognition, look at Tracking-Learning-Detection (TLD) (http://info.ee.surrey.ac.uk/Personal/Z.Kalal/tld.html)

  • Hope it will not fail "Tranny or Female" test

  • New object recognition algorithm learns on the fly

    I know wearable computing is the next big thing but putting one there - especially if it has a camera attached - is going to look a little bit... weird.

    • No, they are making very small devices you'll not notice on the fly. And they'll add circuits to control the fly. The result will be a biological espionage drone which nobody will suspect.

      However people will try to kill it anyway.

  • *Sigh* (Score:4, Interesting)

    by Wootery (1087023) on Monday January 20, 2014 @04:53PM (#46018233)

    notable in that it decides for itself what features of an object are significant for identifying the object and is able to learn new objects without human intervention

    For Christ's sake. The AdaBoost face-detection algorithm - the one that everyone uses today - does precisely this, and was developed in the the 90's.

    • by Wootery (1087023)

      ...that's right, the the 90's...
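The "decides for itself which features matter" behaviour attributed to boosting above can be shown with a toy AdaBoost over decision stumps. The data and setup here are invented for illustration; this is the boosting idea in miniature, not the actual face-detection cascade:

```python
import math
import random

random.seed(0)

# Three candidate features per example; only feature 0 determines the label.
data = []
for _ in range(300):
    x = [random.random() for _ in range(3)]
    data.append((x, 1 if x[0] > 0.5 else -1))

def stump_predict(x, feat, thresh, sign):
    return sign if x[feat] > thresh else -sign

def best_stump(w):
    """Exhaustively pick the stump minimizing weighted error."""
    best = None
    for feat in range(3):
        for thresh in [i / 10 for i in range(1, 10)]:
            for sign in (1, -1):
                err = sum(wi for (x, y), wi in zip(data, w)
                          if stump_predict(x, feat, thresh, sign) != y)
                if best is None or err < best[0]:
                    best = (err, feat, thresh, sign)
    return best

w = [1.0 / len(data)] * len(data)
chosen = []
for _ in range(5):
    err, feat, thresh, sign = best_stump(w)
    err = max(err, 1e-9)                     # avoid log(0) on a perfect stump
    alpha = 0.5 * math.log((1 - err) / err)
    chosen.append(feat)
    # Reweight: mistakes get heavier, so the next stump focuses on them.
    w = [wi * math.exp(-alpha * y * stump_predict(x, feat, thresh, sign))
         for (x, y), wi in zip(data, w)]
    total = sum(w)
    w = [wi / total for wi in w]

print(chosen)  # every round selects the informative feature, with no human hint
```

No one told the learner that feature 0 mattered; the weighted-error minimization discovers it, which is the sense in which boosting "decides for itself" what to look at.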

  • by jopet (538074) on Monday January 20, 2014 @04:56PM (#46018275) Journal

    Why should I get excited about something written by a journalist when there must be something written by scientists? Where is the PDF of the scientific paper to download?

  • First of all, is this the right paper [sciencedirect.com]?

    It seems that the topic of the linked article is a new unsupervised [wikipedia.org] algorithm that categorizes images. The linked article says that 'the Evolution-Constructed Features algorithm is notable in that it decides for itself what features of an object are significant for identifying the object', which unsupervised algorithms do implicitly, no? It is also stated that the algorithm 'is able to learn new objects without human intervention' - so if I'm interpreting this and the a

  • actual link to paper (Score:3, Informative)

    by Anonymous Coward on Monday January 20, 2014 @09:02PM (#46020431)

    team,

    fyi: http://contentdm.lib.byu.edu/utils/getfile/collection/ETD/id/3021/filename/503.pdf

    -me

    Perhaps this could be explained in terms of an example, whereby you describe how existing AI uses or learns the training set and how the newfangled way does it differently.

  • This sounds a lot like the Never Ending Image Learner project: http://www.neil-kb.com/ [neil-kb.com] which is crawling the web and trying to extract visual knowledge.

  • Looks like a bit of click-bait sensationalism by Gizmag. This algorithm is a couple of years old, the new research is just related to a new paper on domain specific usage (classifying fish). It's an unsupervised genetic algorithm, that uses basic image processing steps as the genes, hence Gizmag trying to tout it as 'learning on its own'. It's a cool technique outright, but not as world changing as they make out.
