subh_arya writes: Researchers from NYU, the University of Toronto, and MIT have developed a technique that captures human learning abilities for a large class of simple visual concepts: recognizing handwritten characters from the world's alphabets. Their computational model (abstract) represents concepts as simple programs that best explain observed examples under a Bayesian criterion. Unlike recent deep learning approaches, which require thousands of training examples, their model achieves human-level performance from just a single example. The authors also present several "visual Turing tests" probing the model's creative generalization abilities, and in many cases its behavior is indistinguishable from a human's.
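The "Bayesian criterion" mentioned above amounts to scoring each candidate program by its prior probability times the likelihood it assigns to the single observed example, then preferring the highest-scoring program. A minimal toy sketch of that idea in Python (all candidate names and numbers below are made up for illustration, not taken from the paper):

```python
import math

def log_posterior(prior, likelihood):
    # Unnormalized log posterior: log P(program) + log P(example | program)
    return math.log(prior) + math.log(likelihood)

# Hypothetical candidate "programs" for explaining one observed character,
# each with a prior and the likelihood it assigns to that single example.
candidates = {
    "two_strokes_curve": (0.30, 0.60),
    "three_strokes":     (0.50, 0.05),
    "one_stroke_loop":   (0.20, 0.40),
}

# One-shot learning: pick the program that best explains the lone example.
best = max(candidates, key=lambda name: log_posterior(*candidates[name]))
print(best)
```

Here the prior-heavy "three_strokes" candidate loses because it explains the observed example poorly, illustrating how the criterion trades off simplicity against fit even with only one example to go on.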