AI Software Technology

Breakthrough In Automatic Handwritten Character Recognition Sans Deep Learning (technologyreview.com) 66

subh_arya writes: Researchers from NYU, the University of Toronto, and MIT have come up with a technique that captures human learning abilities for a large class of simple visual concepts: recognizing handwritten characters from the world's alphabets. Their computational model (abstract) represents concepts as simple programs that best explain observed examples under a Bayesian criterion. Unlike recent deep-learning approaches that require thousands of examples to train an effective model, their model can achieve human-level performance after seeing only one example. Additionally, the authors present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.
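For readers who want the flavor of the approach: classification "under a Bayesian criterion" amounts to asking which stored concept best explains a new image. The following is a toy sketch only, not the authors' model; the 5x5 glyphs and the Bernoulli pixel-noise model are invented for illustration.

```python
import math

# Toy one-shot classifier in the spirit of Bayesian program learning.
# Each class is known from a single 5x5 binary "image"; a test image is
# scored under a simple pixel-flip noise model and assigned to the class
# whose exemplar best explains it. All data here is invented.

EXEMPLARS = {
    "vertical_bar": [
        "..#..",
        "..#..",
        "..#..",
        "..#..",
        "..#..",
    ],
    "horizontal_bar": [
        ".....",
        ".....",
        "#####",
        ".....",
        ".....",
    ],
}

FLIP_PROB = 0.1  # assumed probability that noise flips any given pixel

def log_likelihood(test, exemplar):
    """Log P(test | exemplar) under independent pixel-flip noise."""
    ll = 0.0
    for trow, erow in zip(test, exemplar):
        for t, e in zip(trow, erow):
            ll += math.log(1 - FLIP_PROB if t == e else FLIP_PROB)
    return ll

def classify(test):
    """One-shot classification: pick the exemplar that best explains the image."""
    return max(EXEMPLARS, key=lambda name: log_likelihood(test, EXEMPLARS[name]))

noisy_vertical = [
    "..#..",
    "..#..",
    ".##..",   # one corrupted pixel
    "..#..",
    "..#..",
]
print(classify(noisy_vertical))  # vertical_bar
```

The real model scores rich stroke-level generative programs rather than raw pixels, which is what lets it generalize from a single example.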

Comments Filter:
  • by Anonymous Coward

    their model can achieve human-level performance with only one example

    Yeah? Well, I've never encountered a human - myself included! - who can read my handwriting, so suck it, you AI mofos!

    • Now all doctors will have to go back to school and learn how to write even worse than they do now. I presume it'll be something along the lines of:

      -just scribble for 3-4" on the paper, using a felt marker, making no attempt to actually move in the shape of any known letter
      -wet tip of finger
      -rub the scribble for several seconds

    • by Chrisq ( 894406 )

      Yeah? Well, I've never encountered a human - myself included! - who can read my handwriting, so suck it, you AI mofos!

Some people are incredibly good at this. My wife used to type up dissertations in the days before students did their own on computers, and she could read things that to me were completely illegible. I thought my handwriting was bad, but she says she has seen much worse.

      Of course the question is, are they aiming for average human ability or someone who is practiced in reading difficult handwriting?

  • by JoeyRox ( 2711699 ) on Wednesday December 16, 2015 @01:16AM (#51127613)
    Maybe they'll also invent a better way to untangle corded phone cables.
  • Great (Score:4, Insightful)

    by sexconker ( 1179573 ) on Wednesday December 16, 2015 @01:43AM (#51127687)

    I'll never solve the new captchas.

  • by Zero__Kelvin ( 151819 ) on Wednesday December 16, 2015 @01:46AM (#51127699) Homepage
Seriously? Cleverer? Oodles? What editor left those in the paper? Slashdot editors must be working for these guys on the side. I know somebody will say cleverer is technically correct, and while that may be true, it is a disaster aesthetically.

“The real inflection point in AI is going to come when machines can actually understand language,” he says. “Not just doing mediocre translations, but really understanding what you mean.”

    Until we understand what it means to understand, how can we possibly know if we have taught these systems to understand? Even if it responds intelligibly, and what it says makes complete sense, is that the same as understanding? I suppose as Billy C. once said: "It depends on what the meaning of the word 'is' is".

    • Until we understand what it means to understand, how can we possibly know if we have taught these systems to understand?

      If I am talking to you, I can generally tell if you understood what I said or not, even if the meaning can't be clearly defined. Presumably an AI will respond similarly, and I'll be able to tell if it understood or not.

      If we get AI that smart, then we will have advanced a long way.

      • "If I am talking to you, I can generally tell if you understood what I said or not, even if the meaning can't be clearly defined."

        Really? What does it mean to understand, since you seem to be the only person I know of who claims they can answer that question? Your reply shows that you didn't understand what I wrote at all; ironic, isn't it ;-)

• There may be a grey area, certainly, where it's hard to tell if something understands or not, but for today's AI, we can certainly say they don't understand (unless you come up with some narrow definition of "understand").
    • by Prune ( 557140 )

      Even if it responds intelligibly, and what it says makes complete sense, is that the same as understanding?

      Good job poorly rehashing an argument made in 1980 [stanford.edu]

      which has been refuted time and again.

      • by HiThere ( 15173 )

        Unfortunately, the arguments are not conclusive. It really does depend on the meaning of understanding. It's also, however, true that the presumption of lack of understanding isn't defensible.

        Lacking an exact definition of understanding, the only thing we have to go on is something like "Well, if I had reacted that way, then it would mean that I understood.". This is clearly inadequate as a non-observer relative description...something which even quantum physics manages to come up with, though it puts li

        • by Prune ( 557140 )

          It really does depend on the meaning of understanding.

This is the same sort of sophistry that philosophers used to engage in when discussing qualia, until Dennett showed it was all bullshit and they're just emergent epiphenomena.

          Understanding just means a sufficient level of integration of some information with knowledge already extant in your mind -- the various semantic elements of what you understand are linked to the rest of your knowledge, so that you can relate these and also make use of the new information as a model of the target of your understandi

      • Are you some kind of idiot? Because someone else also understood a problem in 1980 I have no right to talk about it now? I wasn't aware there was a huge checklist of topics, and that each time someone covers one nobody else can ever understand it or talk about it again.

        which has been refuted time and again.

BTW, it was a rhetorical question; you clearly are an idiot. There is nothing to refute in anything I wrote in my original post. It was a series of questions. How exactly do you refute a series of non-rhetorical questions, anyway?

        • by Prune ( 557140 )

How exactly do you refute a series of non-rhetorical questions anyway

          Because, while I might (for the sake of argument) accept they're not outright rhetorical questions, they're also certainly not questions made in good faith -- they're implying that one can reasonably suppose there's a possibility of a difference between understanding and the functional competence exhibited by an entity that understands (equivalently, that, above some threshold, the appearance of intelligence can possibly be different from actual intelligence -- and feel free to replace "intelligence" with "

          • The problem you have is that you have made my point while attempting to refute it :-)
            • by Prune ( 557140 )
              Do you often find that some things are obvious to you but others just don't understand? Of course, the most effective trolls are the ones that actually believe what they're purveying. Uncle Al in the newsgroups comes to mind...
    • It depends on what the meaning of the word 'is' is

This has become one of the best memes ever for sorting the kind of people who just snort and move on in the rush to stop thinking from those of us who perceive Clinton's comical perch as resting upon a legitimate labyrinth of linguistic complexity.

The same impatient mind is at some point informed that the Chinese language has no tense system as we know it from most European languages. "How does that even work?" these people ponder for a few tense milliseconds.

    • The viability of the "-er" comparative ending in English varies from place to place; in Canada, for example, more "-er" words are acceptable than here in the United States (e.g. "funnier" I believe works in Canada). This is never merely a matter of what is technically correct, however, because our aesthetic aversion to this or that form is already determined beforehand by common practice, such that I feel that "cleverer" is awkward simply because it is not proper in the USA. If I had grown up in Canada, my
  • Improvements to OCR? (Score:5, Interesting)

    by pipedwho ( 1174327 ) on Wednesday December 16, 2015 @01:57AM (#51127733)

I hope this heralds some significant improvements to basic OCR. It amazes me that OCR on a printed document still doesn't always yield 100% success. Even worse is OCR on printed music manuscripts; the recognition and transcription quality is atrocious.

    And yet, these guys can recognise handwriting with incredible accuracy.

    I keenly await when these algorithms can be expanded to general OCR / document recognition. Even if there need to be specific models for each type of document.

    • by Richard Kirk ( 535523 ) on Wednesday December 16, 2015 @09:41AM (#51128765)

Suppose you had a bit of your handwriting that you could not read. How do you figure out what you wrote? One thing that I do, and you may do too, is to try to imagine writing the thing, and work out the rhythm of what you are writing. If you can get some sense of how your hand is moving, you may see that what was a 'u', or maybe an 'n' or half of an 'm', makes sense because of the way it joins up to other stuff. We seem to have some sort of kinematic two-and-a-half-axis model for writing. We use different muscles if we are writing with a pen (fingers and wrist), on a blackboard (wrist and upper arm), with a spray-can (upper and lower arm), or with a tiny engraving tool (just fingers), and yet our handwriting remains much the same. So a computer that can fit the same kinematic model should make better guesses for a word it has not met before than anything trained purely on shape.
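The idea of matching on pen motion rather than raster shape can be sketched in a few lines. The stroke coordinates, the normalization, and the distance measure below are all invented for illustration; a real kinematic model would be far richer:

```python
import math

# Toy trajectory comparison: two renderings of the same letter drawn at
# different sizes should match better under a pen-path distance than two
# different letters. All coordinates here are invented for illustration.

def normalize(path):
    """Translate to the origin and scale the bounding box to unit size."""
    xs = [x for x, _ in path]
    ys = [y for _, y in path]
    s = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / s, (y - min(ys)) / s) for x, y in path]

def resample(path, n=16):
    """Resample a polyline to n points evenly spaced along its arc length."""
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    out = []
    for i in range(n):
        target = cum[-1] * i / (n - 1)
        j = 1
        while j < len(cum) - 1 and cum[j] < target:
            j += 1
        seg = cum[j] - cum[j - 1]
        t = (target - cum[j - 1]) / seg if seg else 0.0
        (x0, y0), (x1, y1) = path[j - 1], path[j]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def path_distance(a, b):
    """Mean distance between corresponding points of two normalized paths."""
    pa, pb = resample(normalize(a)), resample(normalize(b))
    return sum(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(pa, pb)) / len(pa)

small_l = [(0, 2), (0, 0), (1, 0)]   # a small 'L': down, then right
big_l = [(0, 8), (0, 0), (4, 0)]     # the same 'L' drawn four times larger
slash = [(0, 0), (2, 2)]             # a single diagonal stroke

# The two 'L's coincide after normalization; the slash does not.
print(path_distance(small_l, big_l) < path_distance(small_l, slash))  # True
```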

This does not directly transfer to OCR. If you have a page of fixed-width text, then every letter has its own little rectangle, and you can either recognize it using the traditional OCR model or you can't. However, there is something we can do along the same lines. Suppose you have a document that you guess was rendered from PostScript. If you have a guess for a particular word, and the font it was rendered in, you could render that part of the text. You can then degrade that rendered image to mimic the properties of the printing and scanning, and check the fit. The best solution will probably be the one that achieves the best fit with the shortest, and hence most probable, bit of PostScript. When you have more text, you can pick up hints from the spacing, the justification, and other larger page-layout structures.
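A minimal render-and-compare loop might look like this. The three-pixel-wide bitmap font, the candidate list, and the single-pixel "degradation" are invented for illustration; a real system would rasterize actual fonts and model printing and scanning noise properly:

```python
# Toy render-and-compare OCR: render each candidate word in a tiny bitmap
# font, then pick the candidate whose rendering best fits the observed
# (noisy) scan. The font and candidates are invented for illustration.

FONT = {  # 3x5 glyphs
    "A": ["###", "#.#", "###", "#.#", "#.#"],
    "C": ["###", "#..", "#..", "#..", "###"],
    "O": ["###", "#.#", "#.#", "#.#", "###"],
    "T": ["###", ".#.", ".#.", ".#.", ".#."],
}

def render(word):
    """Render a word as rows of pixels, one blank column between glyphs."""
    return [".".join(FONT[c][r] for c in word) for r in range(5)]

def mismatches(a, b):
    """Count pixel disagreements between two same-sized renderings."""
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def best_fit(observed, candidates):
    """Pick the candidate word whose rendering best explains the scan."""
    return min(candidates, key=lambda w: mismatches(observed, render(w)))

# Simulate a degraded scan of "CAT": flip one pixel of the clean rendering.
scan = [list(row) for row in render("CAT")]
scan[1][1] = "#"
scan = ["".join(row) for row in scan]

print(best_fit(scan, ["CAT", "COT", "TAC"]))  # CAT
```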

      I actually worked on OCR, and tried both of these once. It might have worked with a large software team, but I hadn't got one.

• Much as I agree this could herald a much-needed improvement in OCR, the downside is that CAPTCHAs will become virtually useless.
  • by Anonymous Coward

    n/t

  • by Anonymous Coward

    If only the /. editors would do some minimal investigation... Oh wait, this is still /.

    https://github.com/brendenlake/BPL

    • If only the /. editors would do some minimal investigation... Oh wait, this is still /.

      https://github.com/brendenlake/BPL

At the moment the code requires both MATLAB and Lightspeed. Until someone ports it to an open-source language or library, it won't see significant adoption.

  • by Anonymous Coward

This is just a clever way to describe features in the images. They have built a codec for handwriting: instead of a binary pixel array, you are learning a set of brush strokes. I bet that if a deep-learning algorithm is given the decoded brush strokes as learning material, it will outperform this.
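The "strokes as features" idea the parent describes could be sketched like this; the stroke alphabet, the training characters, and the distance measure are all invented for illustration:

```python
from collections import Counter

# Toy stroke-feature classifier: characters are described as sequences of
# stroke directions, reduced to direction histograms, and classified by
# nearest histogram. The stroke alphabet and all data are invented.

TRAIN = {
    "I": ["S"],              # one downstroke
    "L": ["S", "E"],         # downstroke, then a rightward stroke
    "Z": ["E", "SW", "E"],   # right, diagonal, right
}

def features(strokes):
    """Histogram of stroke directions."""
    return Counter(strokes)

def distance(f, g):
    """L1 distance between two direction histograms."""
    return sum(abs(f[k] - g[k]) for k in set(f) | set(g))

def classify(strokes):
    """Nearest-neighbor classification in stroke-feature space."""
    f = features(strokes)
    return min(TRAIN, key=lambda c: distance(f, features(TRAIN[c])))

# A sloppily traced 'L' whose rightward stroke was drawn twice.
print(classify(["S", "E", "E"]))  # L
```

Any learner, deep or otherwise, could be trained on such stroke-level features instead of raw pixels, which is the substitution the parent is suggesting.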

  • The paper:
    https://www.sciencemag.org/con... [sciencemag.org]

    A short article and interview with Lake:
    http://www.ibtimes.com/say-hel... [ibtimes.com]
