AI Technology

The Next Frontier in AI: Nothing (ieee.org)

How an overlooked feature of deep learning networks can turn into a major breakthrough for AI. From a report: Traditionally, deep learning algorithms such as deep neural networks (DNNs) are trained in a supervised fashion to recognize specific classes of things. In a typical task, a DNN might be trained to visually recognize a certain number of classes, say pictures of apples and bananas. Deep learning algorithms, when fed a good quantity and quality of data, are really good at coming up with precise, low-error, confident classifications. The problem arises when a third, unknown object appears in front of the DNN. If an unknown object that was not present in the training set is introduced, such as an orange, then the network will be forced to "guess" and classify the orange as the closest class that captures the unknown object -- an apple! Basically, the world for a DNN trained on apples and bananas is completely made of apples and bananas. It can't conceive of the whole fruit basket.
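To make that closed-world behavior concrete, here is a minimal sketch of a two-class softmax classifier forced to pick between apples and bananas; the logits are invented for illustration and are not from the article:

```python
# Minimal illustration of the closed-world problem: a classifier with only two
# output classes has no way to say "neither", so every input -- even an orange --
# gets forced into "apple" or "banana".
import numpy as np

CLASSES = ["apple", "banana"]  # the only classes the network was trained on

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical logits the trained network might produce for an orange:
# it looks slightly more apple-like than banana-like, so "apple" wins.
logits_for_orange = np.array([1.3, 0.4])
probs = softmax(logits_for_orange)

prediction = CLASSES[int(np.argmax(probs))]
print(dict(zip(CLASSES, probs.round(3))), "->", prediction)  # forced guess: apple
```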

While its usefulness is not immediately clear in all applications, the idea of "nothing" or a "class zero" is extremely useful in several ways when training and deploying a DNN. During the training process, if a DNN has the ability to classify items as "apple," "banana," or "nothing," the algorithm's developers can determine whether it has effectively learned to recognize a particular class. On the other hand, if pictures of fruit continue to yield "nothing" responses, perhaps the developers need to add another "class" of fruit to identify, such as oranges. Meanwhile, in a deployment scenario, a DNN trained to recognize healthy apples and bananas can answer "nothing" if there is a deviation from the prototypical fruit it has learned to recognize. In this sense, the DNN may act as an anomaly detection network -- aside from classifying apples and bananas, it can also, without further changes, signal when it sees something that deviates from the norm. As of today, there are no easy ways to train a standard DNN so that it can provide the functionality above.
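The summary notes there is no easy way to get this behavior from a standard DNN. A rough stand-in that practitioners often reach for is a reject option: if no known class is confident enough, answer "nothing". A sketch, where the 0.75 threshold and the logits are arbitrary illustrative values rather than anything from the article:

```python
# Sketch of a simple reject option: if no known class clears a confidence
# threshold, answer "nothing". This approximates, but is not the same as,
# training an explicit class-zero output.
import numpy as np

CLASSES = ["apple", "banana"]
REJECT_THRESHOLD = 0.75  # illustrative cutoff

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def classify_with_nothing(logits):
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] < REJECT_THRESHOLD:
        return "nothing"
    return CLASSES[best]

print(classify_with_nothing([4.0, 0.5]))  # confidently an apple
print(classify_with_nothing([1.1, 0.9]))  # ambiguous -> "nothing"
```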

  • Damn, there is a whole new category here: AI Nutters. I thought Space Nutters were annoying, but AI Nutters are over the top! I just trained my model to look for bananas, and then trained it to look for apples, then trained it to look for sheep with bananas on their head, then trained it for sheep with bananas on their head in a field. Only ten million quadrillion more combinations to go!

    • This is totally stupid. It's just using "nothing" as a placeholder for unknown. Just have the AI tag unknown objects as the class unknown, rather than abusing NULL.

      AI retards need to talk to any half decent DBA so they can learn the value of proper semantics.

      • As a developer with a strong DBA background, I totally agree with you, but it's already hard enough to get other devs to stop abusing NULL

      • by ebyrob ( 165903 )

        It's actually not stupid at all. 3-value logic is very different from the more typical 2-value logic. Removing the law of the excluded middle makes a huge difference in the form of mathematics you wind up creating. (Look up the ideas of L. E. J. Brouwer.)

        Very interesting stuff. I really wish we used a lot more 3-value logic (which clearly calls out "unknown" when appropriate) instead of pretending everything must be black or white, which possibly leads us to claim we know things we actually don't...
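For concreteness, here is a quick sketch of three-valued truth tables in the Kleene style, using Python's None for "unknown"; this is only one flavor of three-valued logic, and Brouwer's intuitionism rejects the excluded middle in a different, proof-theoretic way:

```python
# Kleene's strong three-valued logic, with None standing in for "unknown".
def not3(a):
    return None if a is None else (not a)

def and3(a, b):
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None  # unknown

def or3(a, b):
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return None  # unknown

# The law of the excluded middle fails: "x or not x" is unknown when x is unknown.
x = None
print(or3(x, not3(x)))  # None, not True
```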

        • by b3e3 ( 6069888 )

          ...I really wish we used a lot more 3-value logic (which clearly calls out "unknown" when appropriate) instead of pretending everything must be black or white which possibly leads us to claim we know things we actually don't...

          Insert obligatory ./ political axe-grinding here.

      • Except that it's harder than that in some ways. All the neural net research has been focused on discriminating A from B from C, etc. If you have a category "I dunno", then when you start training the net they're ALL unknowns. So being able to distinguish "I know this is not a banana or an apple, even though it looks a lot like an apple if I squint" is kind of hard.

        • Depends on whether your classification scheme can handle a confidence rating.

        • When training your DNN, you don't give it the ability to have a nothing category. After you are satisfied with the training of the specific categories, you then bring in the "nothing" option. This can both test for accuracy and let it put unknown objects in a category.

          Trying to teach it with the ability of "unknown" would likely lead to a lot of unknowns.

          • Trying to teach it with the ability of "unknown" would likely lead to a lot of unknowns.

            If it doesn't yet know, then those would be correct classifications.

            Which is more important, having lots of answers, or having correct answers?

        • by ceoyoyo ( 59147 )

          Not really. In fact, the whole article seems to be reaching to make this some kind of new idea. If you're doing segmentation, which is just classifying every pixel, you always have a background class. It means "something that's not one of the other classes".

    • I had graduate-level courses in AI. And yes, some of the people in the field were indeed nutters.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Yes, it's called progress. I don't really understand the ignorance towards AI on Slashdot; people here seem to fairly prolifically proclaim that because we haven't gone from nothing to the end game of AI in 40 years, then AI isn't real, is full of nutters, is broken, a lie, or whatever other drivel gets thrown at it.

      Here's some news: we also haven't got a grand unified theory of everything out of physics in hundreds of years. Guess what? Reaching the end game takes a long, long time. Possibly longer

      • I don't really understand the ignorance towards AI on Slashdot; people here seem to fairly prolifically proclaim that because we haven't gone from nothing to the end game of AI in 40 years, then AI isn't real

        You seem to have a firm grasp of ignorance, actually.

        This category of technology is in its infancy, but the practitioners are constantly making absurd claims. This has resulted in the public perception that self-driving cars are about to hit the market, and other idiocies.

        The choices are not merely "idiot" or "Luddite," there are other options. You could, for example, recognize that much of the work being called "AI" is just academic bullshit being driven by an influx of money into certain types of trained

    • by rtb61 ( 674572 )

      The next big development in AI: bug brains. Shallow thinkers will ask why bug brains, what a waste of money; clear thinkers will see that bug brains are cool. A compact AI behaving like a predatory bug could track down and eliminate various pest species (probably more remote AI than built-in AI), the stuff of sci-fi, and sure, train it to tell the difference between the fruit it is protecting and the bug it is eliminating.

  • Any AI worth its weight not only decides Apples or Bananas, but also reports a certainty value.

    That is, 80% likely to be Apple, vs 60% likely to be Banana.

    Just add a rule that anything not reaching, say, 60% certainty means "Other" rather than being put into Banana.
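A minimal sketch of that rule, assuming independent per-class confidences as in the example above; the scores and the 60% cutoff are illustrative:

```python
# Route anything that doesn't clear the confidence cutoff to "Other".
CONFIDENCE_CUTOFF = 0.60

def label_with_other(scores):
    """scores: dict mapping class name -> independent confidence in [0, 1]."""
    best_class = max(scores, key=scores.get)
    if scores[best_class] < CONFIDENCE_CUTOFF:
        return "Other"
    return best_class

print(label_with_other({"apple": 0.80, "banana": 0.60}))  # apple
print(label_with_other({"apple": 0.35, "banana": 0.40}))  # Other (e.g. a hammer)
```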

    • by Shaitan ( 22585 )

      Nope, because that doesn't provide a way to give the network direct feedback that it was correct when it indicated "Other." If I feed in a picture of a hammer, there is a good chance that, comparing apples to bananas, the network will rightfully indicate a high chance of banana, because the score is relative to the possibility that it is an apple, and so it might pass your filter. The proper output might be something more like 10% Apple, 30% Banana, 60% neither.

      Your solution is just applying rule-based logic to try to filter

      • You are claiming a different model. Both the one I used and the one you described are in use currently.

        Basically, I was using the independent model, where the object is examined for identity independently. That is, an object could possibly be graded as 95% Apple and also be graded as 70% Banana. (though that would be a weird object)

        You were using the choose-one model, so that if there were only two definitions, an object graded as 95% Apple would by definition be 5% Banana.

        But honestly, my rule works in
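A tiny sketch contrasting the two scoring models described in this thread: independent (sigmoid-style) per-class scores versus mutually exclusive (softmax) scores; the logits are invented for illustration:

```python
# Independent scores need not sum to 1; exclusive scores always do.
import numpy as np

CLASSES = ["apple", "banana"]
logits = np.array([2.9, 0.8])  # hypothetical raw network outputs

independent = 1.0 / (1.0 + np.exp(-logits))        # sigmoid per class
exclusive = np.exp(logits) / np.exp(logits).sum()  # softmax over classes

print("independent:", dict(zip(CLASSES, independent.round(2))))  # apple 0.95, banana 0.69
print("exclusive:  ", dict(zip(CLASSES, exclusive.round(2))))    # apple 0.89, banana 0.11
```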

  • If you show an AI trained on Apples and Bananas an "unknown" Pineapple, all it has to do is a reverse image search using that image - TinEye or Google image search can do that. That more often than not identifies the image as a "Pineapple", and leads to millions of other images of pineapples the AI can train on. How difficult is "IF NOT APPLE OR BANANA OR UNSURE THEN DO REVERSE IMAGE SEARCH"? Is the AI locked in a steel box with no internet connection?
    • Dear God.

    • You're correct if you're classifying things that are common in the world generally, but "apple vs banana vs unknown" is merely an example. If we start talking about something more esoteric, like, say, types of cancer vs the "nothing" of healthy cells, there isn't a readily available reverse image search to fall back on.
    • What you describe would be a powerful thing to implement (and it's trivial to do, really) but it's not AI, it's more along the lines of a self-directed search engine and results parser.

      It's hard to define AI, but I don't doubt that something sufficiently AI-ish will appear eventually, and it'll be good enough at what it does that whether it's genuinely intelligent or not will be a distinction without a difference.

  • by bobstreo ( 1320787 ) on Friday December 20, 2019 @02:21PM (#59542444)

    until there are AI trainers to teach the new AI instances.

    Think about it...

    • That is like, totally, Inception.

    • That’s going to require some human supervision. When the teaching AI is labeling the pictures of humans as “target” I’d like a little intervention.

      More seriously though that suggests the problem has been solved. If you look at small children, they’re taught by adults that can already solve the problem. What you’re essentially saying is that AI classifiers will be useless until we have a working AI classifier.

      There are other types of AIs that do use this type of appr
    • Vonnegut wrote a novel about this; it's called "Player Piano."
    • Based on what data do you make this claim?
  • Except we can learn! And stick a qualifier word in front of "apple" to make orange "China's apple". Or "Sinaasappel" in Dutch, "Apfelsine" in German ("Orange" is correct too), etc.

    Maybe they should focus on NNs that aren't frozen for production, but still have some plasticity left, and stimulators to tell them if they have been good.

    Why not?
    What's the matter, Sandurz? ... Chicken??

    • AIs in production with learning capability? What a great idea!

      Let's build a chatbot with that "feature" and deploy it online. We can call it Tay [theguardian.com]. It'll be awesome!

      • by ceoyoyo ( 59147 )

        Kinda was, to be honest. I'm sure there are sociologists who would love to study that kind of interaction.

      • by cusco ( 717999 )

        I was surprised that the article didn't mention that the twits at 4chan deliberately set out to corrupt Tay. The Guardian is generally better than that.

  • Well, this seems to be the case for most AI examples.
    While they might work on their training set, there are always going to be things outside their field of encounter that they must deal with, sort of like the Gödel Incompleteness Theorem for algorithms.
    I have heard it said that the old idea about the brain was that the left brain deals with analytics, while the right brain deals with creativity. The newer reformulation of this idea is that the left hemisphere deals with predictability and established pattern
    • This was sort of the idea behind a void in 17th-century thinking. While nature abhors a vacuum, there wasn't really a void in the vacuum of space. However, if you talk about the expanding universe, you begin to think of the nothing "into which" it is expanding.

      In the beginning, there was nothing.
      Then God said, "Let there be light!"
      And there still was nothing
      But you could see it.
  • An AI that can track my productivity at work!

  • ...they call it "Not Hotdog."
  • Nothing you say. Other I say!

    Woop woop woop!
  • Unless you provide no data, there's always something. An empty tabletop, or an empty driveway, etc. It's easy to recognize when there's actually nothing, because there's no input.

    Reminds me of a Metallica song...

    • It's easy to recognize when there's actually nothing, because there's no input.

      Congratulations, you've disproven the existence of sight, because blind people.

  • Knowing what you don't know is essential to wisdom, be it artificial or natural. AFAIK all modern AI systems simply guess; they never consider answering "none of the above", or the likelihood that this question is atypical, or even whether they shouldn't answer because they're clueless.

    Curiously, in the olden days of AI before deep nets, logic- and knowledge-based methods were much more common. Use of circumscription and counterfactuals was central to building models and systems that could reason a

  • The label "nothing" seems to be just the author referring to his example of an empty box containing nothing, something people learn to recognize as a concept of "zero" items in a box.

    What he actually seems to be talking about is identifying anomalies, or instances that seem to greatly deviate from previously seen instances. The author is the CEO of an "AI-powered visual inspection company", so an example of his problem is finding defective items (e.g. scratched ones) on an assembly line.

    With very few instances

  • I think the next major step for AI is Skynet.

  • Actually, there is a difference between nothing, null, noone, and nohow.

    Nothing is literally nothing. We observed the absence.

    Null is we didn't test it, it's a cat in a box, could be there, could not be there, could be a Zargon.

    Noone is we did test it, we know it's not a person. Still could be a Zargon. Here, kitty kitty.

    Nohow is we did observe it, the results are squirrelly, it's definitely a Zargon, cause it ain't human.

    AI frequently can't handle any of those. People who code them aren't good at set theor

  • Today, it tells Nothing. Tomorrow, it tells Coffee without Cream from Coffee without Milk.

  • ... when a third, unknown object appears ...

    Digital information systems have been dealing with 'yes', 'no', 'unknown' outcomes for decades: It's revealing that AI 'science' is only starting to include this.

    Part of the reason is the small problem space that AI originally faced: only valid data was input into the computer, so the 'unknown' outcome wasn't required. Nowadays, that data comes directly from 3D (or worse, 2D) cameras, as pictures of the real world containing a lot of unknowns.

    Much of AI is devoted to finding a yes/no answer but th

    • by ceoyoyo ( 59147 )

      No, it's a bullshit article from somebody who doesn't know what they're talking about.

      It's quite common to include a background or "none of the above" class. In fact, it's essential in many applications, particularly segmentation.
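For example, in segmentation every pixel gets a label and label 0 is conventionally the background, i.e. "none of the other classes"; the tiny label map below is made up for illustration:

```python
# Toy per-pixel label map with an explicit background class.
import numpy as np

CLASS_NAMES = {0: "background", 1: "apple", 2: "banana"}

label_map = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 2, 2],
    [0, 0, 2, 2],
])

for cls, name in CLASS_NAMES.items():
    print(name, int((label_map == cls).sum()), "pixels")
```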

  • inclusion of "belonging" and "not belonging" identification? e.g., { [Apple], [Fruit], [not banana], [not orange], [not Nothing] } Sure, there are an infinite number of things an Apple is not, but within the training schema, the Apple is definitely not other things in the schema.
