The Next Frontier in AI: Nothing (ieee.org)
How an overlooked feature of deep learning networks can turn into a major breakthrough for AI. From a report: Traditionally, deep learning algorithms such as deep neural networks (DNNs) are trained in a supervised fashion to recognize specific classes of things. In a typical task, a DNN might be trained to visually recognize a certain number of classes, say pictures of apples and bananas. Deep learning algorithms, when fed a good quantity and quality of data, are really good at coming up with precise, low-error, confident classifications. The problem arises when a third, unknown object appears in front of the DNN. If an object that was not present in the training set is introduced, such as an orange, then the network is forced to "guess" and classify the orange as the closest class that captures it -- an apple! Basically, the world for a DNN trained on apples and bananas is made entirely of apples and bananas. It can't conceive of the whole fruit basket.
While its usefulness is not immediately clear in all applications, the idea of "nothing," or a "class zero," is extremely useful in several ways when training and deploying a DNN. During the training process, if a DNN has the ability to classify items as "apple," "banana," or "nothing," the algorithm's developers can determine whether it has effectively learned to recognize a particular class. For example, if pictures of fruit continue to yield "nothing" responses, perhaps the developers need to add another class of fruit to identify, such as oranges. Meanwhile, in a deployment scenario, a DNN trained to recognize healthy apples and bananas can answer "nothing" if there is a deviation from the prototypical fruit it has learned to recognize. In this sense, the DNN may act as an anomaly detection network -- aside from classifying apples and bananas, it can also, without further changes, signal when it sees something that deviates from the norm. As of today, there is no easy way to train a standard DNN to provide this functionality.
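For the curious, here is a minimal sketch of the closed-set problem the summary describes, in plain NumPy with made-up logits: a softmax over {apple, banana} must spend all of its probability mass on the classes it knows, so an orange gets squeezed into whichever known class it resembles most.

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability; the output sums to 1.
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

classes = ["apple", "banana"]

# Pretend the network's final layer produced these scores for an orange:
# round and vaguely apple-shaped, so it scores closer to "apple".
orange_logits = np.array([2.1, 0.3])

probs = softmax(orange_logits)
print(dict(zip(classes, probs.round(3))))      # {'apple': 0.858, 'banana': 0.142}
print("prediction:", classes[probs.argmax()])  # "apple" -- a confident wrong guess
```

Note that the model has no way to say "neither": the two probabilities sum to 1 by construction.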
AI Nutters (Score:1, Funny)
Damn, there is a whole new category here: AI Nutters. I thought Space Nutters were annoying, but AI Nutters are over the top! I just trained my model to look for bananas, and then trained it to look for apples, then trained it to look for sheep with bananas on their head, then trained it for sheep with bananas on their head in a field. Only ten million quadrillion more combinations to go!
Re: AI Nutters (Score:1)
This is totally stupid. It's just using "nothing" as a placeholder for unknown. Just have the AI tag unknown objects as the class unknown, rather than abusing NULL.
AI retards need to talk to any half decent DBA so they can learn the value of proper semantics.
Re: (Score:2)
I think they made a really bad sitcom out of that idea.
Re: AI Nutters (Score:1)
As a developer with a strong DBA background, I totally agree with you, but it's already hard enough to get other devs to stop abusing NULL.
Re: (Score:3)
It's actually not stupid at all. Three-valued logic is very different from the more typical two-valued logic. Removing the law of the excluded middle makes a huge difference in the kind of mathematics you wind up creating (look up the ideas of L. E. J. Brouwer).
Very interesting stuff. I really wish we used a lot more three-valued logic (which clearly calls out "unknown" when appropriate) instead of pretending everything must be black or white, which leads us to claim we know things we actually don't...
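To make the parent's point concrete, here is a minimal Python sketch of Kleene's strong three-valued logic, using None as the "unknown" truth value; the k_* names are just illustrative.

```python
def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False      # False and anything is False, even an unknown
    if a is None or b is None:
        return None       # otherwise an unknown stays unknown
    return True

def k_or(a, b):
    if a is True or b is True:
        return True       # True or anything is True, even an unknown
    if a is None or b is None:
        return None
    return False

# The law of the excluded middle fails: "x or not x" can be unknown.
x = None
print(k_or(x, k_not(x)))  # None, not True
```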
Re: (Score:1)
Insert obligatory /. political axe-grinding here.
Re: (Score:3)
Except that it's harder than that in some ways. All the neural net research has been focused on discriminating A from B from C, etc. If you have a category "I dunno," then when you start training the net, they're ALL unknowns. So being able to distinguish "I know this is not a banana or an apple, even though it looks a lot like an apple if I squint" is kind of hard.
Re: (Score:2)
Depends if your classification scheme can handle a confidence rating.
Re: (Score:2)
When training your DNN, you don't give it the ability to have a "nothing" category. After you are satisfied with the training on the specific categories, you then bring in the "nothing" option. This both tests for accuracy and lets it put unknown objects in a category.
Trying to teach it with the ability of "unknown" would likely lead to a lot of unknowns.
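A rough sketch of the two-stage idea above, using scikit-learn and synthetic 2-D features as stand-ins for images: train an ordinary closed-set classifier first, then bolt on the "nothing" option afterwards by thresholding the maximum predicted probability (roughly the "maximum softmax probability" baseline from the open-set literature; the 0.9 cutoff here is arbitrary).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
apples = rng.normal([0, 0], 0.5, size=(100, 2))   # feature cluster for apples
bananas = rng.normal([4, 4], 0.5, size=(100, 2))  # feature cluster for bananas
X = np.vstack([apples, bananas])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)  # stage 1: ordinary two-class training

def predict_with_nothing(x, threshold=0.9):
    # Stage 2: post-hoc "nothing" via a confidence threshold.
    probs = clf.predict_proba([x])[0]
    if probs.max() < threshold:
        return "nothing"
    return ["apple", "banana"][probs.argmax()]

print(predict_with_nothing([0.1, -0.2]))  # inside the apple cluster -> "apple"
print(predict_with_nothing([2.0, 2.0]))   # between the clusters -> "nothing"
```

Note the caveat raised downthread: a point far from all the training data can still earn a confident score, so this filter mainly catches inputs near the decision boundary.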
Re: (Score:2)
Trying to teach it with the ability of "unknown" would likely lead to a lot of unknowns.
If it doesn't yet know, then those would be correct classifications.
Which is more important, having lots of answers, or having correct answers?
Re: (Score:2)
Not really. In fact, the whole article seems to be reaching to make this some kind of new idea. If you're doing segmentation, which is just classifying every pixel, you always have a background class. It means "something that's not one of the other classes".
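A minimal sketch of the convention the parent describes, with a toy NumPy label map (the class IDs are illustrative): in semantic segmentation every pixel gets a class, and class 0 is conventionally "background," i.e. "something that's not one of the other classes."

```python
import numpy as np

CLASSES = {0: "background", 1: "apple", 2: "banana"}

# A toy 4x4 "image" already segmented: most pixels are background.
label_map = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 2, 0],
    [0, 0, 0, 0],
])

counts = np.bincount(label_map.ravel(), minlength=len(CLASSES))
for cls_id, name in CLASSES.items():
    print(f"{name}: {counts[cls_id]} pixels")  # background: 12, apple: 3, banana: 1
```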
Re: (Score:2)
I had graduate-level courses in AI. And yes, some of the people in the field were indeed nutters.
Re: (Score:2, Insightful)
Yes, it's called progress. I don't really understand the ignorance towards AI on Slashdot; people here seem to fairly prolifically proclaim that because we haven't gone from nothing to the end game of AI in 40 years, then AI isn't real, is full of nutters, is broken, a lie, or whatever other drivel gets thrown at it.
Here's some news, we also haven't got a grand unified theory of everything out of physics in hundreds of years. Guess what? Reaching the end game takes a long, long time. Possibly longer
Re: (Score:1)
I don't really understand the ignorance towards AI on Slashdot; people here seem to fairly prolifically proclaim that because we haven't gone from nothing to the end game of AI in 40 years, then AI isn't real
You seem to have a firm grasp of ignorance, actually.
This category of technology is in its infancy, but the practitioners are constantly making absurd claims. This has resulted in the public perception that self-driving cars are about to hit the market, and other idiocies.
The choices are not merely "idiot" or "Luddite"; there are other options. You could, for example, recognize that much of the work being called "AI" is just academic bullshit being driven by an influx of money into certain types of trained
Re: (Score:2)
The next big development in AI: bug brains. Shallow thinkers ask why bug brains, what a waste of money; clear thinkers think bug brains are cool. A compact AI behaving like a predatory bug could track down and eliminate various pest species -- probably more remote AI than built-in AI. The stuff of sci-fi, and sure, train it to tell the difference between the fruit it is protecting and the bug it is eliminating.
Where did I hear this before? (Score:2)
Easy to do (Score:2)
Any AI worth its weight not only decides Apples or Bananas, but also reports a certainty value.
That is, 80% likely to be Apple, vs 60% likely to be Banana.
Just rule that anything >=60% means "Other" rather than putting it into Banana.
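One way to read that rule, as a tiny sketch (the class names and the 60% cutoff come straight from this comment's example, not from any established API): with independent per-class certainty scores, fall back to "Other" unless exactly one class clears the bar.

```python
def classify(scores, cutoff=0.60):
    # "Other" when zero classes clear the cutoff, or when more than one does.
    winners = [name for name, p in scores.items() if p >= cutoff]
    return winners[0] if len(winners) == 1 else "Other"

print(classify({"Apple": 0.80, "Banana": 0.60}))  # "Other": both clear 60%
print(classify({"Apple": 0.80, "Banana": 0.40}))  # "Apple"
print(classify({"Apple": 0.30, "Banana": 0.20}))  # "Other": nothing clears 60%
```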
Re: (Score:3)
Nope, because that doesn't provide the ability to give direct feedback that the network was correct when it indicated "Other." If I feed it a picture of a hammer, there is a good chance the network will rightly indicate a high chance of banana, because that score is only relative to the possibility it is an apple, and it might pass your filter. The proper output might be something more like 10% apple, 30% banana, 60% neither.
Your solution is just applying rule-based logic to try to filter
Re: (Score:2)
You are claiming a different model. Both the one I used and the one you described are in use currently.
Basically, I was using the independent model, where the object is examined for each identity independently. That is, an object could possibly be graded as 95% Apple and also graded as 70% Banana (though that would be a weird object).
You were using the choose-an-identity model, so that if there were only two definitions, the object graded as 95% Apple would by definition be 5% Banana.
But honestly, my rule works in
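A sketch of the two scoring models being contrasted, with made-up logits: independent per-class sigmoids can rate an object high on several classes at once, while a softmax forces the classes to compete and the scores to sum to 1.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.9, 0.85])  # hypothetical scores for [Apple, Banana]

print("independent:", sigmoid(logits).round(2))  # [0.95 0.7 ] -- both can be high
print("exclusive:  ", softmax(logits).round(2))  # [0.89 0.11] -- must sum to 1
```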
Re: (Score:2)
That won't work. And the reason is kind of subtle: neural networks are good at "memorizing" random data and will fit quite nicely to random collections of images. But what they can't do in that case is generalize well at all.
A lot of the art and science of getting a neural network to work is feeding it appropriate training data that it can not only fit to but generalize from.
References: https://arxiv.org/pdf/1611.035... [arxiv.org]
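The memorization point can be demonstrated in a few lines. Here is a sketch using scikit-learn's MLPClassifier on synthetic data with purely random labels: a big enough network fits them anyway, but by construction there is nothing to generalize from.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 20))
y_train = rng.integers(0, 2, size=200)  # labels are pure noise

net = MLPClassifier(hidden_layer_sizes=(256,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)               # memorizes the noise

X_test = rng.normal(size=(200, 20))
y_test = rng.integers(0, 2, size=200)

print("train accuracy:", net.score(X_train, y_train))  # close to 1.0 (memorized)
print("test accuracy: ", net.score(X_test, y_test))    # near 0.5 (chance)
```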
Complete Nonsense (Score:2)
Re: (Score:1)
Dear God.
Re: (Score:2)
Thank you, I couldn't find the words.
Re: (Score:2)
What you describe would be a powerful thing to implement (and it's trivial to do, really) but it's not AI, it's more along the lines of a self-directed search engine and results parser.
It's hard to define AI, but I don't doubt that something sufficiently AI-ish will appear eventually, and it'll be good enough at what it does that whether it's genuinely intelligent or not will be a distinction without a difference.
AI training will probably never be practical (Score:3)
until there are AI trainers to teach the new AI instances.
Think about it...
Re: (Score:1)
That is like, totally, Inception.
Re: (Score:2)
More seriously, though, that suggests the problem has been solved. If you look at small children, they're taught by adults who can already solve the problem. What you're essentially saying is that AI classifiers will be useless until we have a working AI classifier.
There are other types of AIs that do use this type of appr
It's like how humans work too. (Score:2)
Except we can learn! And stick a qualifier word in front of "apple" to make orange "China's apple". Or "Sinaasappel" in Dutch, "Apfelsine" in German ("Orange" is correct too), etc.
Maybe they should focus on NNs that aren't frozen for production, but still have some plasticity left, and stimulators to tell them if they have been good.
Why not? ... Chicken??
What's the matter, Sandurz?
Re: (Score:2)
AIs in production with learning capability? What a great idea!
Let's build a chatbot with that "feature" and deploy it online. We can call it Tay [theguardian.com]. It'll be awesome!
Re: (Score:2)
Kinda was, to be honest. I'm sure there are sociologists who would love to study that kind of interaction.
Re: (Score:2)
I was surprised that the article didn't mention that the twits at 4chan deliberately set out to corrupt Tay. The Guardian is generally better than that.
many different kinds of nothing (Score:2)
While they might work on their training set, there are always going to be things outside their field of encounter with which they must deal, sort of like the Gödel incompleteness theorem for algorithms.
I have heard it said that the old idea about the brain was that the left brain deals with analytics, while the right brain deals with creativity. The newer reformulation of this idea is that the left hemisphere deals with predictability and established pattern
Re: (Score:2)
In the beginning, there was nothing.
Then God said, "Let there be light!"
And there still was nothing
But you could see it.
Finally! (Score:2)
An AI that can track my productivity at work!
The Chinese already have this tech, but... (Score:2)
Why not Zoidberg? (Score:2)
Woop woop woop!
There's always something (Score:2)
Unless you provide no data, there's always something. An empty tabletop, or an empty driveway, etc. It's easy to recognize when there's actually nothing, because there's no input.
Reminds me of a Metallica song...
Re: (Score:2)
It's easy to recognize when there's actually nothing, because there's no input.
Congratulations, you've disproven the existence of sight, because blind people.
A better title than "Nothing" is ... (Score:2)
Knowing what you don't know is essential to wisdom, be it artificial or natural. AFAIK all modern AI systems simply guess; they never consider answering "none of the above," or the likelihood that the question is atypical, or even whether they shouldn't answer because they're clueless.
Curiously, in the olden days of AI before deep nets, logic- and knowledge-based methods were much more common. Use of circumscription and counterfactuals was central to building models and systems that could reason a
nothing to worry about (Score:1)
The label "nothing" seems to be just the author referring to his example of an empty box containing nothing, something people learn to recognize as a concept of "zero" items in a box.
What he actually seems to be talking about is identifying anomalies, or instances that seem to greatly deviate from previously seen instances. The author being a CEO of an "AI-powered visual inspection company". So an example of his problem is to find defective items (e.g. scracthed) on an assembly line.
With very few instances
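A sketch of that anomaly-detection framing, assuming the inspected items have already been reduced to feature vectors (the vectors below are synthetic stand-ins, not real image features): fit a one-class model such as scikit-learn's IsolationForest on good items only, then flag anything that deviates.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
good_items = rng.normal(0, 1, size=(500, 8))  # features of defect-free items

detector = IsolationForest(random_state=0).fit(good_items)

new_items = np.vstack([
    rng.normal(0, 1, size=(3, 8)),  # three more normal items
    rng.normal(6, 1, size=(1, 8)),  # one far-off "scratched" item
])
print(detector.predict(new_items))  # 1 = normal, -1 = anomaly; expect [1 1 1 -1]
```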
Whats next? (Score:1)
I think the next major step for AI is Skynet.
Nothing Nulls Noone Nohow (Score:2)
Actually, there is a difference between nothing, null, noone, and nohow.
Nothing is literally nothing. We observed the absence.
Null is we didn't test it, it's a cat in a box, could be there, could not be there, could be a Zargon.
Noone is we did test it, we know it's not a person. Still could be a Zargon. Here, kitty kitty.
Nohow is we did observe it, the results are squirrelly, it's definitely a Zargon, cause it ain't human.
AI frequently can't handle any of those. People who code them aren't good at set theor
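For what it's worth, the parent's four states as a hypothetical Python enum (Zargons included). The point stands: "absence confirmed," "not tested," "tested negative," and "inconclusive" are different outcomes that a single NULL, or a forced class guess, collapses together.

```python
from enum import Enum

class Outcome(Enum):
    NOTHING = "observed; absence confirmed"
    NULL = "not tested; could be there, could not, could be a Zargon"
    NOONE = "tested; not a person (Zargon not ruled out)"
    NOHOW = "observed; results squirrelly and inconclusive"

for outcome in Outcome:
    print(f"{outcome.name}: {outcome.value}")
```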
Mein Gott. (Score:1)
Today, it tells Nothing from Something. Tomorrow, it tells Coffee without Cream from Coffee without Milk.
Pictures of the real world (Score:1)
Digital information systems have been dealing with "yes," "no," and "unknown" outcomes for decades; it's revealing that AI "science" is only starting to include this.
Part of the reason is the small problem space that AI originally faced: only valid data was input into the computer, so the "unknown" outcome wasn't required. Nowadays, that data comes directly from 3D (or worse, 2D) cameras, as pictures of the real world containing a lot of unknowns.
Much of AI is devoted to finding a yes/no answer but th
Re: (Score:2)
No, it's a bullshit article from somebody who doesn't know what they're talking about.
It's quite common to include a background or "none of the above" class. In fact, it's essential in many applications, particularly segmentation.
shouldn't there be a complete (Score:2)