AI Technology

Research Highlights How AI Sees and How It Knows What It's Looking At

anguyen8 writes Deep neural networks (DNNs) trained with Deep Learning have recently produced mind-blowing results in a variety of pattern-recognition tasks, most notably speech recognition, language translation, and recognizing objects in images, where they now perform at near-human levels. But do they see the same way we do? Nope. Researchers recently found that it is easy to produce images that are completely unrecognizable to humans, but that DNNs classify with near-certainty as everyday objects. For example, DNNs look at TV static and declare with 99.99% confidence that it is a school bus. An evolutionary algorithm produced the synthetic images by generating pictures and selecting for those that a DNN believed to be an object (i.e. "survival of the school-bus-iest"). The resulting computer-generated images look like modern, abstract art. The pictures also help reveal what DNNs learn to care about when recognizing objects (e.g. a school bus is alternating yellow and black lines, but does not need to have a windshield or wheels), shedding light on the inner workings of these DNN black boxes.
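The "survival of the school-bus-iest" loop is easy to sketch. Below is a minimal, hypothetical stand-in: the "DNN" is just a fixed random linear classifier with a softmax (not AlexNet, and all shapes are made up), but the same hill-climbing search drives its confidence toward certainty on what remains pure noise to a human.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained DNN: a fixed linear "school bus detector"
# followed by a softmax. W is a hypothetical weight matrix, not a real net.
W = rng.normal(size=(2, 64))           # 2 classes: [not-bus, bus]

def confidence(img):
    logits = W @ img
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p[1]                        # P("school bus")

# Evolutionary search: mutate random noise, keep the "school-bus-iest".
img = rng.normal(size=64)
best = confidence(img)
for _ in range(2000):
    child = img + rng.normal(scale=0.1, size=64)
    c = confidence(child)
    if c > best:
        img, best = child, c

# The evolved image is still noise to a human, yet the classifier
# is now near-certain it is a bus.
print(best)
```

The real paper evolves images against a deep convolutional network rather than a linear model, but the selection pressure is the same: keep whatever the classifier scores highest, regardless of how it looks to us.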
  • by HornWumpus ( 783565 ) on Wednesday December 17, 2014 @04:19PM (#48620347)

    Unfortunately, they are wrapped around a tree just around the corner. It mistook a bee 3 inches from the camera for a school bus.

    • by peon_a-z,A-Z,0-9$_+! ( 2743031 ) on Wednesday December 17, 2014 @06:29PM (#48621627)

      Every time I see this topic appear on Slashdot (Last time [slashdot.org]) I think:

      You're putting a neural network (NN) through a classification process where it is fed this image as a "fixed input", where the input's constituent elements are constant, and you ask it to classify the image the same way a human would. The problem with this comparison is that the human eye does not see a "constant" input stream; the eye captures a stream of images, each slightly skewed as your head moves and the image changes slightly. Based on this stream of slightly different images, the human identifies an object.

      However, in this research, time and again a "team" shows a "fault" in a NN by taking a single, nonvarying image input to a NN and calling it a "deep flaw in the image processing network", and I just get a feeling that they're doing it wrong.

      To your topic though: You better hope your car is not just taking one single still image and performing actions based on that. You better hope your car is taking a stream of images and making decisions, which would be a completely different class of problem than this.
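      The stream-of-frames idea is easy to prototype. Below is a hedged sketch in plain NumPy, with a made-up linear softmax classifier standing in for the DNN (all names and shapes are illustrative): averaging predictions over jittered copies of a frame makes the decision depend on more than one fixed input.

```python
import numpy as np

rng = np.random.default_rng(1)

def classify(img, W):
    """Softmax class probabilities for a single frame (toy linear model)."""
    logits = W @ img
    e = np.exp(logits - logits.max())
    return e / e.sum()

def classify_stream(img, W, n_frames=32, jitter=0.5):
    """Average predictions over jittered copies of the frame, mimicking
    the slightly different views an eye gets as the head moves."""
    preds = [classify(img + rng.normal(scale=jitter, size=img.shape), W)
             for _ in range(n_frames)]
    return np.mean(preds, axis=0)

W = rng.normal(size=(3, 16))      # 3 classes, 16-pixel "image"
frame = rng.normal(size=16)
print(classify_stream(frame, W))  # averaged probabilities, still sum to 1
```

      An image that only wins by exploiting one exact pixel pattern tends to lose its advantage once the input is perturbed, which is one crude way to approximate the commenter's "stream of slightly different images".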

      • by reve_etrange ( 2377702 ) on Wednesday December 17, 2014 @06:35PM (#48621691)

        You better hope your car is not just taking one single still image and performing actions based on that.

        In fact, most of them don't use computer vision much at all. Google's self-driving car, for example, uses a rotating IR laser to directly measure its surroundings.

        • Great point.

          It's reassuring that the decision-makers in that process consider alternative ideas; basing the goal on 'human-like' sight would leave a lot of room for error (given limitations of even human perception and classification capabilities!)

          • It's reassuring that the decision-makers in that process consider alternative ideas; basing the goal on 'human-like' sight would leave a lot of room for error

            It's true, but using 3D laser mapping feels a little bit like cheating - after all, human drivers don't need nearly that much information. A successful computer vision approach would be a lot more impressive, even if it was too dangerous for the highway.

        • That is because computer vision is not yet good enough.
          However, Google's rotating laser costs $70,000. And that's just the laser; you still have to pay for the car under it.

          While large-scale production could lower that significantly, it might be better to start with a $100 camera and a $1000 neural net computer.

          • I guess the price goes a long way towards showing how much more impressive the computer vision approach would be, if it worked. A (stereoscopic) camera produces much less information than LIDAR, and visual tasks are always deceptively complex.
  • makes it seem like the computers are morons. Anything that is black and yellow is a school bus...mmmmm nope.
    • makes it seem like the computers are morons. Anything that is black and yellow is a school bus...mmmmm nope.

      black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow bl

    • by babymac ( 312364 )
      But don't worry. I'm sure the armchair experts of Slashdot will be along any minute to tell us how this is all just a bunch of hype and that the computers are stupid (I'm not disagreeing - for the moment) and AI is at least ten million years away and will likely NEVER come to pass. Seriously though, I think a large portion of this site's users have their heads in the sand. I don't work in the field, but I am very interested in it and I read a lot of material from a lot of reputable sources. It seems to me
      • There's a tremendous gap between the "AI" that researchers are working on and artificial general intelligence. The algorithms used in AI systems are almost always very simple. These algorithms are simply not going to make this leap and become what we would consider intelligent. It's like expecting Google search to suddenly gain sentience. My favorite quote about this is "Believing that writing these types of programs will bring us closer to real artificial intelligence is like believing that someone climbing a tree is making progress toward reaching the moon."
        • Re:This synopsis (Score:4, Interesting)

          by Anguirel ( 58085 ) on Wednesday December 17, 2014 @10:21PM (#48622725)

          There's also a tremendous gap between what we consider complex and what we consider simple. For example, the brain is complex. However, individual elements of our brains are incredibly simple. Basic chemical reactions. Neurons firing or not. It's the sheer number of simultaneous simple pieces working together that makes it complex.

          Lots of simple AI algorithms all working together make the complexity. This isn't climbing a tree. It's one person poking at chemicals until they get high-energy combustible fuels, and another playing with paper to make paper airplanes better, and a third refining ceramics and metals to make them lighter and stronger and to handle different characteristics, and then they all get put together and you have a person on the moon.

          The illusion is that you think we need to make a leap to get from here to there. There's never a leap. It's lots of small simple steps that get you there.

          • Well, if you have the idea of a rocket, yes you can put the parts together and make a rocket. But no one has an idea of how to make a working general artificial intelligence. That's the leap. What are the parts we need? How do we put them together? No one has a clue! If you know how to do it, write it up in a thesis, collect your PhD, and make billions.
            • by Anguirel ( 58085 )

              Plenty of people have ideas for how to make a working general artificial intelligence. Some of them might even be correct. No one has the funding necessary for what some would call the "easy" versions, because they require a lot of research into things that aren't computers. If we study neurons to the point where we can simulate them reasonably accurately, we can probably simulate a brain, and have it work more-or-less correctly. However, we're not even that good at figuring out what a single neuron is going to do.

          • There are exceptions. Calculus is a good example; that's why everyone knows the name Newton more than three centuries after his death: calculus and his laws of motion enabled the leap called the Industrial Revolution and inspired the social leap known as the Enlightenment.
            • by jbengt ( 874751 )

              Calculus is a good example; that's why everyone knows the name Newton more than three centuries after his death . . .

              Then how come so few know the name Leibniz?

            • by Anguirel ( 58085 )

              "If I have seen further it is by standing on the shoulders of giants."

          • The illusion is that you think we need to make a leap to get from here to there. There's never a leap. It's lots of small simple steps that get you there.

            That is true if the ultimate goal is not impossible.

            No number of small simple steps is going to lead to time travel.

            The only way to prove that true AI (General AI or whatever you want to call it) is possible is to make something with true AI.

        • It's like expecting Google search to suddenly gain sentience

          Meet Watson [youtube.com]; it beat the best humans in the open-ended problem domain of "game show trivia" using natural language processing. When it won the Jeopardy championship it had 20 tons of air-conditioning and a room full of servers. Today it runs on a "pizza box" server and you can try it out yourself [ibm.com]. After Jeopardy it went back to working with various medical institutes, where it was trained and fed on a steady diet of medical journals; it's now well past the point where it became knowledgeable enough to pass

      • You must read a different version of slashdot than me.

        I thought the consensus here was that AI is "just an engineering problem" (like terraforming Mars) and will probably be here by next Tuesday.

        What is odd is that people here seem to think that computer programming will be exempt from the effects of real AI. I'd think it would be one of the first things to go.

    • makes it seem like the computers are morons.

      Makes it seem like the people choosing the training sets are morons.

      • by Entrope ( 68843 )

        The people choosing the training sets are not morons at all. This "research" is almost exactly analogous to finding that this year's SAT can be passed by feeding it a fixed pattern of A, C, D, A, B, and so forth -- and then declaring that this means standardized testing is easy to fake out. They are exploiting the particular structure of a particular instance of a DNN. It is not surprising that they can find odd images that make a DNN answer "yes" when the only question it knows how to answer is "is this

    • by bouldin ( 828821 )
      No, they just aren't anywhere "near-human."
  • Reverse OCR (Score:5, Interesting)

    by yarbo ( 626329 ) <moderkakaNO@SPAMgmail.com> on Wednesday December 17, 2014 @04:27PM (#48620421)

    Reminds me of the reverse OCR tumblr. It generates patterns of squiggles a human could never read but the OCR recognizes as a word.

    http://reverseocr.tumblr.com/ [tumblr.com]

  • by shadowrat ( 1069614 ) on Wednesday December 17, 2014 @04:31PM (#48620473)
    idk, these results seem more similar to how humans see than different. When people don't know exactly what they are looking at, the brain just puts in its best guess. people certainly see faces and other familiar objects in tv static. They see bigfoot in a collection of shadows or a strange angle on a bear. i even feel like i did sort of see a peacock in the one random image labeled peacock. it's sort of like the computer vision version of a rorschach test.
    • If the network was trained to always return a "best match" then it's working correctly. To return "no image", it would need to be trained to be able to return that, just like humans are given feedback when there is no image.
      • by jfengel ( 409917 )

        It's not just returning a matched image, though. It's also returning a confidence level, and in the cases they've discovered, it's returning 100% confidence. That's clearly wrong.
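        The 100% figure follows from how these networks typically report "confidence": a softmax over class scores. The sketch below (plain NumPy, no real network; the logits are arbitrary made-up numbers) shows that the output must sum to 1 over the known classes, so a near-1.0 top score only means one logit dwarfs the others; it is not a measure of how much the input resembles the training data.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# Arbitrary class scores, e.g. produced by garbage input: the
# resulting probabilities still must sum to 1 over the known classes.
p = softmax(np.array([12.0, 1.0, 0.5]))

print(p.sum())  # 1.0: there is no "none of the above" bucket
print(p[0])     # ~0.9999: near-certainty from arbitrary logits
```

        So a softmax "confidence" is a forced choice among the classes it knows, which is exactly why a net can be both maximally confident and completely wrong on an out-of-distribution image.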

        • It's also returning a confidence level, and in the cases they've discovered, it's returning 100% confidence. That's clearly wrong.

          What, you've never been SURE you were right, and then later found out you were wrong?

          Nothing wrong with being wrong with confidence. Sounds like the majority of humanity the majority of the time.

          Now, does this mean that the AI is useful? Well, it's useful for finding out why it's 100% certain, but wrong. In the field, not so much.

          • Nothing wrong with being wrong with confidence. Sounds like the majority of humanity the majority of the time.

            Right, and it has created a great deal of misery throughout human history. Just because it is prevalent does not mean it is not a problem.

            More specifically, the overconfidence displayed by the networks here should lead to a corresponding skepticism, in a rational observer, to the notion that they have cracked the image recognition problem.

          • by jfengel ( 409917 )

            Nothing wrong with being wrong with confidence. Sounds like the majority of humanity the majority of the time.

            Oh, it definitely sounds like the majority of humanity the majority of the time. I just don't think it's one of our more admirable traits.

            In our case, it's necessary, because we evolved with mediocre brains. I'd like to see our successors do better. They aren't yet, which is what this article is pointing out. This promising system isn't ready yet. It's just not wrong for the reasons that the GGP post thought.

      • The DNN examples were apparently trained to discriminate between members of a labeled set. This only works when you have already cleaned up the input stream (a priori) and guarantee that the image must be an example of one of the classes.

        These classifiers were not trained on samples from outside the target set. This causes a forced choice: given this random dot image, which of the classes has the highest confidence? Iterate until confidence is sufficiently high, and you have a forgery with the same features.

        • The DNN examples were apparently trained to discriminate between members of a labeled set. This only works when you have already cleaned up the input stream (a priori) and guarantee that the image must be an example of one of the classes.

          These classifiers were not trained on samples from outside the target set.

          This is not some network hastily trained by people who are ignorant of a very basic and long-known problem: "Clune used one of the best DNNs, called AlexNet, created by researchers at the University of Toronto, Canada, in 2012 – its performance is so impressive that Google hired them last year." From a paper by the developers of AlexNet: "To reduce overfitting in the globally connected layers we employed a new regularization method that proved to be very effective."

          It does not seem plausible that this

      • If the network was trained to always return a "best match" then it's working correctly. To return "no image", it would need to be trained to be able to return that, just like humans are given feedback when there is no image.

        It seems highly unlikely that such an elementary mistake was made: "Clune used one of the best DNNs, called AlexNet, created by researchers at the University of Toronto, Canada, in 2012 – its performance is so impressive that Google hired them last year."

        The fact that the net returns a confidence level implies that it does have a way to return a result of 'not recognized'.

    • It might be similar but it's not the same mechanism. When you see an object in static, your brain knows that it's just making a guess so the guess is assigned low confidence. But here they showed that you can actually design a picture that looks random but is assigned very high confidence of being an object.

      This type of phenomenon is very well known. It's not news; people have known about this sort of stuff in artificial neural nets since the '80s. I guess they just sort of assumed that deep belief nets wou

      • These are Deep Neural Networks, not Deep Belief Networks... I think DBNs might be able to calculate a novelty metric that DNNs aren't capable of calculating... This might enable them to say that an image is unlike anything it has been trained on, and therefore lower the confidence of the rating, and so, may not be vulnerable to this problem.

        Of course, no one has managed to train a DBN to be as successful at the ImageNet problems as DNNs, so there's still some way to go before this hypothesis can be tested.

        • The distinction between deep belief networks (based on graphical models) and deep neural networks (based on perceptrons and backprop) is an imprecise one. You could argue that DNNs are just a subtype of DBNs, and yes, the only 'successful' DBNs so far have been DNNs. When people speak of DBNs they almost always mean DNNs.

          • DNNs generally consist of (ignoring convolution and max-pooling) rectified linear units trained with back propagation...

            DBNs are basically stacked RBM autoencoders using binary stochastic units.

            So, while people may basically mean the same thing, I don't think they are... DNNs create hyperplanes that carve up the input space, and while DBNs actually do the same thing, you can calculate an 'entropy' or 'energy level' between layers... with familiar images being low energy and novel images having high energy.

    • by Kjella ( 173770 )

      I think it was fairly clear what was going on: the neural networks latch on to conditions that are necessary but not sufficient, because they found common characteristics of real images but never got any negative feedback. Like in the peacock photo, the colors and pattern are similar but clearly not in the shape of a bird; but if it's never seen any non-bird peacock-colored items, how's the algorithm supposed to know? At any rate, it seems like the neural network is excessively focusing on one thing, maybe it

      • I think it was fairly clear what was going on, the neural networks latch on to conditions that are necessary but not sufficient because they found common characteristics of real images but never got any negative feedback.

        You seem to be suggesting that it is 'simply' a case of overfitting, but overfitting is a problem that has been recognized for some time. I don't doubt that the developers of these networks have thought long and hard about the issue, so this study suggests that it is a hard and as-yet unsolved problem in this domain.

        One thing that humans use, but which these systems do not seem to have developed, is a fairly consistent theory of reality. Combine this with analytical reasoning, and we can take a difficult i

    • When people don't know exactly what they are looking at, the brain just puts in its best guess. people certainly see faces and other familiar objects in tv static. They see bigfoot in a collection of shadows or a strange angle on a bear.

      Yes, I think it's very interesting when you look at Figure 4 here [evolvingai.org]. They almost look like they could be an artist's interpretation of the things they're supposed to be, or a similarity that a person might pick up on subconsciously. The ones that look like static may just be the AI "being stupid", but I think the comparison to human optical illusions is an interesting one. We see faces because we have a bias to see them. Faces are very important to participating in social activities, since they give many

      • The computer isn't trying to find food or avoid predators, so what is it "trying to do" when it "sees"?

        Fortunately we know this because we (in the general sense) designed the algorithms.

        It's trying very specifically to get a good score on the MNIST or ImageNet datasets. Anything far away from the data results in funny results. I'm not being glib. This results in the following:

        One generally assumes that the data lies on some low-dimensional manifold of the 256x256-dimensional space (for 256x256 greyscale images).

        • I think I understand... vaguely. To simplify, you're saying it's been trained on a specific dataset, and it chooses whichever image in the dataset the input is most like. It doesn't really have the ability to choose "unknown" and must choose an image from the dataset that it's most like. Its "confidence" in the choice is not really based on similarity to the image it has chosen, but instead based on dissimilarity to any of the other images. Therefore, when you give it garbage, it chooses the image that the garbage most resembles.

          • I think I understand... vaguely. To simplify, you're saying it's been trained on a specific dataset, and it chooses whichever image in the dataset the input is most like.

            A bit.

            It's easier to imagine in 2D. Imagine you have a bunch of height/weight measurements and a label telling you whether a person is overweight. Plot them on a graph, and you will see that in one corner people are generally overweight and in another corner, they are not.

            If you have a new pair of measurements come along with no label, you can guess the label from where the point falls on the graph.
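            A runnable version of that 2D picture (all numbers are hypothetical): fit a logistic "overweight" classifier on synthetic height/weight points with plain gradient descent, then query a point far outside the cloud. The model still reports an extreme probability, because a linear boundary has to put every point on one side or the other.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic (height_cm, weight_kg) pairs with a made-up linear rule:
# label 1 ("overweight") when weight outpaces height by enough.
X = rng.normal([170, 70], [10, 15], size=(200, 2))
y = (X[:, 1] - 0.5 * X[:, 0] > -15).astype(float)

# Plain gradient descent on logistic loss (features scaled, bias added).
Xb = np.hstack([X / 100.0, np.ones((200, 1))])
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

def prob_overweight(height, weight):
    z = np.array([height / 100.0, weight / 100.0, 1.0]) @ w
    return 1.0 / (1.0 + np.exp(-z))

# A physically absurd query, far outside anything seen in training,
# still gets an extreme, meaningless probability.
print(prob_overweight(height=170, weight=1000))
```

            The analogy to the fooling images: the probability says which side of the learned line you are on and how far from it, not whether the query looks anything like the training data.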

    • i even feel like i did sort of see a peacock in the one random image labeled peacock.

      I know what you mean, but did you see a peacock before you read the label?

  • For example, DNNs look at TV static and declare with 99.99% confidence it is a school bus.

    Unless it's static of an image of a school bus, these things sound utterly useless.

    According to TFS, Charlie Brown is a school bus.

    It's OK, if AI is this stupid, we need not worry about it taking over any time soon.

    • by vux984 ( 928602 )

      It's OK, if AI is this stupid, we need not worry about it taking over any time soon.

      Or that when it takes over it will make catastrophically bad decisions for us.

    • by TheCarp ( 96830 )

      > It's OK, if AI is this stupid, we need not worry about it taking over any time soon.

      If only that worked for congress.

    • by itzly ( 3699663 )
      Depends on what you mean by "soon". In the early '80s people were laughing about computers trying to play chess. In the late '90s, a (large) computer beat the world champion in a match. Today, a smartphone could do the same. Humans make silly mistakes with optical illusions too, by the way.
      • by dcw3 ( 649211 )

        You're off by about a decade. I was playing chess on machines like Boris and Chess Challenger back in those days. And while they were easy for a serious chess player to beat, they'd typically beat a novice. This is from http://www.computerhistory.org... [computerhistory.org]

        Until the mid-1970s, playing computer chess was the privilege of a few people with access to expensive computers at work or school. The availability of home computers, however, allowed anyone to play chess against a machine.

        The first microprocessor-based

      • In the early '80s people were laughing about computers trying to play chess.

        Were they? I'm not sure they were laughing about it. By the early 90s you could buy rather slick chess computers which had a board with sensors under each square (pressure in the cheap ones, magnetic in the fancy ones), and LEDs up each side to indicate row/column.

        You could play them at chess and they'd tell you their moves by flashing the row/column lights. Those weren't just programs; by that stage they were full-blown integrated chess computers.

  • My composter helped me wreck a nice beach.

  • Here is an article from Vice of all places about this research, from June http://motherboard.vice.com/re... [vice.com]

    Research paper here: http://cs.nyu.edu/~zaremba/doc... [nyu.edu]

    Also, a funny video demonstrating the rudimentary nature of Nintendo DS brain-training pattern recognition: https://www.youtube.com/watch?... [youtube.com]

  • e.g. a school bus is alternating yellow and black lines, but does not need to have a windshield or wheels

    Then this [nocookie.net] is also a school bus.

  • I have been assured many, many times by the experts of Slashdot that computers are nowhere near achieving artificial intelligence.
    • I have been assured many, many times by the experts of Slashdot that computers are nowhere near achieving artificial intelligence.

      er... and?

  • Research Highlights How a Deep Neural Network Trained With Deep Learning Sees and How It Knows What It's Looking At

    There, fixed that for you.

    Why is using the term "AI" wrong in this headline?
    #001: Because industry experts don't agree on what AI is
    #010: Because most of the definitions of AI are much broader than what the article is talking about
    #011: Because at least one definition of AI says something like "if it exists today, it's not AI" - including "beyond the capability of current computers" or something similar as a defining condition of the term "AI"

  • I know how they created the images, so I know it's not really an image of a backpack so much as static that has been messed with by someone in Photoshop. However, if you showed me that, backpack would be high on my list of guesses.

    That one really does look to me like someone washed out an image of a backpack with static.

  • by preaction ( 1526109 ) on Wednesday December 17, 2014 @05:02PM (#48620859)

    a DNN is only interested in the parts of an object that most distinguish it from others.

    So it needs to learn that these exact images are tricks being played on it, so it can safely ignore them. This is exactly what machine learning is. What's the story?

    • They tried that, but it didn't make a huge difference (the resulting network was still easily 'fooled' with similar images).

      The big thing to realize here is that the algorithm that generates the fooling images specifically creates highly regular images ("images [that] look like modern, abstract art"). The repeated patterns are very distracting to the human eye, whereas the DNN pretty much ignores them. See figure 10 in the paper (http://arxiv.org/pdf/1412.1897v1.pdf ). It is necessary to take into account t

      • by Entrope ( 68843 )

        The researchers also basically cheated by "training" their distractor images on a fixed neural network. People have known for decades that a fixed/known neural network is easy to fool; what varies is exactly how you can fool it. The only novel finding here is their method for finding images that fool DNNs in practice -- but the chances are overwhelmingly high that a different DNN, trained on the same training set, would not make the same mistake (and perhaps not make any mistake, by assigning a low probability to every class).

        • The researchers also basically cheated by "training" their distractor images on a fixed neural network.

          That's hardly fair: they were trying to find images that fooled the network. What better way to do that than feeding images in until you find a good one (with derivatives)?

          The only novel finding here is their method for finding images that fool DNNs in practice -- but the chances are overwhelmingly high that a different DNN, trained on the same training set, would not make the same mistake (and perhaps not
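          "With derivatives" is the key idea: when the network is fixed and differentiable, you can run gradient ascent on the input itself rather than descent on the weights. A toy version below, with a made-up linear softmax standing in for the real DNN (a real attack would backpropagate through the trained network instead):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fixed, differentiable "network": logits = W @ x.
W = rng.normal(size=(2, 64))

def target_prob(x):
    z = W @ x
    e = np.exp(z - z.max())
    return (e / e.sum())[1]        # P(class 1)

# Gradient ascent on the input; the weights never change.
# d/dx log p_1 = W[1] - sum_k p_k * W[k]
x = rng.normal(size=64)
for _ in range(200):
    z = W @ x
    p = np.exp(z - z.max())
    p /= p.sum()
    x += 0.1 * (W[1] - p @ W)

print(target_prob(x))  # driven arbitrarily close to 1.0
```

          Because the optimization target is one particular frozen network, the resulting image is tailored to that network's quirks, which is exactly the objection raised above.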

          • by Entrope ( 68843 )

            Why was my characterization of their approach "hardly fair"? Someone -- either the researchers or their press people -- decided to hype it as finding a general failing in DNNs (or "AI" as a whole). The failure mode is known, and their particular failure modes are tailored to one particular network (rather than even just one training set). I think the "hardly fair" part is the original hyperbole, and my response is perfectly appropriate to that. The research is not at all what it is sold as.

            Don't multi-c

            • Why was my characterization of their approach "hardly fair"?

              You called it cheating.

              Someone -- either the researchers or their press people -- decided to hype it as finding a general failing in DNNs (or "AI" as a whole).

              It pretty much is. If you input some data far away from the training set you'll wind up at a completely arbitrary point in the decision boundary.

              The research is not at all what it is sold as.

              The research shows very nicely that the much-hyped deep learning systems are no different in many ways from earlier neural networks.

              • by Entrope ( 68843 )

                I called it cheating because they violated both one of the prime rules of AI (train on a data set that is more or less representative of the data set you will test with) and one of the prime rules of statistics (do not apply a priori statistical analysis when you iterate with feedback based on the thing you estimated). Their test images are intentionally much different from the training images, which is one of the first things an undergraduate course on AI will talk about. They also use what are essentiall

                • I called it cheating because they violated both one of the prime rules of AI: train on a data set that is more or less representative of the data set you will test with, and one of the prime rules of statistics

                  But they're not trying to do that. They're trying to debunk the claims of "near human" performance, which they do very nicely by showing that the algorithms make vast numbers of mistakes when the data in is not very, very close to the original data.

                  They also present a good way of finding amusing failures.

    • So it needs to learn that these exact images are tricks being played on it, so it can safely ignore them.

      No. Learning that the "exact images" presented here are tricks would not be a solution to the problem revealed by this study. The goal in any form of machine learning is software that can effectively extrapolate beyond the training set.

      What's the story?

      Once you understand the problem, you will see what the story is.

  • These are computer programs, not artificial intelligences as some have come to think of them. They are simply some charges flipping around in some chips. There is no seeing or recognizing in human terms. We apply all that consciousness crap.

    In this case, the neural networks are randomly formed nets that match up a few pixels here and there then spit out a result. There is no seeing. Increase the complexity a thousand times over and there will still be no seeing, but there might, might, might be less sh

    • These are computer programs, not artificial intelligences as some have come to think of them. They are simply some charges flipping around in some chips.

      And minds are just charges flipping around in some brain (at one level of abstraction, it is chemical, but chemistry is explained by the movement of charges.)

      As John Searle said, brains make minds.

      Everything else is just speculating.

      If you look at John Searle's arguments in detail, they ultimately end up as nothing more than "I can't believe that this is just physics." Searle's view is actually rather more speculative than the one he rejects, as it implies an unknown extension to atomic physics [wikipedia.org].

      Nevertheless, none of what I write here should be construed as a claim that artificial

      • Brains are charges and chemistry, but minds are something else, though clearly connected. Brains make minds, we know that. There is no reason to think that anything else can make a mind. There are some philosophers who say that a thermostat has a mind, but that's pretty clearly bullshit. These neural nets are simply primitive and chaotic data filters. Yes, at some point an AI will be able to convince us that it is conscious, but there will be no reason to think it is anything but a parlor trick. Until

  • Computer learns to pick out salient features to identify images. Then we are shocked that when trained with no supervision the salient features aren’t what we would have chosen.

    I see this as a great ah-ha moment. Humans also have visual systems that can be tricked by optical illusions. The patterns presented, while seemingly incomprehensible to us, make sense to computers for the same reason our optical illusions do to us -- taking shortcuts in visual processing that fire on patterns not often

    • by Entrope ( 68843 )

      The neural networks in question were absolutely trained with supervision. Unsupervised learning [wikipedia.org] is a quite different thing.

    • Computer learns to pick out salient features to identify images. Then we are shocked that when trained with no supervision the salient features aren’t what we would have chosen.

      There is a huge difference: humans pick relevant features guided by a deep understanding of the world, while machine learning, unguided by any understanding, only does so by chance.

      Now that we know what computers are picking out as salient features, we can modify the algorithms to add additional constraints on what additional salient features must or must not be in an object identified, such that it would correspond more closely to how humans would classify objects. Baseballs must have curvature for instance not just zig-zag red lines on white.

      Hand-coded fixes are not AI - that would be as if we had a higher-level intelligent agent in our heads to correct our mistakes (see the homunculus fallacy [logicallyfallacious.com]).

  • I mean an AI that looks at static and says it's a school bus 99.99% of the time seems to be about as broken as could be. The researchers have to be the most optimistic folks in the world if they still think there's a pony in there. I'd be seriously thinking about scrapping the software (or, at least, looking for a bad coding error) and/or looking for an entirely new algorithm after achieving results that bad.

    • by snkline ( 542610 )
      It doesn't see a school bus in static 99.99% of the time; the percentage is the confidence measure reported by the ANN. Given certain images of static the program will say "I am 99.99% confident that is a school bus".
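The distinction is easy to see in a tiny softmax sketch (the logit values below are made up for illustration): even a modest margin in raw activations turns into near-certainty after normalization, which is why a network can report 99.99% confidence on an input it has effectively never seen.

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical raw activations a network might emit for a noise image:
# one class (say, "school bus") scores higher than the rest, and
# softmax converts that margin into near-certainty.
logits = np.array([12.0, 2.0, 1.0, 0.5])  # invented values
probs = softmax(logits)
print(round(probs[0], 4))  # -> 0.9999
```

The network is not asserting "this is a school bus"; it is asserting "of the classes I know, school bus fits best", and the softmax makes even a weak best fit look decisive.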
  • I think this distillation by a neural network could also prove useful for making new icons and symbols though. Could prove useful in a reverse application by using them to break down stuff, have a human review it, and modify it back again into something recognizable by us on a more fundamental level.
  • In the pictures from the last link, I clearly see the gorilla and the backpack.

    Those images remind me of what you get with some edge-detection filters commonly used to enhance image features.

  • by troll ( 4326 )

    Think of the global implications to surrealism!

  • by volvox_voxel ( 2752469 ) on Wednesday December 17, 2014 @06:31PM (#48621651)

    I've done some image processing work. It seems to me that you can take the output of this neural network and correlate it with some other image processing routines, like feature detection, feature metrology, etc.: a conditional-probability-based decision chain, etc.

    I work at a start-up that makes 3D laser-radar (LIDAR) vision sensors for robotics, autonomous vehicles, and anti-collision systems. The other day, I learned that such sensors allow robots to augment their camera vision systems to get a better understanding of their environment. It turns out that it's still an unsolved problem for a computer vision system to unambiguously recognize that it's looking at a bird or a cat; it can only give you probabilities. A LIDAR sensor instantly gives you a depth measurement out to several hundred meters that you can correlate your images to. The computer can combine the color information with the depth information to get a much better idea of what it's looking at. An anti-collision system has to be certain what it's looking at, and cameras alone aren't good enough. I find it pretty exciting to be working on something that is useful for AI (artificial intelligence) research. One guy I work with got his Ph.D. using Microsoft's Kinect sensor, which gives robots depth perception in close-up environments.

    “In the 60s, Marvin Minsky (a well known AI researcher from MIT, whom Isaac Asimov considered one of the smartest people he ever met) assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it.”

    http://imgs.xkcd.com/comics/ta... [xkcd.com]
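    As a toy illustration of the depth-gating idea in the comment above (the function name, thresholds, and numbers are all invented, not from any real sensor stack): a LIDAR depth reading can veto a camera classification whose measured distance is physically implausible for the claimed object.

```python
# Toy sensor-fusion sketch: gate a camera classifier's confidence
# with a LIDAR depth check, so a "school bus" three inches from the
# lens (actually a bee on the camera) gets rejected.

def fuse(camera_label, camera_conf, depth_m, min_plausible_m=2.0):
    """Return (label, confidence); distrust detections whose measured
    depth is impossibly small for a bus-sized object."""
    if depth_m < min_plausible_m:
        # Too close to be the claimed object: heavily discount.
        return ("unknown", camera_conf * 0.1)
    return (camera_label, camera_conf)

print(fuse("school bus", 0.9999, 0.08))  # a bee on the lens
print(fuse("school bus", 0.9999, 35.0))  # plausible distance
```

    A real fusion pipeline would of course model this probabilistically rather than with a hard threshold, but the principle is the same: an independent physical measurement constrains what the appearance-based classifier is allowed to conclude.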

    • I've done some image processing work. It seems to me that you can take the output of this neural network and correlate it with some other image processing routines, like feature detection, feature metrology, etc.

      If you look at the convolutions learned in the bottom layers, you typically end up with a bunch that look awfully like Gabor filters. In other words, it's learning a feature detection stage and already doing that.

      Some sort of depth sensing certainly does help.
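      For readers unfamiliar with the term: a Gabor filter is just a sinusoid windowed by a Gaussian envelope, an oriented edge/texture detector. A minimal sketch of building one (all parameter values here are illustrative defaults, not taken from any particular network):

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, wavelength=4.0):
    """Oriented Gabor filter: a cosine carrier under a Gaussian
    envelope, similar in shape to the first-layer convolutions
    deep networks tend to learn."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame by theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

k = gabor_kernel()
print(k.shape)  # (9, 9)
```

      Sweeping theta gives a bank of orientation-selective filters, which is roughly what the learned bottom-layer convolutions end up approximating.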

  • Machines got a lot of imagination, don't they? Next thing you know you'll be looking at the clouds with your robot buddy and it'll say "99.99% chance of that cloud looking like a puppy. BEEP". Oooorrrrr maybe a school bus, but you get what I mean.

    Oh right, I forgot this is Slashdot. MACHINES WILL DOMINATE US HELP. Peasants. Not like this display of reality will stop the rampant paranoia of people who work with computers and machines all day long... ...
    ironic.

  • Cool. Image recognition is far further along than I thought. It makes the same types of mistakes as humans, although in a different way.
    We humans see faces in everything: smoke, clouds and static, for example. This just means that such mistakes are inherent in the attempt at recognition.

  • Since the core of this story is fooling a DNN rather than image recognition, I wonder whether the same exercise could be repeated with DNNs tasked to recognize human behavior and build digital profiles of humans based on, for example, browsing habits, keywords in online communication, movement in space, etc. What would a white-noise terrorist look like? What would be its indirectly encoded best representation? We tend to be scared of digital profiling because we believe that our digital representation actually l

  • Far from showing weakness, this study seems to demonstrate a creatively brilliant algorithm. These are very, very strong results. I am deeply impressed.

    Text recognition in white noise can be fixed with virtual saccades.

    Aside from adding "human" sensibilities (do we want it to recognize objects only in real, photo-realistic settings, and not drawings / art?), I would say it's good to go.
