Research Highlights How AI Sees and How It Knows What It's Looking At
anguyen8 writes: Deep neural networks (DNNs) trained with Deep Learning have recently produced mind-blowing results in a variety of pattern-recognition tasks, most notably speech recognition, language translation, and recognizing objects in images, where they now perform at near-human levels. But do they see the same way we do? Nope. Researchers recently found that it is easy to produce images that are completely unrecognizable to humans, but that DNNs classify with near-certainty as everyday objects. For example, DNNs look at TV static and declare with 99.99% confidence that it is a school bus. An evolutionary algorithm produced the synthetic images by generating pictures and selecting for those that a DNN believed to be an object (i.e. "survival of the school-bus-iest"). The resulting computer-generated images look like modern, abstract art. The pictures also help reveal what DNNs learn to care about when recognizing objects (e.g. a school bus is alternating yellow and black lines, but does not need to have a windshield or wheels), shedding light on the inner workings of these DNN black boxes.
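Roughly, the selection loop described in the summary can be sketched like this (a toy illustration rather than the authors' code: classify() is a hypothetical stand-in for the trained DNN, and the image size and mutation settings are made up):

    import numpy as np

    def evolve_fooling_image(classify, target_class, steps=10000, pop_size=20):
        # classify(image) is assumed to return a vector of class probabilities.
        population = [np.random.rand(64, 64) for _ in range(pop_size)]
        for _ in range(steps):
            # Score each candidate by the DNN's confidence in the target class.
            scores = [classify(img)[target_class] for img in population]
            # Keep the "school-bus-iest" images, then mutate them to form the next generation.
            ranked = [img for _, img in sorted(zip(scores, population), key=lambda p: p[0], reverse=True)]
            parents = ranked[: pop_size // 2]
            children = [np.clip(p + np.random.normal(0, 0.05, p.shape), 0, 1) for p in parents]
            population = parents + children
        return max(population, key=lambda img: classify(img)[target_class])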
Automatic cars are just around the corner... (Score:5, Funny)
Unfortunately they're wrapped around a tree just around the corner. They mistook a bee 3 inches from the camera for a school bus.
Re:Automatic cars are just around the corner... (Score:5, Interesting)
Every time I see this topic appear on Slashdot (last time [slashdot.org]) I think:
You're putting a neural network (NN) through a classification process where it is fed this image as a "fixed input", where the input's constituent elements are constant, and you ask it to classify it correctly the same way a human would. The problem with this comparison is that the human eye does not see a "constant" input stream; the eye captures a stream of images, each slightly skewed as your head moves and the image changes slightly. Based on this stream of slightly different images, the human identifies an object.
However, in this research, time and again a "team" shows a "fault" in an NN by feeding a single, nonvarying image to the NN and calling it a "deep flaw in the image processing network", and I just get the feeling that they're doing it wrong.
To your topic though: you'd better hope your car is not just taking one single still image and performing actions based on that. You'd better hope your car is taking a stream of images and making decisions, which would be a completely different class of problem than this.
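For what it's worth, the stream-of-images idea is easy to sketch: average the network's output over many slightly different frames and refuse to act unless the averaged confidence clears a threshold (classify() and the threshold here are hypothetical, not anything from the article):

    import numpy as np

    def classify_stream(classify, frames, threshold=0.9):
        # classify(frame) is assumed to return a probability vector over classes.
        # Averaging over many slightly different frames smooths out single-frame flukes.
        probs = np.mean([classify(f) for f in frames], axis=0)
        best = int(np.argmax(probs))
        return best if probs[best] >= threshold else None  # None = not confident enough to act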
Re:Automatic cars are just around the corner... (Score:4, Informative)
You better hope your car is not just taking one single still image and performing actions based on that.
In fact, most of them don't use computer vision much at all. Google's self-driving car, for example, uses a rotating IR laser to directly measure its surroundings.
Re: (Score:2)
Great point.
It's reassuring that the decision-makers in that process consider alternative ideas; basing the goal on 'human-like' sight would leave a lot of room for error (given limitations of even human perception and classification capabilities!)
Re: (Score:2)
It's reassuring that the decision-makers in that process consider alternative ideas; basing the goal on 'human-like' sight would leave a lot of room for error
It's true, but using 3D laser mapping feels a little bit like cheating - after all, human drivers don't need nearly that much information. A successful computer vision approach would be a lot more impressive, even if it was too dangerous for the highway.
Re: (Score:2)
That is because computer vision is not yet good enough.
However, Google's rotating laser costs $70,000. That's just the laser; you still have to pay for the car under it.
While large-scale production would lower that significantly, it might be better to start with a $100 camera and a $1000 neural net computer.
Re: (Score:2)
This synopsis (Score:2)
Re: (Score:1)
makes it seem like the computers are morons. Anything that is black and yellow is a school bus...mmmmm nope.
black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow black and yellow bl
Re: (Score:3)
Re: (Score:2)
Re:This synopsis (Score:4, Interesting)
There's also a tremendous gap between what we consider complex and what we consider simple. For example, the brain is complex. However, individual elements of our brains are incredibly simple. Basic chemical reactions. Neurons firing or not. It's the sheer number of simultaneous simple pieces working together that makes it complex.
Lots of simple AI algorithms all working together make the complexity. This isn't climbing a tree. It's one person poking at chemicals until they get high-energy combustible fuels, and another playing with paper to make paper airplanes better, and a third refining ceramics and metals to make them lighter and stronger and to handle different characteristics, and then they all get put together and you have a person on the moon.
The illusion is that you think we need to make a leap to get from here to there. There's never a leap. It's lots of small simple steps that get you there.
Re: (Score:2)
Re: (Score:2)
Plenty of people have ideas for how to make a working general artificial intelligence. Some of them might even be correct. No one has the funding necessary for what some would call the "easy" versions, because they require a lot of research into things that aren't computers. If we study neurons to the point where we can simulate them reasonably accurately, we can probably simulate a brain, and have it work more-or-less correctly. However, we're not even that good at figuring out what a single neuron is going to do.
Re: (Score:2)
Re: (Score:2)
Then how come so few know the name Leibniz?
Re: (Score:2)
"If I have seen further it is by standing on the shoulders of giants."
Re: (Score:2)
The illusion is that you think we need to make a leap to get from here to there. There's never a leap. It's lots of small simple steps that get you there.
That is true if the ultimate goal is not impossible.
No number of small simple steps is going to lead to time travel.
The only way to prove that true AI (General AI or whatever you want to call it) is possible is to make something with true AI.
Re: (Score:2)
If the goal is impossible, then no leap will get there either.
Re: (Score:2)
It's like expecting Google search to suddenly gain sentience
Meet Watson [youtube.com]: it beat the best humans in the open-ended problem domain of "game show trivia" using natural language processing. When it won the Jeopardy championship it had 20 tons of air-conditioning and a room full of servers. Today it runs on a "pizza box" server and you can try it out yourself [ibm.com]. After Jeopardy it went back to working with various medical institutes, where it was trained and fed a steady diet of medical journals; it's now well past the point where it became knowledgeable enough to pass
Re: (Score:2)
I thought the consensus here was that AI is "just an engineering problem" (like terraforming Mars) and will probably be here by next Tuesday.
What is odd is that people here seem to think that computer programming will be exempt from the effects of real AI. I'd think it would be one of the first things to go.
Re: (Score:2)
makes it seem like the computers are morons.
Makes it seem like the people choosing the training sets are morons.
Re: (Score:2)
The people choosing the training sets are not morons at all. This "research" is almost exactly analogous to finding that this year's SAT can be passed by answering with a fixed pattern of A, C, D, A, B, and so forth -- and then declaring that this means standardized testing is easy to fake out. They are exploiting the particular structure of a particular instance of a DNN. It is not surprising that they can find odd images that make a DNN answer "yes" when the only question it knows how to answer is "is this
Re: (Score:2)
Re: (Score:2)
Reverse OCR (Score:5, Interesting)
Reminds me of the reverse OCR tumblr. It generates patterns of squiggles a human could never read but the OCR recognizes as a word.
http://reverseocr.tumblr.com/ [tumblr.com]
seems a lot like human vision to me (Score:3)
Also... (Score:3)
Re: (Score:2)
It's not just returning a matched image, though. It's also returning a confidence level, and in the cases they've discovered, it's returning 100% confidence. That's clearly wrong.
Re: (Score:2)
What, you've never been SURE you were right, and then later found out you were wrong?
Nothing wrong with being wrong with confidence. Sounds like the majority of humanity the majority of the time.
Now, does this mean that the AI is useful? Well, it's useful for finding out why it's 100% certain, but wrong. In the field, not so much.
Re: (Score:2)
Nothing wrong with being wrong with confidence. Sounds like the majority of humanity the majority of the time.
Right, and it has created a great deal of misery throughout human history. Just because it is prevalent does not mean it is not a problem.
More specifically, the overconfidence displayed by the networks here should lead a rational observer to a corresponding skepticism toward the notion that they have cracked the image recognition problem.
Re: (Score:2)
Nothing wrong with being wrong with confidence. Sounds like the majority of humanity the majority of the time.
Oh, it definitely sounds like the majority of humanity the majority of the time. I just don't think it's one of our more admirable traits.
In our case, it's necessary, because we evolved with mediocre brains. I'd like to see our successors do better. They aren't yet, which is what this article is pointing out. This promising system isn't ready yet. It's just not wrong for the reasons that the GGP post thought.
Training classifiers requires "rejectable" samples (Score:2)
The DNN examples were apparently trained to discriminate between members of a labeled set. This only works when you have already cleaned up the input stream (a priori) and can guarantee that the image must be an example of one of the classes.
These classifiers were not trained on samples from outside the target set. This causes a forced choice: given this random-dot image, which of the classes has the highest confidence? Iterate until confidence is sufficiently high, and you have a forgery with the same features.
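As a minimal sketch of what "rejectable" samples could look like (plain NumPy, with the shapes and counts invented for illustration; this illustrates the suggestion above, not what the networks in the paper actually did):

    import numpy as np

    def add_reject_class(train_images, train_labels, num_classes, n_noise=5000):
        # Give the classifier samples it is allowed to reject: random images labeled
        # with an extra class index (num_classes acts as "none of the above").
        noise = np.random.rand(n_noise, *train_images.shape[1:])
        noise_labels = np.full(n_noise, num_classes)
        images = np.concatenate([train_images, noise])
        labels = np.concatenate([train_labels, noise_labels])
        return images, labels  # then train the DNN with num_classes + 1 outputs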
Re: (Score:2)
The DNN examples were apparently trained to discriminate between members of a labeled set. This only works when you have already cleaned up the input stream (a priori) and can guarantee that the image must be an example of one of the classes.
These classifiers were not trained on samples from outside the target set.
This is not some network hastily trained by people who are ignorant of a very basic and long-known problem: "Clune used one of the best DNNs, called AlexNet, created by researchers at the University of Toronto, Canada, in 2012 – its performance is so impressive that Google hired them last year." From a paper by the developers of AlexNet: "To reduce overfitting in the globally connected layers we employed a new regularization method that proved to be very effective."
It does not seem plausible that this
Re: (Score:2)
If the network was trained to always return a "best match" then it's working correctly. To return "no image", it would need to be trained to be able to return that, just like humans are given feedback when there is no image.
It seems highly unlikely that such an elementary mistake was made: "Clune used one of the best DNNs, called AlexNet, created by researchers at the University of Toronto, Canada, in 2012 – its performance is so impressive that Google hired them last year."
The fact that the net returns a confidence level implies that it does have a way to return a result of 'not recognized'.
Re: (Score:3)
It might be similar but it's not the same mechanism. When you see an object in static, your brain knows that it's just making a guess so the guess is assigned low confidence. But here they showed that you can actually design a picture that looks random but is assigned very high confidence of being an object.
This type of phenomenon is very well known. It's not news; people have known about this sort of stuff in artificial neural nets since the '80s. I guess they just sort of assumed that deep belief nets would
Re: (Score:2)
These are Deep Neural Networks, not Deep Belief Networks... I think DBNs might be able to calculate a novelty metric that DNNs aren't capable of calculating... This might enable them to say that an image is unlike anything it has been trained on, and therefore lower the confidence of the rating, and so, may not be vulnerable to this problem.
Of course, no one has managed to train a DBN to be as successful at the ImageNet problems as DNNs, so there's still some way to go before this hypothesis can be tested.
Re: (Score:2)
The distinction between deep belief networks (based on graphical models) and deep neural networks (based on perceptrons and backprop) is an imprecise one. You could argue that DNNs are just a subtype of DBNs, and yes, the only 'successful' DBNs so far have been DNNs. When people speak of DBNs they almost always mean DNNs.
Re: (Score:2)
DNNs generally consist of (ignoring convolution and max-pooling) rectified linear units trained with backpropagation...
DBNs are basically stacked RBM autoencoders using binary stochastic units.
So, while people may basically mean the same thing, I don't think they are... DNNs create hyperplanes that carve up the input space, and while DBNs actually do the same thing, you can calculate an 'entropy' or 'energy level' between layers... with familiar images being low energy and novel images having high energy.
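For concreteness, the energy being referred to could be the standard free energy of a binary RBM; a minimal sketch assuming NumPy and already-trained RBM parameters (the novelty-flag interpretation is the poster's hypothesis, not an established result):

    import numpy as np

    def rbm_free_energy(v, W, vbias, hbias):
        # Free energy of a binary RBM: F(v) = -v.a - sum_j log(1 + exp(b_j + (v W)_j)).
        # The idea: familiar inputs tend to land in low-energy regions, so an unusually
        # high free energy could flag an image as unlike anything seen in training.
        # Shapes: v (n_visible,), W (n_visible, n_hidden), vbias (n_visible,), hbias (n_hidden,).
        wx_b = v @ W + hbias
        return -(v @ vbias) - np.sum(np.logaddexp(0, wx_b))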
Re: (Score:2)
I think it was fairly clear what was going on: the neural networks latch on to conditions that are necessary but not sufficient, because they found common characteristics of real images but never got any negative feedback. Take the peacock photo: the colors and pattern are similar, but clearly not in the shape of a bird; if it's never seen any non-bird, peacock-colored items, how is the algorithm supposed to know? At any rate, it seems like the neural network is focusing excessively on one thing, maybe it
Re: (Score:2)
I think it was fairly clear what was going on: the neural networks latch on to conditions that are necessary but not sufficient, because they found common characteristics of real images but never got any negative feedback.
You seem to be suggesting that it is 'simply' a case of overfitting, but overfitting is a problem that has been recognized for some time. I don't doubt that the developers of these networks have thought long and hard about the issue, so this study suggests that it is a hard and as-yet unsolved problem in this domain.
One thing that humans use, but which these systems do not seem to have developed, is a fairly consistent theory of reality. Combine this with analytical reasoning, and we can take a difficult i
Re: (Score:3)
When people don't know exactly what they are looking at, the brain just puts in its best guess. People certainly see faces and other familiar objects in TV static. They see Bigfoot in a collection of shadows or a strange angle on a bear.
Yes, I think it's very interesting when you look at Figure 4 here [evolvingai.org]. They almost look like they could be an artist's interpretation of the things they're supposed to be, or a similarity that a person might pick up on subconsciously. The ones that look like static may just be the AI "being stupid", but I think the comparison to human optical illusions is an interesting one. We see faces because we have a bias to see them. Faces are very important to participating in social activities, since they give many
Re: (Score:2)
The computer isn't trying to find food or avoid predators, so what is it "trying to do" when it "sees"
Fortunately we know this because we (in the general sense) designed the algorithms.
It's trying very specifically to get a good score on the MNIST or ImageNet datasets. Anything far away from the training data produces funny results. I'm not being glib. This results in the following:
One generally assumes that the data lies on some low-dimensional manifold of the 256x256-dimensional space (for 256x256 greyscale images).
Re: (Score:2)
I think I understand... vaguely. To simplify, you're saying it's been trained on a specific dataset, and it chooses whichever image in the dataset the input is most like. It doesn't really have the ability to choose "unknown" and must choose an image from the dataset that it's most like. Its "confidence" in the choice is not really based on similarity to the image it has chosen, but instead based on dissimilarity to any of the other images. Therefore, when you give it garbage, it chooses the image that
Re: (Score:2)
I think I understand... vaguely. To simplify, you're saying it's been trained on a specific dataset, and it chooses whichever image in the dataset the input is most like.
A bit.
It's easier to imagine in 2D. Imagine you have a bunch of height/weight measurements and a label telling you whether a person is overweight. Plot them on a graph, and you will see that in one corner people are generally overweight and in another corner, they are not.
If you have a new pair of measurements come along with no label, you can guess its label from which region of the graph it falls into.
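A toy version of that 2D picture, using scikit-learn and made-up numbers, shows why the confidence reported far from the data is meaningless: a linear boundary separates the two groups, and confidence simply grows with distance from that boundary, even for absurd inputs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    heights = rng.normal(170, 10, 200)
    weights = rng.normal(70, 15, 200)
    X = np.column_stack([heights, weights])
    y = (weights - 0.9 * (heights - 100) > 0).astype(int)  # crude made-up "overweight" label

    clf = LogisticRegression().fit(X, y)
    print(clf.predict_proba([[175, 80]]))    # near the data: moderate confidence
    print(clf.predict_proba([[500, 9000]]))  # absurd input: ~100% confidence anyway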
Re: (Score:2)
I even feel like I did sort of see a peacock in the one random image labeled peacock.
I know what you mean, but did you see a peacock before you read the label?
So, useless then? (Score:2)
Unless it's static from an image of a school bus, these things sound utterly useless.
According to TFS, Charlie Brown is a schoolbus.
It's OK, if AI is this stupid, we need not worry about it taking over any time soon.
Re: (Score:2)
It's OK, if AI is this stupid, we need not worry about it taking over any time soon.
Or that when it takes over it will make catastrophically bad decisions for us.
Re: (Score:2)
We've kept the feds gridlocked 3 out of 4 years for the last few decades. Best we can do.
Re: (Score:3)
> It's OK, if AI is this stupid, we need not worry about it taking over any time soon.
If only that worked for congress.
Re: (Score:2)
Re: (Score:2)
You're off by about a decade. I was playing chess on machines like Boris and Chess Challenger back in those days. And while they were easy for a serious chess player to beat, they'd typically beat a novice. This is from http://www.computerhistory.org... [computerhistory.org]
Until the mid-1970s, playing computer chess was the privilege of a few people with access to expensive computers at work or school. The availability of home computers, however, allowed anyone to play chess against a machine.
The first microprocessor-based
Re: (Score:2)
In the early '80s people were laughing about computers trying to play chess.
Were they? I'm not sure they were laughing about it. By the early 90s you could buy rather slick chess computers which had a board with sensors under each square (pressure in the cheap ones, magnetic in the fancy ones), and LEDs up each side to indicate row/column.
You could play them at chess and they'd tell you their moves by flashing the row/column lights. Those weren't just programs; by that stage they were full-blown integrated c
Speech recognition still sucks (Score:2)
My composter helped me wreck a nice beach.
This is old news (Score:2)
Here is an article from Vice of all places about this research, from June http://motherboard.vice.com/re... [vice.com]
Research paper here: http://cs.nyu.edu/~zaremba/doc... [nyu.edu]
Also, a funny video demonstrating the rudimentary nature of Nintendo DS Brain Training pattern recognition: https://www.youtube.com/watch?... [youtube.com]
What's a school bus? (Score:2)
Then this [nocookie.net] is also a school bus.
B-b-b-but Slashdot said...! (Score:2)
Re: (Score:2)
I have been assured many, many times by the experts of Slashdot that computers are nowhere near achieving artificial intelligence.
er... and?
Fixing title for you (Score:1)
Research Highlights How a Deep Neural Network Trained With Deep Learning Sees and How It Knows What It's Looking At
There, fixed that for you.
Why is using the term "AI" wrong in this headline?
#001: Because industry experts don't agree on what AI is
#010: Because most of the definitions of AI are much broader than what the article is talking about
#011: Because at least one definition of AI says something like "if it exists today, it's not AI" - including "beyond the capability of current computers" or something similar as a defining condition of the term "AI"
Decent backpack actually (Score:2)
I know how they created the images, so I know it's not really an image of a backpack so much as static that has been messed with by someone in Photoshop. However, if you showed me that, backpack would be high on my list of guesses.
That one really does look to me like someone washed out an image of a backpack with static.
Clickbait (Score:3)
So it needs to learn that these exact images are tricks being played on it, so it can safely ignore them. This is exactly what machine learning is. What's the story?
Re: (Score:2)
They tried that, but it didn't make a huge difference (the resulting network was still easily 'fooled' with similar images).
The big thing to realize here is that the algorithm that generates the fooling images specifically creates highly regular images ("images [that] look like modern, abstract art"). The repeated patterns are very distracting to the human eye, whereas the DNN pretty much ignores them. See figure 10 in the paper (http://arxiv.org/pdf/1412.1897v1.pdf). It is necessary to take into account t
Re: (Score:2)
The researchers also basically cheated by "training" their distractor images on a fixed neural network. People have known for decades that a fixed/known neural network is easy to fool; what varies is exactly how you can fool it. The only novel finding here is their method for finding images that fool DNNs in practice -- but the chances are overwhelmingly high that a different DNN, trained on the same training set, would not make the same mistake (and perhaps not make any mistake, by assigning a low probability to every class).
Re: (Score:2)
The researchers also basically cheated by "training" their distractor images on a fixed neural network.
That's hardly fair: they were trying to find images that fooled the network. What better way to do that than feeding images in until you find a good one (with derivatives).
The only novel finding here is their method for finding images that fool DNNs in practice -- but the chances are overwhelmingly high that a different DNN, trained on the same training set, would not make the same mistake (and perhaps not
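The "with derivatives" approach amounts to gradient ascent on the input pixels against a fixed, pretrained network. A rough sketch using PyTorch and torchvision's AlexNet (this is the generic trick, not the authors' exact procedure; the ImageNet class index and the lack of input normalization are simplifications for illustration):

    import torch
    import torchvision.models as models

    model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
    target_class = 779  # assumed to be "school bus" in the ImageNet-1k labels
    img = torch.rand(1, 3, 224, 224, requires_grad=True)

    optimizer = torch.optim.Adam([img], lr=0.05)
    for _ in range(200):
        optimizer.zero_grad()
        score = model(img)[0, target_class]
        (-score).backward()      # ascend the target class score
        optimizer.step()
        img.data.clamp_(0, 1)    # keep pixels in a valid range

    print(torch.softmax(model(img), dim=1)[0, target_class].item())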
Re: (Score:2)
Why was my characterization of their approach "hardly fair"? Someone -- either the researchers or their press people -- decided to hype it as finding a general failing in DNNs (or "AI" as a whole). The failure mode is known, and their particular failure modes are tailored to one particular network (rather than even just one training set). I think the "hardly fair" part is the original hyperbole, and my response is perfectly appropriate to that. The research is not at all what it is sold as.
Don't multi-c
Re: (Score:2)
Why was my characterization of their approach "hardly fair"?
You called it cheating.
Someone -- either the researchers or their press people -- decided to hype it as finding a general failing in DNNs (or "AI" as a whole).
It pretty much is. If you input some data far away from the training set you'll wind up at a completely arbitrary point relative to the decision boundaries.
The research is not at all what it is sold as.
The research shows very nicely that the much-hyped deep learning systems are no different in many ways
Re: (Score:2)
I called it cheating because they violated both one of the prime rules of AI: train on a data set that is more or less representative of the data set you will test with, and one of the prime rules of statistics: do not apply a priori statistical analysis when you iterate with feedback based on the thing you estimated. Their test images are intentionally much different from the training images, which is one of the first things an undergraduate course on AI will talk about. They also use what are essentiall
Re: (Score:2)
I called it cheating because they violated both one of the prime rules of AI: train on a data set that is more or less representative of the data set you will test with, and one of the prime rules of statistics
But they're not trying to do that. They're trying to debunk the claims of "near human" performance, which they do very nicely by showing that the algorithms make vast numbers of mistakes when the input data is not very, very close to the original data.
They also present a good way of finding amusing failures.
Re: (Score:2)
So it needs to learn that these exact images are tricks being played on it, so it can safely ignore them.
No. Learning that the "exact images" presented here are tricks would not be a solution to the problem revealed by this study. The goal in any form of machine learning is software that can effectively extrapolate beyond the training set.
What's the story?
Once you understand the problem, you will see what the story is.
Not smart or stupid (Score:2)
These are computer programs, not artificial intelligences as some have come to think of them. They are simply some charges flipping around in some chips. There is no seeing or recognizing in human terms. We apply all that consciousness crap.
In this case, the neural networks are randomly formed nets that match up a few pixels here and there then spit out a result. There is no seeing. Increase the complexity a thousand times over and there will still be no seeing, but there might, might, might be less sh
Re: (Score:2)
These are computer programs, not artificial intelligences as some have come to think of them. They are simply some charges flipping around in some chips.
And minds are just charges flipping around in some brain (at one level of abstraction, it is chemical, but chemistry is explained by the movement of charges).
As John Searle said, brains make minds.
Everything else is just speculating.
If you look at John Searle's arguments in detail, they ultimately end up as nothing more than "I can't believe that this is just physics." Searle's view is actually rather more speculative than the one he rejects, as it implies an unknown extension to atomic physics [wikipedia.org].
Nevertheless, none of what I write here should be construed as a claim that artificial
Re: (Score:2)
Brains are charges and chemistry, but minds are something else, though clearly connected. Brains make minds, we know that. There is no reason to think that anything else can make a mind. There are some philosophers who say that a thermostat has a mind, but that's pretty clearly bullshit. These neural nets are simply primitive and chaotic data filters. Yes, at some point an AI will be able to convince us that it is conscious, but there will be no reason to think it is anything but a parlor trick. Until
Actually a Great Step Forward (Score:2)
Computer learns to pick out salient features to identify images. Then we are shocked that when trained with no supervision the salient features aren’t what we would have chosen.
I see this as a great aha moment. Humans also have visual systems that can be tricked by optical illusions. The patterns presented, while seemingly incomprehensible to us, make sense to computers for the same reason our optical illusions do to us -- taking shortcuts in visual processing that would fire on patterns not often
Re: (Score:2)
The neural networks in question were absolutely trained with supervision. Unsupervised learning [wikipedia.org] is a quite different thing.
Re: (Score:2)
Computer learns to pick out salient features to identify images. Then we are shocked that when trained with no supervision the salient features aren’t what we would have chosen.
There is a huge difference: humans pick relevant features guided by a deep understanding of the world, while machine learning, unguided by any understanding, only does so by chance.
Now that we know what computers are picking out as salient features, we can modify the algorithms to add constraints on which salient features must or must not be present in an identified object, so that the result corresponds more closely to how humans would classify objects. Baseballs must have curvature, for instance, not just zigzag red lines on white.
Hand-coded fixes are not AI - that would be as if we had a higher-level intelligent agent in our heads to correct our mistakes (see the homunculus fallacy [logicallyfallacious.com]).
Can't you just call it broken? (Score:2)
I mean an AI that looks at static and says it's a school bus 99.99% of the time seems to be about as broken as could be. The researchers have to be the most optimistic folks in the world if they still think there's a pony in there. I'd be seriously thinking about scrapping the software (or, at least, looking for a bad coding error) and/or looking for an entirely new algorithm after achieving results that bad.
Re: (Score:2)
Re: (Score:2)
Just like we see faces in other images of static.
Distilled images (Score:1)
I see the gorilla. (Score:1)
In the pictures from the last link, I clearly see the gorilla and the backpack.
Those images remind me of what you get with some edge-detection filters commonly used to enhance image features.
Art. (Score:1)
Think of the global implications to surrealism!
Image processing; LIDAR; ADAS perspective (Score:3)
I've done some image processing work. It seems to me that you could take the output of this neural network and correlate it with some other image processing routines, like feature detection, feature metrology, etc.; a conditional-probability-based decision chain, etc.
I work at a start-up that makes 3D laser-radar (LIDAR) vision sensors for robotics and autonomous vehicle collision avoidance. The other day, I learned that such sensors allow robots to augment their camera vision systems to get a better understanding of their environment. It turns out that it's still an unsolved problem for a computer vision system to unambiguously recognize that it's looking at a bird or a cat; it can only give you probabilities. A LIDAR sensor instantly gives you a depth measurement out to several hundred meters that you can correlate your images to. The computer can combine the color information with the depth information to get a much better idea of what it's looking at. A collision-avoidance system has to be certain about what it's looking at, and cameras alone aren't good enough. I find it pretty exciting to be working on something that is useful for AI (artificial intelligence) research. One guy I work with got his Ph.D. using Microsoft's Kinect sensor, which gives robots depth perception in close-up environments.
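A toy sketch of that kind of camera/LIDAR fusion (every name and threshold below is invented for illustration, not taken from any real system): only act on a camera detection when the LIDAR returns a consistent depth for the same image region.

    import numpy as np

    def fuse_camera_lidar(class_probs, depth_map, region_mask, max_range=200.0):
        # class_probs: the camera DNN's probabilities for one detected region.
        # depth_map: per-pixel LIDAR depths registered to the camera image.
        # region_mask: boolean mask of the pixels belonging to the detection.
        depths = depth_map[region_mask]
        if depths.size == 0 or depths.std() > 2.0 or depths.mean() > max_range:
            return None  # no solid LIDAR support; don't trust the camera alone
        best = int(np.argmax(class_probs))
        return best, float(depths.mean())  # class plus distance for the collision-avoidance logic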
“In the 60s, Marvin Minsky (a well known AI researcher from MIT, whom Isaac Asimov considered one of the smartest people he ever met) assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it.”
http://imgs.xkcd.com/comics/ta... [xkcd.com]
Re: (Score:2)
I've done some image processing work. It seems to me that you could take the output of this neural network and correlate it with some other image processing routines, like feature detection, feature metrology, etc.;
If you look at the convolutions learned in the bottom layers, you typically end up with a bunch that look awfully like Gabor filters. In other words, it's learning a feature detection stage and already doing that.
Some sort of depth sensing certainly does help.
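A quick way to look at those Gabor-like filters yourself, assuming a recent torchvision and matplotlib (the layer index comes from the standard torchvision AlexNet definition):

    import torchvision.models as models
    import matplotlib.pyplot as plt

    model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    filters = model.features[0].weight.detach()               # shape: (64, 3, 11, 11)
    filters = (filters - filters.min()) / (filters.max() - filters.min())

    fig, axes = plt.subplots(8, 8, figsize=(8, 8))
    for ax, f in zip(axes.flat, filters):
        ax.imshow(f.permute(1, 2, 0).numpy())                 # show each filter as an RGB patch
        ax.axis("off")
    plt.show()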
Teehee (Score:2)
Machines got a lot of imagination, don't they? Next thing you know you'll be looking at the clouds with your robot buddy and it'll say "99.99% chance of that cloud looking like a puppy. BEEP". Oooorrrrr maybe a school bus, but you get what I mean.
Oh right, I forgot this is Slashdot. MACHINES WILL DOMINATE US, HELP. Peasants. Not like this display of reality will stop the rampant paranoia of people who work with computers and machines all day long...
ironic.
Re: (Score:2)
Ob. XKCD [xkcd.com]
Further along (Score:2)
Cool. Image recognition is far further along than I thought. It makes the same type of mistakes as humans, although in a different way.
We humans see faces in everything: smoke, clouds, and static, for example. This just means that such mistakes are inherent in the attempt at recognition.
DNNs and human behavior (Score:1)
Since the core of this story is fooling a DNN rather than image recognition, I wonder whether the same exercise could be repeated with DNNs tasked to recognize human behavior and build digital profiles of humans based on, for example, browsing habits, keywords in online communication, movement in space, etc. What does a white-noise terrorist look like? What would be its indirectly encoded best representation? We tend to be scared of digital profiling because we believe that our digital representation actually l
Rorschach tests for machines (Score:2)
Text recognition in white noise can be fixed with virtual saccades.
Aside from adding "human" sensibilities (do we want it to recognize objects only in real, photo-realistic settings, and not in drawings / art?), I would say it's good to go.
Re: (Score:2)
And then we also need a Red Forman translation tool to translate the message sent to the A.I.:
"This is static, dumbass!"
Re: (Score:1)
http://www.quickmeme.com/img/1... [quickmeme.com]
Re: (Score:2)
Yeah, who's to say the AI wasn't just seeing something we can't. Obviously aliens beamed a subliminal picture of a schoolbus into the TV static and the AI said Oh, a schoolbus!
When, Lord?! When the hell do I get to see the goddamn schoolbus?
Re: (Score:2)
I've seen a school bus in porn that neither drives kids to school nor is it owned or operated by one. Next definition.
Re: (Score:2)
Re: (Score:2)
Now all we've learned is that you define school bus in an idiosyncratic way, which already differs from the one I replied to (that definition stipulated that school buses also need to be owned or operated by a school). And if we ask 10 more people, we're going to find 10 more definitions, and I'm sure I can think of counter examples to all of them (for example, your definition would include parent-driven SUVs or any other kind of car frequently used to move children to and from school, which definitely does
Re: (Score:2)
Sounds like the no true schoolbus [wikipedia.org] fallacy.
Re: (Score:2)
Re: (Score:1)
Is this [nocookie.net] a school bus?
Re: (Score:2)
and never will be
How could you possibly know that?
An electronic switch knows nothing. A massive pile of electronic switches cannot know anything.
A neuron knows nothing, and yet a "massive pile" of neurons can know, understand, imagine, lie, cheat, steal, love, hate, and dream.
AI may not be here yet, but it's practically inevitable.
Re: (Score:2)
A few neurons don't 'know' anything either. Neither do a dozen neurons. Tens of billions of neurons, however... Anyway, you'll be the one looking like a complete tard in fifty years when AI is working well and is considered one of mankind's greatest achievements. (Fusion power, however, will still be twenty years away.)
Re: (Score:2)
One of you two will look like a tard. My money would be on you.