Researchers Teach Self-Driving Cars To 'See' Better At Night (sciencemag.org)
Researchers may have developed a way for self-driving cars to continue navigating at night (or on rainy days) by using an AI analysis that identifies traffic signs by their relative reflectivity. Slashdot reader sciencehabit shares an article from Science:
Their approach requires autonomous cars to continuously capture images of their surroundings. Each image is evaluated by a machine learning algorithm...looking for a section of the image that is likely to contain a sign. It's able to simultaneously evaluate multiple sections of the image -- a departure from previous systems that considered parts of an image one by one. At this stage, it's possible it will also detect irrelevant signs placed along roads. The section of the image flagged as a possible sign then passes through what's known as a convolutional neural network [which] picks up on specific features like shapes, symbols, and numbers in the image to decide which type of sign it most likely depicts... In the real world, this should mean that an autonomous car can drive down the street and accurately pinpoint and decipher every single sign it passes.
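To make the two-stage pipeline described above concrete, here is a minimal Python sketch: stage one scores image blocks in parallel for "sign-likeness," and stage two classifies whatever gets flagged. Both stages are stand-ins for illustration only; the block grid, brightness threshold, and class labels are assumptions, not taken from the paper.

```python
# Illustrative two-stage sketch: parallel block scoring, then CNN-style
# classification of flagged regions. The scorer and classifier below are
# stand-ins, not the paper's actual models.
import numpy as np

def candidate_blocks(image, grid=(4, 4)):
    """Split the frame into blocks so each can be scored independently
    (the parallel-window idea, vs. a sequential sliding window)."""
    h, w = image.shape[:2]
    bh, bw = h // grid[0], w // grid[1]
    for r in range(grid[0]):
        for c in range(grid[1]):
            yield (r * bh, c * bw), image[r * bh:(r + 1) * bh,
                                          c * bw:(c + 1) * bw]

def looks_like_sign(block):
    """Stage 1 stand-in: flag blocks bright enough to be a retroreflective
    sign under headlights; a real system would use a trained detector."""
    return block.mean() > 200

def classify_sign(block):
    """Stage 2 stand-in for the sign-classifying CNN (hypothetical labels)."""
    classes = ["stop", "yield", "speed_limit", "other"]
    return classes[int(block.std()) % len(classes)]

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # fake night frame
for origin, block in candidate_blocks(frame):
    if looks_like_sign(block):
        print("possible sign at", origin, "->", classify_sign(block))
```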
Re: (Score:2)
So when my self-driving car is suddenly unable to drive at night, do I take it to a mechanic or to a psychologist?
Neither. You wipe the mud off the camera lens.
5 years? (Score:5, Insightful)
Re: (Score:3)
5 years huh?
Relevant xkcd.
https://xkcd.com/678/ [xkcd.com]
Oblig. (Score:3)
Ewe must be gnu hear. Links to XKCD may be relevant, they may be insightful, even pithy; but in all cases they must be obligatory. (The rules do allow abbreviation, but the inclusion of the word is...well, you know.)
Re: (Score:2)
I thought Beret Guy's company already did this [explainxkcd.com]
Re: 5 years? (Score:1)
Eh, no. Usually the Slashdot crowd is far more skeptical, but we've been getting an increasing number of Reddit refugees the past couple years. Oh, and I guess we do get engineers for said companies and they do love drinking the koolaid!
Re: (Score:3)
Somehow I'm not so concerned about this one: if you can follow the rules of the road in the daytime when the sun is shining, the same rules apply at night in the rain. The rest is "just" a sensor-sensitivity/noise-cancellation problem that can be worked on in parallel with everything else. You can probably do a lot with combined LIDAR/optical systems: have LIDAR identify candidate surfaces, then use the optics to actually identify the sign. And you're looking for a predetermined number of surfaces of particul
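A rough sketch of that LIDAR-then-optical idea. The reflectivity threshold, the 1 m voxel clustering, and the project_to_image() stub are all illustrative assumptions, not a real sensor-fusion stack:

```python
# Hedged sketch: use per-return reflectivity to nominate surfaces from the
# point cloud, then crop the camera frame for an optical classifier.
import numpy as np

REFLECTIVITY_THRESHOLD = 0.8      # assumed: signs are strongly retroreflective

def lidar_candidates(points):
    """points: (N, 4) array of x, y, z, reflectivity. Return centers of
    high-reflectivity clusters (crude voxel bucketing for the sketch)."""
    bright = points[points[:, 3] > REFLECTIVITY_THRESHOLD]
    if len(bright) == 0:
        return []
    buckets = {}
    for key, p in zip(map(tuple, np.round(bright[:, :3]).astype(int)), bright):
        buckets.setdefault(key, []).append(p[:3])
    return [np.mean(v, axis=0) for v in buckets.values()]

def project_to_image(center, image_shape):
    """Stub pinhole projection: map a 3D candidate to a 64x64 pixel box."""
    h, w = image_shape[:2]
    u = int(w / 2 + 50 * center[1] / center[0])
    v = int(h / 2 - 50 * center[2] / center[0])
    return max(u - 32, 0), max(v - 32, 0), min(u + 32, w), min(v + 32, h)

points = np.random.rand(10_000, 4)
points[:, 0] += 5                                  # synthetic cloud ahead of the car
image = np.zeros((480, 640, 3), dtype=np.uint8)    # synthetic camera frame
crops = []
for center in lidar_candidates(points):
    x0, y0, x1, y1 = project_to_image(center, image.shape)
    crops.append(image[y0:y1, x0:x1])              # each crop goes to the optical classifier
print(len(crops), "candidate surfaces to classify optically")
```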
Re: (Score:1)
I worked on autonomous Humvees about 15 years ago, and that same technology is what's driving the cars now.
What's changed: the sensors and computers are faster and better, the algorithms and NNs are better and deeper, and there is so much more data now.
We already have autonomous cars, and people are driving them; what they can detect, even at high speed, is quite impressive.
They can definitely read signs, and that's getting to work under all light and weather conditions.
The higher-level decision
Uh what? (Score:3)
Wait, uh, this is cutting-edge AI? What autonomous system can't evaluate multiple sections of an image?
"convolutional neural network [which] picks up on specific features like shapes, symbols, and numbers in the image to decide which type of sign it most likely depicts." Uh, what? You mean they have an algorithm that can decide on types of street signs based on the image? Wow. Truly cutting edge. Autonomous cars are truly right around the corner.
Re: (Score:2)
"Wait, uh, this is cutting edge AI? What autonomous system can't evaluate multiple sections of an image"
I'm assuming they've finally added parallel processing to the mix. Eventually the output of each stream will be sequentially handled by the mainline driver.
Re: (Score:3)
You are correct; it's worded terribly.
The paper ( http://journals.plos.org/ploso... [plos.org] ) specifically mentions that their method allows for GPGPU processing due to parallel block evaluation instead of 'sliding window' evaluation of the image:
"Unlike the sliding-window method, which scans an image in a sequential manner, parallel window-searching divides the input image into several blocks and simultaneously performs classification on one block using each GPU core."
This is a step forward in commodification of s
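As a loose illustration of the quoted divide-and-classify structure, here's a sketch that uses a CPU process pool standing in for GPU cores. The paper runs one block per GPU core; this mimics the shape of that, not their implementation:

```python
# Divide the frame into blocks and classify them all concurrently,
# rather than scanning with a sequential sliding window.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def split_blocks(image, rows=2, cols=2):
    """Divide the input image into rows x cols blocks."""
    h, w = image.shape
    return [image[r * h // rows:(r + 1) * h // rows,
                  c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

def classify_block(block):
    """Stand-in per-block classifier: does this block contain a bright blob
    a real detector would hand to the sign-recognition CNN?"""
    return bool((block > 200).mean() > 0.01)

if __name__ == "__main__":
    frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
    with ProcessPoolExecutor() as pool:            # all blocks evaluated at once
        hits = list(pool.map(classify_block, split_blocks(frame)))
    print(hits)
```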
Re: (Score:2)
Absolutely. I imagine this would be applicable to other areas of robotic vision. I notice they also handle varying illumination and sign occlusion better. I wonder how it would scale: could you more or less double the system and split the view into eighths instead of quarters? Where's the point of diminishing returns? (A toy probe of that question is sketched below.)
Looking at nature, a fly has thousands of facets in its eyes. They're good for motion detection, not so much for identification, though. Spiders have 8 eyes, again not so
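One crude way to poke at that diminishing-returns question: time a stand-in per-block pass over finer and finer grids. This toy runs sequentially, so it mostly shows fixed per-block overhead piling up as blocks shrink; real scaling would depend on GPU kernel-launch and memory-transfer costs:

```python
# Entirely a toy benchmark; the per-block work is a stand-in classifier.
import time
import numpy as np

def per_grid_time(frame, rows, cols):
    h, w = frame.shape
    t0 = time.perf_counter()
    for r in range(rows):
        for c in range(cols):
            block = frame[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            (block > 200).mean()                   # stand-in per-block work
    return time.perf_counter() - t0

frame = (np.random.rand(960, 1280) * 255).astype(np.uint8)
for rows, cols in [(2, 2), (2, 4), (4, 4), (8, 8), (16, 16)]:
    print(f"{rows}x{cols}: {per_grid_time(frame, rows, cols):.4f}s")
```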
Up the headlines from kindergarten level, please (Score:2)
Re: (Score:2)
The headline comes straight from the source article. I'd rather see that than some lame attempt to translate it, and end up with something worse.
Re: (Score:2)
If it's still going to have a link to the baby talk site, it's pointless to translate the title.
Signs? Who cares about signs? (Score:2)
At night out here in the boonies, I want my car to see Bambi, the deer that's trying to kill me.