Transportation AI

Researchers Teach Self-Driving Cars To 'See' Better At Night (sciencemag.org) 32

Researchers may have developed a way for self-driving cars to continue navigating at night (or on rainy days) by performing an AI analysis to identify traffic signs by their relative reflectiveness. Slashdot reader sciencehabit shares an article from Science: Their approach requires autonomous cars to continuously capture images of their surroundings. Each image is evaluated by a machine learning algorithm... looking for a section of the image that is likely to contain a sign. It's able to simultaneously evaluate multiple sections of the image -- a departure from previous systems that considered parts of an image one by one. At this stage, it's possible it will also detect irrelevant signs placed along roads. The section of the image flagged as a possible sign then passes through what's known as a convolutional neural network [which] picks up on specific features like shapes, symbols, and numbers in the image to decide which type of sign it most likely depicts... In the real world, this should mean that an autonomous car can drive down the street and accurately pinpoint and decipher every single sign it passes.
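
A minimal sketch of the two-stage pipeline the summary describes, assuming PyTorch. Everything here (the block size, the stand-in "signness" score, the tiny CNN, the GTSRB-style class count) is an illustrative assumption, not the researchers' code:

import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    # Tiny stand-in CNN: maps a 3x32x32 crop to per-sign-class logits.
    def __init__(self, num_classes=43):  # 43 classes, as in the GTSRB benchmark
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def detect_and_classify(frame, block=32, thresh=0.25):
    # frame: 3xHxW tensor. Cut it into disjoint blocks and score them as one
    # batch, so all sections are evaluated simultaneously, not one by one.
    c, h, w = frame.shape
    blocks = (frame.unfold(1, block, block).unfold(2, block, block)
                   .permute(1, 2, 0, 3, 4).reshape(-1, c, block, block))
    scores = blocks.flatten(1).std(dim=1)  # dummy "might contain a sign" score
    candidates = blocks[scores > thresh]   # only flagged blocks reach the CNN
    with torch.no_grad():
        return SignClassifier()(candidates).argmax(dim=1)

print(detect_and_classify(torch.rand(3, 256, 256)))  # one class id per candidate

In a real system both stages would of course be trained; the point is only the shape of the pipeline: parallel candidate search, then per-candidate classification.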


  • 5 years? (Score:5, Insightful)

    by fluffernutter ( 1411889 ) on Saturday March 25, 2017 @01:07PM (#54108669)
    We're not going to have self-driving in 5 years if they can't even freaking read signs at night yet! Further proof that automated driving is much further back than we are led to believe.
    • Exactly. The hype around AI and autonomous cars is ridiculous.
    • by sims 2 ( 994794 )

      5 years huh?
      Relevant xkcd.
      https://xkcd.com/678/ [xkcd.com]

    • by Kjella ( 173770 )

      Somehow I'm not so concerned about this one: if you can follow the rules of the road in the daytime when the sun is shining, the same rules apply when it's night and raining. The rest is "just" a sensor sensitivity/noise cancellation problem that can be worked on in parallel to everything else. You can probably do a lot with combined LIDAR/optical systems: have LIDAR identify candidate surfaces, then use optics to actually identify the sign. And you're looking for a predetermined number of surfaces of particul
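
      A rough sketch of that LIDAR-then-optical idea (the reflectivity cutoff and the pinhole intrinsics below are made-up numbers, not any sensor's real API):

      import numpy as np

      def lidar_candidates(points, min_refl=0.9):
          # points: Nx4 array of (x, y, z, reflectivity). Road signs are
          # retroreflective, so unusually bright returns are cheap candidates.
          return points[points[:, 3] >= min_refl]

      def project_to_pixels(xyz, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
          # Simple pinhole projection (camera frame, z pointing forward).
          z = np.clip(xyz[:, 2], 1e-6, None)
          return np.stack([fx * xyz[:, 0] / z + cx, fy * xyz[:, 1] / z + cy], axis=1)

      scan = np.random.rand(1000, 4)  # fake scan; real points come from the sensor
      for u, v in project_to_pixels(lidar_candidates(scan)[:, :3]):
          pass  # crop the camera image around (u, v) and run the optical classifier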

    • by neoRUR ( 674398 )

      I worked on autonomous Humvees about 15 years ago, and that same technology is what's driving the cars now.
      The things that have changed: the sensors and computers are faster and better, the algorithms and NNs are better and deeper, and there is so much data now.
      We already have autonomous cars, people are driving them, and the stuff they can detect, even at high speed, is quite a lot.
      They can definitely read signs, and are getting to be able to under all light and weather conditions.
      The higher level decision

  • by 110010001000 ( 697113 ) on Saturday March 25, 2017 @01:20PM (#54108709) Homepage Journal
    "It's able to simultaneously evaluate multiple sections of the image -- a departure from previous systems that considered parts of an image one by one".

    Wait, uh, this is cutting edge AI? What autonomous system can't evaluate multiple sections of an image?

    "convolutional neural network [which] picks up on specific features like shapes, symbols, and numbers in the image to decide which type of sign it most likely depicts." Uh, what? You mean the have an algorithm that can decide on types of street signs based on the image? Wow. Truly cutting edge. Autonomous cars are truly right around the corner.
    • "Wait, uh, this is cutting edge AI? What autonomous system can't evaluate multiple sections of an image"

      I'm assuming they've finally added parallel processing to the mix. Eventually the output of each stream will be sequentially handled by the mainline driver.

      • You are correct; it's worded terribly.

        The paper ( http://journals.plos.org/ploso... [plos.org] ) specifically mentions that their method allows for GPGPU processing due to parallel block evaluation instead of 'sliding window' evaluation of the image (a schematic contrast of the two follows below this comment):
        "Unlike the sliding-window method, which scans an image in a sequential manner, parallel window-searching divides the input image into several blocks and simultaneously performs classification on one block using each GPU core."

        This is a step forward in commodification of s
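
        A schematic contrast of the two search styles the quote describes (score() here is a dummy stand-in, not the paper's classifier):

        import numpy as np

        def score(patch):
            return patch.mean()  # dummy per-patch classifier

        def sliding_window(img, w=32, stride=8):
            # Sequential scan: one classifier call per window position.
            return [score(img[r:r + w, c:c + w])
                    for r in range(0, img.shape[0] - w + 1, stride)
                    for c in range(0, img.shape[1] - w + 1, stride)]

        def parallel_blocks(img, w=32):
            # Cut the image into disjoint blocks and score the whole stack in one
            # vectorized call -- the batched analogue of one block per GPU core.
            h, wd = img.shape
            blocks = (img[:h - h % w, :wd - wd % w]
                      .reshape(h // w, w, wd // w, w).swapaxes(1, 2).reshape(-1, w, w))
            return blocks.mean(axis=(1, 2))

        img = np.random.rand(256, 256)
        print(len(sliding_window(img)), "sequential calls vs",
              parallel_blocks(img).shape[0], "blocks scored at once")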

        • Absolutely. I imagine this would be applicable to other areas of robotic vision. I notice that they also are able to handle varying illumination and sign occlusion better. I wonder how it would scale: Is it possible to more or less double the system to split the view into eighths instead of quarters? What's the point of diminishing returns?

          Looking at nature, a fly has thousands of prisms in its eyes. They're good for motion detection, not so much for identification, though. Spiders have 8 eyes, again not so

  • People who read slashdot ought to know that 'teach' is a meaningless filler word when it comes to AI and ought to be able to understand a slightly more technical headline.
    • The headline comes straight from the source article. I'd rather see that than some lame attempt to translate it, and end up with something worse.

      • I'd rather see a well-executed attempt to translate it into adult language than a cut-and-paste any monkey can do. Which raises the question of why the new ownership of slashdot feels it can copy and paste baby talk from non-technical news sources at all. It didn't use to be that way.
  • At night out here in the boonies, I want my car to see Bambi, the deer that's trying to kill me.
