Transportation Technology

MIT Taught Self-Driving Cars To See Around Corners With Shadows

Researchers from MIT have developed a system that could help cars prevent collisions by, essentially, looking around corners. They call it ShadowCam. ExtremeTech reports: ShadowCam uses a sequence of four video frames from a camera pointed at the region just ahead of the car. The AI maps changes in light intensity over time, and specific changes can indicate another vehicle is approaching from an unseen area. This is known as Direct Sparse Odometry, a way to estimate motion by analyzing the geometry of sequential images -- it's the same technique NASA uses on Mars rovers. The system classifies each image as stationary or dynamic (moving). If it thinks the shadow points to a moving object, the AI driving the car can make changes to its path or reduce speed.

The researchers tested this system with a specially rigged "autonomous wheelchair" that navigated hallways. ShadowCam was able to detect when a person was about to walk out in front of the wheelchair with about 70 percent accuracy. With a self-driving car in a parking garage, the researchers were able to tune ShadowCam to detect approaching vehicles 0.72 seconds sooner than lidar, with an accuracy of about 86 percent. However, the system has been calibrated specifically for the lighting in those situations. The next step is to make ShadowCam work in varying light and situations.
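To make the intensity-change idea concrete, here is a minimal sketch of classifying a patch of road as static or dynamic from four frames. This is not MIT's published ShadowCam pipeline; the function name, threshold, and units are invented for illustration.

```python
import numpy as np

def classify_shadow_region(frames, threshold=2.0):
    """Label a road patch 'dynamic' or 'static' from temporal intensity change.

    frames: four consecutive grayscale images (H x W arrays) of the region
    just ahead of the car, oldest first. threshold is a made-up tuning
    constant, not a value from the paper.
    """
    stack = np.stack([np.asarray(f, dtype=np.float32) for f in frames])
    # Subtract the mean image so static shading drops out and only
    # frame-to-frame illumination changes remain.
    residual = stack - stack.mean(axis=0)
    # Average absolute change between consecutive frames over the patch.
    score = np.abs(np.diff(residual, axis=0)).mean()
    return "dynamic" if score > threshold else "static"
```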
  • More "AI" (Score:4, Interesting)

    by Known Nutter ( 988758 ) on Tuesday October 29, 2019 @09:15PM (#59360598)

    The AI maps changes in light intensity over time...

We've lost the battle on proper use of the term Artificial Intelligence. Damn near as bad a loss as the Battle of Hack.

Basically everything from statistics to image recognition is AI at this point. I wonder what the media would make of ELIZA if Google released a version in 2019.

Have you noticed how there will be half a dozen stories about the same thing on the front page, all in sequence? The editors get their marching orders from BizX, who are farming with Slashdot. I'd be half surprised if EditorDavid, Ms. Mash, and Beau aren't bots most of the time, queueing up stories with keywords foisted upon them, released on a timer, and just coming back from time to time to check/troll the comments section.
Technical terms are a lost cause; motherfuckers can't even say "height" without adding an 'h' on the end, and on the few occasions when I have the misfortune to be exposed to NPR, even they often say "is" instead of "are." The whole fucking nation's "gone Arkansas" - I blame the Clintons.
      • by eepok ( 545733 )

        The is/are thing pisses me off as well, but I've traced it to how people perceive an organization.

        McDonald's are moving away from sugary bread.
        --- Refers to all the employees acting in a concerted effort.

        McDonald's is moving away from sugary bread.
        --- Refers to the corporate entity as a singular.

        Personally, I think the singular is most appropriate in almost every case.

        • by AK Marc ( 707885 )
The singular is most common in the US. The plural is most common outside American English.
    • Re:More "AI" (Score:4, Informative)

      by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday October 30, 2019 @10:56AM (#59361996) Journal

      The AI maps changes in light intensity over time...

      We've lost the battle on proper use of the term Artificial Intelligence.

      Who is we? Computer scientists have used the term "AI" since the 50s to refer to a vaguely-defined region of "stuff that's hard". The precise definition changes over time as stuff becomes easier and areas of work acquire other names and move out of the "AI" category.

      I think you're of the opinion that AI should refer to what people in the industry call Artificial General Intelligence, AGI. This is the space of systems (none of which exist as of yet) that provide general-purpose intelligence, similar to what humans can do.

Damn near as bad a loss as the Battle of Hack.

      Completely the opposite, actually. "Hack" was a specialist term that lost its specialist meaning and acquired a different (though related) meaning in general use. Specialists still use the original sense of the word. In the case of AI, the specialist meaning of the term is the one in general use, and what you wish to do is to substitute a non-specialist, more literal meaning. If you succeeded, then it would be the same situation as "hack".

Exactly, it seems that people want to keep moving the semantic naming goalpost. That seems the opposite of what we should do; we should keep naming consistent. If we pulled anyone out of the 1950s and showed them just about any programming language or processor, they would call it AI: an "electronic brain, made of modern materials and capable of automatically making decisions."
        If people today specifically feel that AI should only refer to neural networks, or some hard-to-define future goal, that leaves out a

        • "AI" is a marketing term. It's purpose is to sell shit. There is no "potential future of AI" -- there is only a swarm of products called "AI".
      • by AK Marc ( 707885 )
        So AI has been renamed AGI because those that do the naming realized AI doesn't mean anything useful. Got it. I think everyone is in agreement, but disagreeing on how to agree.
        • So AI has been renamed AGI because those that do the naming realized AI doesn't mean anything useful. Got it. I think everyone is in agreement, but disagreeing on how to agree.

          Nonsense. The industry definition of the term "AI" is very useful. It denotes a set of new, emerging techniques that solve problems that couldn't be effectively solved by older approaches.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Tuesday October 29, 2019 @09:24PM (#59360614) Homepage Journal

    The more ways in which the vehicles catch up to us, the more hope there is that they will be safer than the majority of human drivers, not just the worst ones. I use shadows all the time while I'm driving, mostly on the highway to spot occluded vehicles ahead of ones I can see.

    Maybe they'll put cameras with zoom on the vehicle so they can see cars in reflections, next. I rarely get useful information from reflections, and it would be nice if cars could actually do tricks we don't. That would give hope that they could someday be better than the best of us.

    • Depending on how competing self-driving software is trained, it may also be using this already, and nobody ever knew, because they weren't explicitly asking it to do so.
      • I doubt it. My understanding of how the algorithms work is that they use image recognition to identify objects on and around the road, then feed those objects into the decision making algorithm. I've never seen any indication that the raw images are used in the decision making.
    • by tambo ( 310170 )

      It's not that simple. Reality is much more complicated.

      Picture one of these new shadow-detecting autonomous vehicles seeing an odd black shape on the road ahead, concluding that it's a car coming from around the corner, and violently swerving and braking to avoid it. Only it's not a shadow - it's just tar from recent road work - and the compensating measures taken by the car lead to an accident, such as sideswiping the car in the next lane or a rear-end collision due to the sharp, unanticipated braking.

      • "Picture one of these new shadow-detecting autonomous vehicles seeing an odd black shape on the road ahead, concluding that it's a car coming from around the corner, and violently swerving and braking to avoid it."

        Why wouldn't I picture things that will actually happen instead? This is a variant of the trolley problem argument. But it's bullshit, because the car isn't going to swerve into another vehicle, or a pedestrian, because it won't swerve. It probably will nail the brakes, but it also won't exceed th

      • Only it's not a shadow - it's just tar from recent road work

        Humans can distinguish tar from shadows, so why wouldn't a computer be able to as well?

        Also, shadows of moving vehicles move. Tar spots don't.

        • One of the top complaints about Tesla's autopilot is "phantom braking". It sees a shadow or reflection of something, thinks it's a threat, and hits the brakes. It doesn't happen very often, and in recent versions it's become even rarer, but it's quite scary when it happens (and dangerous if someone's behind you).

So even though it looks like a good idea at first sight, overacting on shadows may actually be counterproductive. At most it should be used as a hinting function so the car can react more quickly if the threat is confirmed by a direct visual.

          • by robsku ( 1381635 )

            So even though it looks like a good idea at first sight, overacting on shadows may actually be counterproductive. At most it should be used as a hinting function so the car can react more quickly if the threat is confirmed by a direct visual.

Obviously any such system will be tested and improved so that it is used according to its capabilities - if the system detects a car coming from behind the corner, with a 90% chance of it being a threat and a 10% chance of braking causing an accident, the car should brake. However, if we turn these numbers around, the case is different. Before a system like this is put on a vehicle that is actually going to move autonomously in traffic, it will be quite extensively tested. When it's first put on a car they may
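The parent's 90%/10% reasoning is just an expected-cost comparison; a toy sketch, with entirely invented cost numbers:

```python
def should_brake(p_threat, p_brake_accident,
                 cost_collision=100.0, cost_braking_accident=20.0):
    # Brake when the expected cost of braking is lower than the expected
    # cost of holding course. All costs here are made up for illustration.
    return p_brake_accident * cost_braking_accident < p_threat * cost_collision

should_brake(0.9, 0.1)  # True: a 90% threat outweighs a 10% braking risk
should_brake(0.1, 0.9)  # False: turn the numbers around and the car holds course
```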

        • by Viol8 ( 599362 )

          "o why wouldn't a computer be able to as well?"

The human brain has 100 billion neurons; most ANNs currently have a few thousand at most. That's why. You might as well try to get a nematode to drive a car and never make a mistake.

The human brain has 100 billion neurons; most ANNs currently have a few thousand at most.

But the ANN is orders of magnitude faster. This allows, for instance, the use of a convolutional neural network to apply the same processing kernel to each pixel of the image. Our brains don't have that option, so they require millions of neurons in parallel.

            Also, most of these neurons in the human brain are dedicated to different functions that are not needed or even useful for driving.
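The weight-sharing described above is easy to show in a few lines; a hand-rolled NumPy toy, not any production vision stack:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide one small kernel over every pixel position (valid padding).
    The same few weights are reused everywhere - the sharing that lets a
    small network cover a whole image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# One 3x3 edge kernel - nine weights total - applied at every location of
# a 64x64 frame, instead of dedicating separate neurons to each pixel.
frame = np.random.rand(64, 64).astype(np.float32)
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=np.float32)
edges = convolve2d(frame, kernel)
```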

            • by Viol8 ( 599362 )

Speed isn't everything, otherwise we wouldn't need ANNs in the first place and could just use normal programming techniques. Structure is even more important, and ANNs don't currently have it.

              "different functions that are not needed or even useful for driving."

What, like vision, hearing (do any self-driving cars have that? Nope), motion sensing, prediction, theory-of-mind comprehension?

The latter matters - if you're coming up a narrow street and another driver is coming down, how is a self-driving car going to figur

The proper response to seeing a moving shadow is rarely to swerve and brake violently. In most cases, no action is needed because either you or the shadow will yield according to the default rules. In some cases, e.g. when the shadow is supposed to yield and is still moving too quickly, a reduction in speed may be warranted to give yourself a bit more time to evaluate once the object becomes visible.
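As a sketch, the policy described above might look like this; the names and inputs are hypothetical, not anyone's shipping logic:

```python
def respond_to_moving_shadow(shadow_must_yield, shadow_closing_fast):
    # Hypothetical policy: in most encounters the default right-of-way
    # rules resolve things and no action is needed.
    if shadow_must_yield and shadow_closing_fast:
        # The unseen object should yield but does not seem to be slowing:
        # shed some speed to buy time to evaluate once it becomes visible.
        return "reduce_speed"
    return "no_action"
```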

      • by robsku ( 1381635 )

        It's not that simple. Reality is much more complicated.

        Picture one of these new shadow-detecting autonomous vehicles seeing an odd black shape on the road ahead, concluding that it's a car coming from around the corner, and violently swerving and braking to avoid it. Only it's not a shadow - it's just tar from recent road work - and the compensating measures taken by the car lead to an accident, such as sideswiping the car in the next lane or a rear-end collision due to the sharp, unanticipated braking.

        ...I thought the text specifically talked about moving shadows. Do you really think that they wouldn't think of situations like you just described? Do you really think that they wouldn't test for stuff that might fool the AI into thinking there's a car, when there isn't?

        If they were so simpleminded, they couldn't have developed ANY self-driving technology that actually works.

        • The fact that this shadow-detecting car can detect shadows in some trials is nice, but hardly sufficient to prove road-worthiness without a whole lot of trials to detect and avoid false positives.

          Obviously

          So treat this study like the weekly "scientists show that drug X cures condition Y (in mice, in lab conditions, in one trial)" headlines.

          • It's quite obvious to anyone with half a brain that this is not a finished product, but an ongoing development. Those without

      • by AK Marc ( 707885 )
        They only need to be better than the average human to be a net safety improvement, and that's a very low bar.
    • by jrumney ( 197329 ) on Wednesday October 30, 2019 @01:24AM (#59360968)

      I get info from reflections all the time. I live at the top of a hill, with a narrow winding road up to it, and cars parked along one side making the road even narrower. Approaching a bend in daylight, you can tell if a car is coming from the reflections on the parked car well before you can see around the bend.

How about sound? Like squealing tires a few cars ahead, a collision, the sound of a door opening when driving past a row of parked cars, or just the sound of children. Emergency vehicles approaching?

These are involved in incredibly complicated decisions people make. The problem is making decisions as well as humans is expensive (effort wise) with diminishing returns. At the end you'd have a machine that thinks like a human with at least the same sensory parts integrated, or more. So for quite a damn long time, we'll be settling for less I think.

      • by robsku ( 1381635 )

        These are involved in incredibly complicated decisions people make. The problem is making decisions as well as humans is expensive (effort wise) with diminishing returns. At the end you'd have a machine that thinks like a human with at least the same sensory parts integrated, or more. So for quite a damn long time, we'll be settling for less I think.

        The sort of AI used in self driving cars is specifically crafted for this purpose - it doesn't have to "think like a human", because it doesn't really work like human brain (and/or mind). It's kinda like a driving savant, it's useless for anything else, and it's completely different from our brain. The comparison between them is not fruitful for the discussion.

        I don't believe in AI singularity crap. This will be hard. Something smarter than us will not have an even easier time making something smarter than itself. There might be some jumps, but it's going to be a long slog, not an acceleration.

Humans constantly build machines that are stronger, faster, etc. than humans - why couldn't we, in theory, make an AI that's smarter than

The sort of AI used in self-driving cars is specifically crafted for this purpose - it doesn't have to "think like a human", because it doesn't really work like a human brain (and/or mind). It's kinda like a driving savant, it's useless for anything else, and it's completely different from our brain.

Except in special circumstances - for instance, when a road is blocked by a fallen tree and you have to drive across a solid line to get past.

      • by AK Marc ( 707885 )
        (Strong) AI: anything that can build something smarter than itself.
        Singularity: An AI that can build something smarter than itself in less time than it was coded.
    • by Ranbot ( 2648297 )

      I use shadows all the time while I'm driving, mostly on the highway to spot occluded vehicles ahead of ones I can see... That would give hope that they could someday be better than the best of us.

OH YEAH, I do that too. [...except when I read road signs, check a mirror or blindspot, the weather is raining or snowing, I look at my phone GPS/maps, I fiddle with the stereo or climate controls, I text or talk on my phone, I chat with my passenger, I see wildlife near the road, I have to stare at the hot chick running beside the road, I am tired/sleepy, I am rocking out to Bohemian Rhapsody like Wayne's World (my favorite movie, BTW!), or am just deep in my own thoughts...] When will AI catch up to what humans already do?

      • When will AI catch up to what humans already do?

Including making eye-to-eye contact with another driver to judge what they are likely to do next. That is critical to safe driving, and I seriously doubt if the equipment they are using now could even sense this, let alone know how to interpret it.

  • They should also teach it, on non-divided roads, to look for headlights on the roadside vegetation, snowplow-markers, guardrails, or power/phone lines, as a sign that a car is oncoming around the curve.

But then, they probably know that this is at best a work in progress.

  • Trees can cast moving shadows. In fact a lot of things can. What exactly is a shadow anyway? Are we talking about a shadow from the sun, or a shadow from a streetlight, or a shadow from headlights of a car? What if a person is rounding a corner on the sidewalk and casts a long shadow onto the street?

    • by robsku ( 1381635 )

If your point is that they need to make the system not only recognize things correctly when they are important, but also not react to shadows that are not important - well, that is quite obvious, isn't it?
If that wasn't your point, then I didn't get it.

Yes, it's very nice they can use shadows. But shadows can be deceptive if they're caused by moving lights, you know, like at night from other vehicles, even if the road is lit. And what if it's dark or overcast? Not much use then.

Perhaps it's time for this fraudulent industry to realise that driving is a multi-level ability. It's not simply a case of dumbly reacting to obstacles on the road - it also requires *thinking* ahead, not just looking ahead, and also requires a theory of mind so you know how other drivers

    • by robsku ( 1381635 )

I too believe that the people developing these systems are too dumb to be able to create any kind of working self-driving cars, because that's how it must be for them to ignore all these obvious things that you would have to take into account, like, you know, not mistaking things in ways that may result in a serious accident.

So in conclusion, I agree with you, and any self existing drive anyone might see on the road is caused by LSD in drinking water.

Also, computers can never beat the best human chess players, becaus

      • by robsku ( 1381635 )

        "self existing drive" = "self driving car" >:D
        Don't know how that happened - it's like those auto-correct errors you get with smartphones (well, I have it turned off), but I was typing with a laptop :D

      • by Viol8 ( 599362 )

        Did the Kool Aid taste nice?

Thinking ahead in a very narrow sphere such as chess, with fixed rules and boundaries, is completely different to thinking ahead in a relatively uncontrolled environment. Otherwise we'd have had self-driving cars since the 80s.

And unless a self-driving car can think and, what's more, act like a human - including flashing lights to tell the other driver to give way or move (depending on country) - it'll consistently fail in the scenarios I mentioned.

  • This should also come in handy in improving killbot performance. Early models performed embarrassingly badly against human resistance fighters hiding behind corners; this should go a long way toward addressing that deficiency.
  • Can it also determine whether the shadow is a bunny or a dog?
  • so cars of the future will diligently do anything to avoid colliding with autonomous wheelchairs, including running over crowds

    • by robsku ( 1381635 )

      Because that's what they do when they see trouble, they swerve and run over crowds. Right? Right!?

You want a serious answer? Self-driving cars have both swerved and hit people and things, and also not swerved and killed at full speed.

Oh well, bound to be some kinks needing to be ironed out and corpses to bury before it's out of beta....

  • MORE DATA! More more more more!

    Humans don't make decisions from more data. We make decisions by separating background from foreground. We're very much machines that do exactly that -- distinguish foreground from background. That's not "pattern matching" in data. It's prioritizing data.

    We see through horribly filthy windshields, and into blizzards.

    We aren't even capable of detecting background data. That's why all of our senses are relative. Can't tell you how far away that car is, not even to a resol

    • That's not "pattern matching" in data. It's prioritizing data.

      Prioritizing is pattern matching.

      • Trees are plants.

Also, no, it doesn't need to be. You can prioritize without looking at the pattern. For example, first-come-first-served. Even random would be a valid way to prioritize things.
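For illustration, two priority rules that inspect no patterns at all, per the parent's examples (trivial Python; the event names are invented):

```python
import random
from collections import deque

events = ["camera frame", "lidar sweep", "map update", "horn heard"]

# First-come-first-served: arrival order alone decides priority.
fifo = deque(events)
next_event = fifo.popleft()  # "camera frame"

# Random: also a valid priority rule, and equally pattern-blind.
random_event = random.choice(events)
```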
