AI Transportation

Programming Safety Into Self-Driving Cars 124

aarondubrow writes: Automakers have presented a vision of the future where the driver can check his or her email, chat with friends or even sleep while shuttling between home and the office. However, to AI experts, it's not clear that this vision is a realistic one. In many areas, including driving, we'll go through a long period where humans act as co-pilots or supervisors before the technology reaches full autonomy (if it ever does). In such a scenario, the car would need to communicate with drivers to alert them when they need to take over control. In cases where the driver is non-responsive, the car must be able to autonomously make the decision to safely move to the side of the road and stop. Researchers from the University of Massachusetts Amherst have developed 'fault-tolerant planning' algorithms that allow semi-autonomous machines to devise and enact a "Plan B."
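
The escalation logic described here -- alert the driver, then pull over autonomously if nobody responds -- can be pictured as a small state machine. Below is a minimal Python sketch with invented mode names and timings; it illustrates the "Plan B" idea only and is not the UMass researchers' actual algorithm.

    from enum import Enum, auto

    class Mode(Enum):
        AUTONOMOUS = auto()
        ALERT_DRIVER = auto()   # asking the human to take over
        HUMAN_CONTROL = auto()
        PULL_OVER = auto()      # the autonomous "Plan B": stop at the roadside

    ALERT_TIMEOUT_S = 8.0       # assumed grace period before falling back

    def next_mode(mode, needs_handover, driver_responsive, seconds_alerting):
        """One step of the supervisory loop."""
        if mode is Mode.AUTONOMOUS and needs_handover:
            return Mode.ALERT_DRIVER
        if mode is Mode.ALERT_DRIVER:
            if driver_responsive:
                return Mode.HUMAN_CONTROL
            if seconds_alerting > ALERT_TIMEOUT_S:
                return Mode.PULL_OVER   # driver non-responsive: enact Plan B
        return mode

    # e.g. nine seconds of unanswered alerts triggers the fallback
    print(next_mode(Mode.ALERT_DRIVER, True, False, 9.0))  # Mode.PULL_OVER
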
This discussion has been archived. No new comments can be posted.

Programming Safety Into Self-Driving Cars

Comments Filter:
  • they're a disaster (Score:4, Insightful)

    by slashmydots ( 2189826 ) on Thursday February 05, 2015 @05:20PM (#48993399)
    If you're not aware of the level of performance of current self-driving cars, let me break it down for you. They can't stop for construction or understand rerouting from it or obey temporary signs. They can't see stoplight colors while the sun is setting anywhere near behind them. They can't drive on snow at all. They will slam on the brakes for a piece of newspaper blowing across the road or other low density objects. They think puddles are obstructions and will slam on the brakes.

    They're basically deathtraps on wheels and they don't work at all plus they're illegal in several states.
    • by turkeydance ( 1266624 ) on Thursday February 05, 2015 @05:23PM (#48993435)
      that resembles quite a few drivers.
      • by Anonymous Coward

        It doesn't even remotely resemble the worst drivers we have. Current AI tech might as well be equated to a drunk driver.

        • You're right that it doesn't remotely resemble the worst drivers we have. It does bear a fair resemblance to average to highly skilled drivers. The biggest difference is that the AI is less well rounded: it is more skilled in certain core areas but weak on the exceptional cases, which are by and large predictable scenarios.
    • Re: (Score:1, Interesting)

      by Anonymous Coward

      YM:

      If one isn't aware of the performance of human drivers, let me break it down for you. They can't stop for construction or understand rerouting from it or obey temporary signs. In fact, they speed up to try to prevent other drivers from maneuvering safely. They can't see stoplight colors while the sun is setting anywhere near behind them. They can't drive on snow at all. They will slam on the brakes for a piece of newspaper blowing across the road or other low density objects. They think puddles are obstructions and will slam on the brakes.

    • by ShanghaiBill ( 739463 ) on Thursday February 05, 2015 @05:35PM (#48993577)

      They're basically deathtraps on wheels and they don't work at all

      SDCs have already logged hundreds of thousands of miles on public roads, and have a safety record better than human drivers.

      • by burtosis ( 1124179 ) on Thursday February 05, 2015 @06:10PM (#48993855)

        They're basically deathtraps on wheels and they don't work at all

        SDCs have already logged hundreds of thousands of miles on public roads, and have a safety record better than human drivers.

        Highly misleading comment. Those tests have been on perfect-condition roads, pre-planned everything, no construction, no rogue animals or children, no snow, no loose dirt or gravel, hell, I doubt it was during bar close. Compare apples to apples please. Compare straight driving on highways and roadways under perfect conditions to humans and I doubt AI is better. Compare AI to humans in adverse conditions and it's like comparing a drunken teenager getting road head while texting to, well, damn near anyone sane.

        • Re: (Score:3, Insightful)

          by Anonymous Coward

          Those tests have been on perfect-condition roads, pre-planned everything, no construction, no rogue animals or children, no snow, no loose dirt or gravel, hell, I doubt it was during bar close. Compare apples to apples please.

          He did compare apples to apples - the SDC outperforms humans driving in the same conditions in which it was tested. The fact that they haven't yet been tested in other conditions doesn't reduce the significance of that fact.

          Compare AI to humans in adverse conditions and it's like comparing a drunken teenager getting road head while texting to, well, damn near anyone sane.

        • Mod this up. Look at the Audi CES example. Reporters sat in the driver's seat 100% of the drive and the car was not autonomous in SF/urban areas and steep hill climbs to Vegas... And it was a nice sunny day...

          And 80% of those roads from SF to Vegas are nearly perfectly straight: a freeway called the I-5... and another one called the I-15. It was more of a demonstration of active speed control than driverless operation! Hard not to go straight in a car with laser alignment. I can sleep on the I-5 for 20 min and like

        • by Jeremi ( 14640 )

          If they are really "deathtraps on wheels" that "don't work at all", how have they been able to drive so many miles without any actual, you know, deaths in their traps?

          They might not be ready for consumer use yet, but clearly they do work at least somewhat.

          • by burtosis ( 1124179 ) on Friday February 06, 2015 @12:12AM (#48995807)
            I worked for 7 years in a robotics lab, so I do know a few things about vision and vehicle automation. What grinds my gears about this is that every last mile was pre-planned. Routes were mapped in GPS; every last sign, stoplight and speed limit was pre-programmed in. Every single test was on a sunny day with free-flowing traffic. Even under those circumstances the algorithms spazzed out and did very un-human-like things. Sure, it sounds nice to lock up the brakes for a blowing trash bag, but that's asking to be rear-ended and is highly dangerous.

            TL;DR: they took ideal conditions, under which normal humans fare far, far better than average, and ran their AI. They then compared this mean time between failures to what humans have to deal with on average in totally different environments - rain and snow, asshole drivers in traffic jams, unexpected icy conditions, drunken driving. It's not science, it's intellectually dishonest.
            • by AmiMoJo ( 196126 ) *

              They are prototypes. It would be naive to think that they are not going to get better, fast.

              • Yes, I would hope they are going to get better. It's an amazing technology. However, saying they are better than humans (which is widely reported in the media) is like the people who said in 1960 that computers would beat humans at chess in a few short years. Even the first real win against a human took a dedicated supercomputer, programmed with every single game its human counterpart had ever played in public, and even then - 40 years later - it only won because it was relentless and tireless and killed the human by for
        • by Anonymous Coward

          No, comparing an early prototype A.I. to a seasoned, or even average driver is misleading. A better comparison would be between Google's self-driving car and a human who has learned the theory behind driving, but the only experience of driving he had was going up and down his own driveway. Keep in mind that the driving experience has different consequences for humans and robots - a human's experience is only for that one individual to take advantage of, while the robots can share the experience and keep dev

      • SDCs have already logged hundreds of thousands of miles on public roads, and have a safety record better than human drivers.

        So, they've logged fewer operating miles than accumulate in the US in a single day? Impressive.

        And how many of those miles have been in a typical Pacific Northwest blinding rainstorm? Or after a snow storm such as the Northeast experienced last week? (Etc... etc...) Or to put it another way, the numbers logged are only impressive to the easily impressionable.

        • by mjwx ( 966435 )

          SDCs have already logged hundreds of thousands of miles on public roads, and have a safety record better than human drivers.

          So, they've logged fewer operating miles than accumulate in the US in a single day? Impressive.

          And how many of those miles have been in a typical Pacific Northwest blinding rainstorm? Or after a snow storm such as the Northeast experienced last week? (Etc... etc...) Or to put it another way, the numbers logged are only impressive to the easily impressionable.

          This. Number of hours does not make a good driver, human or otherwise. You can be a pitiful driver for your whole life and still not have a crash out of blind luck (and better drivers around you).

        • You would think on Slashdot people would be slightly aware of the state of AI. But nope. Comparing sunny-day, pre-planned-everything short jaunts to what humans have to deal with in the real world of driving makes for yet another shiny piece of media hype that some people just buy hook, line and sinker.
          • by radl33t ( 900691 )
            I would also think "slashdot people" are capable of not allowing perfect to be the enemy of good. But no, apparently incremental improvements in autonomous driving controls are unacceptable. Nothing less than KITT picking you up at the bar and driving to NY from Boston during Snowmageddon will suffice.

            Alas, I guess we're both wrong. Well, you're wrong. I'm actually just sarcastic.
            • But no, apparently incremental improvements in autonomous driving controls are unacceptable. Nothing less than KITT picking you up at the bar and driving to NY from Boston during Snowmageddon will suffice.

              That is correct, because if the computer drives most of the time, you're out of practice when it refuses to. Furthermore, if people buy autonomous cars, they will take advantage of them by, for example, having a few beers on the way. So the end result will be a lot of drunk, inexperienced drivers having li

              • by radl33t ( 900691 )
                I don't think you should make assumptions about what is or isn't acceptable. There are already lots of incremental autonomous driving controls. And they work fine. And they do not change the relationship between a driver and his liabilities.
        • by Kjella ( 173770 )

          So, they've logged fewer operating miles than accumulate in the US in a single day? Impressive. And how many of those miles have been in a typical Pacific Northwest blinding rainstorm? Or after a snow storm such as the Northeast experienced last week? (Etc... etc...) Or to put it another way, the numbers logged are only impressive to the easily impressionable.

          Even under perfect conditions you run into every kind of moron driving if you do it long enough. And 700k miles is more than the average American license holder drives in 50 years. Sure, it's a one-trick pony, but I'd rather have it do what it does right and say "it's raining, I'm not driving" than do it half-assed.

      • SDCs have already logged hundreds of thousands of miles on public roads, and have a safety record better than human drivers.

        Numbers like this have been released by Google, but it's propaganda. You can tell it's propaganda because of the lack of details. For example, how many times did a human driver have to take over the car? We don't know. Once you start asking questions and digging deep, there's a lot that Google's numbers don't tell us.

        • For example, how many times did a human driver have to take over the car? We don't know. Once you start asking questions and digging deep, there's a lot that Google's numbers don't tell us.

          You obviously didn't read any of the reports, because while you don't know, anyone who did read them knows. It's all in there.

          • You obviously didn't read any of the reports

            I read a ton of them. If you have a link, I'd love to see it.

    • That's nice. What were they like 10 years ago? What will they be like 10 years from now? 20?

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Yes, because it does not work 100% now, we should abandon the idea altogether.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      You're commenting on the performance of companies that tried to eat the whole enchilada in one bite. The companies to watch are the ones introducing autonomy features incrementally, such as lane assist, adaptive cruise control, etc. They will spend more time on a smaller amount of content in order to get it right one small piece at a time.

    • One would assume they don't look for stoplight colours. I would be worried if they did. I would assume they would look for the position of the light, exactly the same as that significant section of the population that is red / green colour blind.

      • Simply looking at the lights can be confusing to humans. What lane does that light belong to? Is it a vertical or horizontal light? Easy things like this aren't easy for current AI. They would need access to the information electronically. Furthermore, imagine signs. Not every sign is programmed in. Temporary signs too. The list keeps getting longer.

        Gentle turns and straight-line, sunny-day driving along pre-planned routes, with everything manually entered, is not comparable to human capab
        • Why do the signs need to be programmed in? Signs in every country are regulated and their design standardised. It just needs to be able to cross-reference the image of the sign with its database. That is significantly easier than the facial recognition Google is so good at. It would also likely be better than a human if the sign is hanging at a strange angle.

          Personally I think things like road furniture, signage, lights etc. are probably the relatively easy part of this challenge. The bigger challenge is likely to be the impact of weather on the sensors, things like low-density objects like newspapers or even dust clouds.
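
          A minimal sketch of that cross-referencing idea: reduce the detected sign to a feature vector and take the nearest entry in a template database. The features and numbers below are invented; a real system would use trained descriptors rather than three hand-picked values.

              import math

              SIGN_DB = {  # illustrative (sides, red_fraction, aspect_ratio) per standardised sign
                  "stop":        (8, 0.80, 1.0),
                  "yield":       (3, 0.30, 1.0),
                  "speed_limit": (4, 0.05, 1.3),
              }

              def classify(features):
                  """Nearest-neighbour match against the template database."""
                  return min(SIGN_DB, key=lambda name: math.dist(SIGN_DB[name], features))

              print(classify((8, 0.75, 1.05)))  # -> 'stop', even from a skewed view
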

          • Why do the signs need to be programmed in? Signs in every country are regulated and their design standardised. It just needs to be able to cross-reference the image of the sign with its database. That is significantly easier than the facial recognition Google is so good at. It would also likely be better than a human if the sign is hanging at a strange angle.

            Personally I think things like road furniture, signage, lights etc. are probably the relatively easy part of this challenge. The bigger challenge is likely to be the impact of weather on the sensors, things like low-density objects like newspapers or even dust clouds.

            Even other drivers on the road are probably easier than the sensor problem.

            I think your time frame is probably correct, not for getting to average human levels but for self-driving cars to be on the roads in private hands.

            Eventually signs won't need to be programmed in. Nor would you need corrected GPS to stay on roads. Currently signs are programmed in, yet this is often covered up by the people involved. Occlusions from trees, poles, wires, etc., along with poorly placed, poorly facing, or damaged signs are beyond the scope of current AI vehicles on the fly. We are not even talking about dirty sensors such as rain, snow, dirt or salt over the view of a camera either, as an example you mention - though the last 5 years in image rec

    • And yes, it is marketing puff and I'm sure a contrived setup situation. But this one identifies roadworks.

      https://www.youtube.com/watch?... [youtube.com]

    • They can't see stoplight colors while the sun is setting anywhere near behind them.

      For quite a while, on my way to work there was a traffic light in a curve, with the angle to the sun such that the driver of the car stopped at the traffic light could _not_ possibly see the colours. Because it was in a curve, about the third car in the line could see the colours without problems. Fortunately, almost everyone knew the situation, so if you were in the third car you would honk your horn as soon as the light turned green, and in the first car you would wait for someone honking.

      Yes, that mig

      • by Jeremi ( 14640 )

        The article mentioned that self-driving cars seem to have problems at four-way stop signs in the USA [...] I'm always polite and go first as quickly as possible so then everyone else can go in turn and nobody has to wait.

        Hmm, the de facto algorithm around here seems to be that you watch the order in which the other three drivers stopped, and whoever stopped first is the one who should go first. (of course that assumes all drivers actually do come to a complete stop, which isn't always the case ;))

        I think a car could probably handle that logic at least as well as a human.
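
        That first-to-stop convention is simple enough to state as code. A toy Python sketch, assuming each car can timestamp the moment it came to a complete stop; tie-breaking (e.g. yield to the right) is left out:

            from dataclasses import dataclass

            @dataclass
            class StoppedCar:
                approach: str      # 'N', 'E', 'S' or 'W'
                stop_time: float   # seconds on some shared clock

            def departure_order(cars):
                """Approaches in the order they should proceed: first stopped, first to go."""
                return [c.approach for c in sorted(cars, key=lambda c: c.stop_time)]

            cars = [StoppedCar('N', 3.1), StoppedCar('E', 2.4), StoppedCar('W', 2.9)]
            print(departure_order(cars))  # ['E', 'W', 'N']
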

        • Especially when 2-4 drivers arrive at around the same time, which is usually the case during rush hour. Two patterns emerge: either they all look at each other and no one goes, or, if there are some more decisive people among them, more than one might decide to go at the same time. Very funny. 4-way stops. Worst traffic invention ever.
    • fair comment, but surely if we intend to use AI driven cars then other changes need to happen? traffic lights broadcast their state to AI-cars (in fact that could be useful for human drivers too), construction-worker signs have small units attached to them to broadcast information and alternate-route information to the AI driven cars? (yes, yes, "but the hackers!" they'll cry. come on, certainly security is not a new problem for us)
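
      A minimal sketch of what such a broadcast could look like, assuming a pre-shared key and HMAC-signed messages so that forged or stale signals are ignored. Key distribution, replay protection and the radio layer are hand-waved, and every name here is invented:

          import hmac, hashlib, json, time

          SHARED_KEY = b"demo-key-not-for-production"  # assumption: pre-shared key

          def make_broadcast(light_id, state):
              """What a traffic light would transmit."""
              body = json.dumps({"id": light_id, "state": state, "ts": time.time()})
              tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
              return body, tag

          def verify_broadcast(body, tag, max_age_s=2.0):
              """What the car would do with a received message."""
              expect = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
              if not hmac.compare_digest(expect, tag):
                  return None               # forged or corrupted: ignore
              msg = json.loads(body)
              if time.time() - msg["ts"] > max_age_s:
                  return None               # stale: ignore
              return msg

          body, tag = make_broadcast("5th-and-main", "red")
          print(verify_broadcast(body, tag))  # {'id': '5th-and-main', 'state': 'red', ...}
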

      • by Jeremi ( 14640 )

        fair comment, but surely if we intend to use AI driven cars then other changes need to happen? traffic lights broadcast their state to AI-cars

        I think you've got it backwards -- if we intend to use AI driven cars, the cars have to be smart enough to do the right thing even in the absence of "smart infrastructure". Because there will always be a situation where the appropriate signal-broadcasting-device isn't installed, or isn't working today, or was hacked to give the wrong signal, or whatever, and the cars will still be required to work in that scenario. Given that, there's little point in designing a car that relies on such things, since it do

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      They're basically deathtraps on wheels

      As shown by the vast number of people they've killed and injured, you mean? What does that make regular cars, then?

    • So basically they are absolutely perfect for controlling garbage trucks, allowing them to be run by a 2-man team of loaders instead of a 3-man team - 1 driver + 2 loaders.

      Also, everything you describe is a solvable problem. It will take some time and some

  • by Anonymous Coward

    this is a HUGE pet peeve of mine! if we deployed self-driving cars tomorrow we'd see a huge drop in overall accident rates but we're not doing it out of fear of edge cases! guess what: human beings encounter unforeseen scenarios on the road all the time & have to make reflexive decisions in real time. guess what? we f up a large % of the time! if a computer can reduce overall accidents by double-digit %s I'll be the first to say I'll accept the risk of being one of the edge cases that may (or not) have survived had a human been behind the wheel.

    • Why are we stuck in the "one car for every person" model of transportation? Why not work toward a more efficient means of mass transport; working from home, etc. etc. Almost everything we do except for pleasure driving can be accomplished with delivery services, really efficient mass transit, or tele-whatever.
      • Why are we stuck in the "one car for every person" model of transportation?

        Put simply, ride sharing sucks. The slightly longer answer is that commute time is heavily impacted when you introduce multiple pickup and drop off points to a vehicle route. If riding the bus was awesome then only poor people would drive while rich people would hog up all the mass transit.

        Anecdotal example: My morning commute takes roughly 25 minutes door to door by car. I live in a city with excellent public transit and there is a bus stop within easy walking distance of my house. Commute time by

      • by lgw ( 121541 )

        My car has no homeless people sleeping in it, and doesn't smell faintly of piss. This makes it entirely superior to mass transit. If you want more telecommuting, first re-invent the Manager - good luck with that. Delivery is getting better, though!

    • this is a HUGE pet peeve of mine! if we deployed self-driving cars tomorrow we'd see a huge drop in overall accident rates but we're not doing it out of fear of edge cases! guess what: human beings encounter unforeseen scenarios on the road all the time & have to make reflexive decisions in real time. guess what? we f up a large % of the time! if a computer can reduce overall accidents by double-digit %s I'll be the first to say I'll accept the risk of being one of the edge cases that may (or not) have survived had a human been behind the wheel.

      it's like the vaccine debate - guess what? there ARE people who have bad outcomes who would not have otherwise but the overall net gain to society is so big we (rightfully) shame people who don't participate...

      I can see you obviously have never worked in AI or even followed it as an armchair hobby. These algorithms work great in unpopulated, pre-mapped parking lots. They work acceptably on perfect-condition roads, pre-planned with everything from DGPS to stop signs to every last detail manually entered.

      In real conditions, adverse conditions, they fail miserably. Not every road sign is mapped into computers yet, and no temporary signs are. They slam on the brakes for blowing newspapers and puddles. They can't h

      • Considering you could say every one of your points in reference to human drivers, I really don't see the difference. Most human drivers lose their minds when there is an inch of snow on the ground. It is the rare driver that does well in snow. This is true for most of your points. How many human drivers see the drunken idiot stumble in front of them? How many are able to react quickly enough to avoid them?

        • Considering you could say every one of your points in reference to human drivers, I really don't see the difference. Most human drivers lose their minds when there is an inch of snow on the ground. It is the rare driver that does well in snow. This is true for most of your points. How many human drivers see the drunken idiot stumble in front of them? How many are able to react quickly enough to avoid them?

          Organisms have been navigating the world almost since life began. Animals have had half a billion years to evolve. Despite some really fantastic leaps in image recognition in the last five years, computers still aren't better than informed humans. Humans still far exceed any computer's ability to extract abstract data from a video stream in real time. And despite all the research, AI-driven cars aren't better than inattentive, overreactive bad drivers in a general sense; in fact, not even close

  • Problem. (Score:5, Insightful)

    by khasim ( 1285 ) <brandioch.conner@gmail.com> on Thursday February 05, 2015 @05:23PM (#48993425)

    "People are unpredictable. What happens if the person is not doing what they're asked or expected to do, and the car is moving at sixty miles per hour?" Zilberstein asked.

    So the car is travelling at 60 MPH on automatic when a situation arises that requires the car to switch to human-control ... and there might be a problem with the human not reacting correctly?

    I think that the problem would be expecting the human to take control and do anything useful at that speed if the programming couldn't handle it.

    • Re:Problem. (Score:4, Insightful)

      by eth1 ( 94901 ) on Thursday February 05, 2015 @07:07PM (#48994307)

      "People are unpredictable. What happens if the person is not doing what they're asked or expected to do, and the car is moving at sixty miles per hour?" Zilberstein asked.

      So the car is travelling at 60 MPH on automatic when a situation arises that requires the car to switch to human-control ... and there might be a problem with the human not reacting correctly?

      I think that the problem would be expecting the human to take control and do anything useful at that speed if the programming couldn't handle it.

      It's more like it's unreasonable to expect a person to sit and pay enough attention to what's going on when they're not engaged in the task at all. I either want full control, or no responsibility for control.

  • by Anonymous Coward

    I'm not surprised that AI experts think self-driving cars are unrealistic. The technology they work with is way too immature/insecure to be used for a function like that.
    As it looks now, it would have to be solved with more traditional automation methods, something that AI experts are about as qualified to speculate about as your average linguist or any expert in any other field.

    • Except that no traditional automation method can handle the situational generality and complexity, combined with the massive and varying uncertainty in multiple sensor information sources whose inputs must be fused to create a (probably) good-enough-for-action state model.

      If anything can do that, it's natural general as well as specifically trained intelligence, and possibly, in the future, AI.

    • by vakuona ( 788200 ) on Thursday February 05, 2015 @06:26PM (#48993959)

      They make the mistake of thinking that you can get to self-driving cars with a lot of minuscule improvements on current technology such as automatic braking and cruise control. A self-driving car is an entirely new paradigm, much like the horseless carriage was a completely different paradigm. If you want to make a self-driving car, then the working assumption should be that it has one mode - self-driving. Actually, imagine the car without a steering wheel, no accelerator pedals or brakes. Imagine the car going round town with no driver in it. If the failure mode of your imagined self-driving car requires a driver to take over, then you have failed to create a viable self-driving car.

      • If the failure mode of your imagined self-driving car requires a driver to take over, then you have failed to create a viable self-driving car.

        Thus proving the inability to create a viable self-driving car. There will never be perfection; there will always be failure modes where a human has to take over. And I don't think it is too far fetched to accept that there are failure modes that we don't know will be failure modes until they happen. Exploding Pintos and a host of other issues that have forced safety recalls are proof of that.

        Remember the Tacoma Narrows Bridge? Do you think they'd have built that if they knew of the failure mode it had? W

        • by vakuona ( 788200 ) on Thursday February 05, 2015 @09:53PM (#48995355)

          It's hard enough for humans to stay attentive on the road when they are fully in control of the car. Can you imagine humans having to take over when something has failed? By the time the human being realises that their car has failed and they are required to take over, they will have crashed already.

          "Human taking over" is a really really bad failure mode in a self driving car. It's way worse than the computer trying to take appropriate action to prevent accidents and loss of life.

          • It's hard enough for humans to stay attentive on the road when they are fully in control of the car. Can you imagine humans having to take over when something has failed?

            I agree with you fully, and I've said this every time this kind of autonomous vehicle discussion comes up. Any system that has to prompt a human who isn't paying attention -- a human who has been told he can read, watch TV, or even sleep while the car protects him, all quite legally -- is a system waiting for catastrophe. But ...

            "Human taking over" is a really really bad failure mode in a self driving car.

            I also agree with that. But I also accept the fact that there WILL BE BAD FAILURES in these autonomous vehicles and there has to be some fallback mode to deal with them. If the autonomo

            • by vakuona ( 788200 )

              The really really bad idea is designing a system in which a human being who is not really involved in what is going on is asked, at a moment's notice, to take over. If the computer diagnoses a problem big enough, it should stop the car safely and let people out. That's all. No need to ask a person what to do. No need to continue. Computers do what people tell them to do. They don't make completely autonomous decisions.

              There is actually a conflict between making the car better at resolving failure, and requi

  • Like in California where constant earthquakes sometimes open huge gaps in the roadway and present a danger to drivers?

    Or perhaps they really are moving forward with fault tolerance by brib... er, lobbying to make it completely the passenger's fault when accidents occur? I know it worked super well for credit ratings, so maybe they really are fast-tracking its deployment.
  • From: legal@google.com
    To: larry.page@google.com, sergey.brin@google.com
    Subject: Self driving cars

    And you thought building a search engine created previously unheard of legislative scenarios.

  • A simple example of how complex this idea is would be just looking at CAPTCHA, the anti-spam solution used by Google. It's only one little static image with a couple of numbers. But cracking how to teach a machine to read it took years... And even now it's not 100% accurate in every case. So imagine now that you need to decode the whole dynamic reality surrounding the car, a reality in which everything can have similar shapes and colors. If I see a drunk person walking on the sidewalk I can determine whether that person will maybe walk on the road or not; how can a computer determine that this person is even drunk?
    • by Jeremi ( 14640 )

      If I see a drunk person walking on the sidewalk I can determine whether that person will maybe walk on the road or not; how can a computer determine that this person is even drunk?

      It doesn't matter whether the person is drunk or not. Any person, no matter how loopy their mental state, is still bound by the laws of physics. So if the car sees the person at position X, it can be certain that 1 second from now, that person will be somewhere within a circle of radius N feet around point X (where N is the maximum running speed of a human being in feet/second, plus some safety factor). So the car just has to make sure to stay out of that circle (and re-calculate X every few milliseconds
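
      A toy sketch of that reachability-circle check, with assumed numbers for human sprint speed and the safety margin:

          import math

          MAX_HUMAN_SPEED_FPS = 18.0   # ~12 mph sprint, an assumption
          SAFETY_MARGIN_FT = 5.0       # assumed padding

          def reachable_radius(dt_s):
              """Radius of the circle a pedestrian could reach within dt_s seconds."""
              return MAX_HUMAN_SPEED_FPS * dt_s + SAFETY_MARGIN_FT

          def path_is_clear(ped_xy, car_xy, heading_deg, speed_fps, horizon_s=1.0, step_s=0.05):
              """Project the car forward and keep it outside the pedestrian's circle."""
              h = math.radians(heading_deg)
              t = 0.0
              while t <= horizon_s:
                  cx = car_xy[0] + speed_fps * t * math.cos(h)
                  cy = car_xy[1] + speed_fps * t * math.sin(h)
                  if math.hypot(cx - ped_xy[0], cy - ped_xy[1]) < reachable_radius(t):
                      return False   # projected position enters the no-go circle
                  t += step_s
              return True

          # pedestrian 40 ft ahead and 10 ft to the side of a car doing 60 ft/s: too close
          print(path_is_clear((40.0, 10.0), (0.0, 0.0), 0.0, 60.0))  # False
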

  • by PPH ( 736903 ) on Thursday February 05, 2015 @05:58PM (#48993767)

    The problem of over-dependence on automation eroding piloting skill has already been addressed in the flying biz. Read about Children of the Magenta Line [flightlevelsonline.com].

    Once people give up hands-on driving experience, expect a rapid descent into complete dependence on the AI. At which time it would be better to take the steering wheel away and admit to ourselves that everyone in the car is a passenger. Even seeing a Zipcar coming down the road is enough to strike fear into the heart of the experienced driver. Here comes someone who thinks they can keep up their skill level by borrowing a car a couple of times a month.

    • by mjwx ( 966435 )

      Once people give up hands-on driving experience, expect a rapid descent into complete dependence on the AI.

      This has already happened.

      A lot of steering wheel attendants (I refuse to call them drivers) now refuse to buy a car unless it can change gear for them, brake for them and stay in the lane for them.

      People expect that lane assist and automatic emergency braking will do things for them so they don't have to pay attention to what they are doing. If you were to take one of these steering wheel attendants out of their SUV and put them into a TVR, they'd kill themselves within 5 minutes.

      Let's not even star

  • About the ethical rules that should govern decisions like saving one baby who's lying on the railway track to the left vs 5 grannies toddling across the track on the right, when you're at the controls of the track-switch.

    Now someone gets to actually program these rules into a car.

    Cool!

    • by mjwx ( 966435 )

      About the ethical rules that should govern decisions like saving one baby who's lying on the railway track to the left vs 5 grannies toddling across the track on the right, when you're at the controls of the track-switch.

      Now someone gets to actually program these rules into a car.

      Cool!

      The problem is, how will the car know which car contains the grandma, which one contains the brain surgeon on the way to save the pope's life and which one contains 3 kids?

      The simple answer from engineers is that it doesn't.

      Engineers and safety experts have already got a bunch of rules to determine what to do in an emergency. Rule 1: avoid a collision if at all possible. Rule 2: if a collision is unavoidable, do not swerve; brake and stay straight, as a rear-end crash is the safest kind of crash.

      It d
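
      Those two rules fit in a few lines. A hedged sketch with stand-in predicates; real collision-avoidance logic is obviously far more involved:

          def emergency_action(collision_threatened, escape_path_clear):
              if not collision_threatened:
                  return "continue"
              if escape_path_clear:
                  return "avoid"                    # rule 1: avoid if at all possible
              return "brake_hard_stay_straight"     # rule 2: never swerve; a rear-end
                                                    # crash is the safest kind

          print(emergency_action(True, False))  # -> brake_hard_stay_straight
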

      • by Jeremi ( 14640 )

        so you'll find a lot of the worst drivers taking manual control because the bleeping car isn't tailgating or lane weaving like they want it to.

        Don't forget the ones who will download and install the special aftermarket "racer AI" which is guaranteed to go at least 90 whenever physically possible, and to drift every turn... good times.

  • I can see it now: the inevitable Nationwide commercial as the lawsuits against self-driving cars occur, when they run over kids and pets who "shouldn't have been there".

    Was it a hacker? That excuse won't fly.

    • I think the answer you're looking for here is "insurance". We know that humans are bad at properly assessing risk, but insurance companies aren't. If you can get insurance for your AI-driven car (and why not, likely we'll end up in a world where the AI is much safer than mere humans) then it's not that much of an issue.

      Of course you still need to convince people that they are safe and should be allowed on the roads, but that's what 'security theatre' is for (looking at you, airport security mechanisms!)

      • You have it backwards. You would need to get the auto manufacturer to buy insurance. It will be an icy day in hell before I am forced to buy a shitty AI car only to have bad, poorly performing (in real conditions), and malicious coding force injury upon myself and others while constantly driving up the price of insurance to insane and unsustainable rates.
      • If you can get insurance for your AI-driven car

        The General will insure anyone.

        (and why not, likely we'll end up in a world where the AI is much safer than mere humans)

        A self-fulfilling prophecy. As humans start to abandon driving to the promise of the uber-safe AI, they'll lose skill at driving and the AI will become better at it than humans.

        That's not because the AI will actually be uber-safe and perfect at driving, it will be because humans will take the easy road and not bother to learn how to do it well. It's trivial to be better than someone who doesn't know how to do something, but that doesn't mean you are actually good at whatev

    • by gnupun ( 752725 )

      Was it a hacker?

      This is the biggest problem with self-driving cars, in the future, once all the driving kinks have been worked out. So far, there have been no networked computers that are safe from hacker attacks. This makes SDCs a convenient tool for killing/injuring people without consequences by using some hacking tools.

  • but certainly things will improve over time. to be honest I'm surprised at the negativity in the comments on this article.

    sure AI can't handle all scenarios, and it is worse when only some cars are AI controlled, but I would expect there to be 'partial' implementations first.

    what about motorways that are designed or updated specifically for AI vehicles? for example the AI does "see" the traffic light: it receives a signal from a traffic light controller or some other system.
    what about roads where it is limited to only AI vehicles (then they can all talk to each other and you don't have the human drivers behaving unpredictably)

    • but certainly things will improve over time. to be honest I'm surprised at the negativity in the comments on this article.

      sure AI can't handle all scenarios, and it is worse when only some cars are AI controlled, but I would expect there to be 'partial' implementations first.

      what about motorways that are designed or updated specifically for AI vehicles? for example the AI does "see" the traffic light: it receives a signal from a traffic light controller or some other system. what about roads where it is limited to only AI vehicles (then they can all talk to each other and you don't have the human drivers behaving unpredictably)

      The negativity I've posted and seen posted stems from the misleading way AI driving is presented. People pushing the technology get sound bites promoted by the media saying things like "AI has better-than-human performance." In reality it's comparing sunny-day, pre-planned everything to icy roads and hellish jam-packed commutes. It's the same reason I tend to hate on coal-powered cars that hide their higher-than-economy-car tailpipe emissions - fully electric cars.

      The problem is the hype borders on or c

    • What about criminal liability? Are you willing to risk your freedom to a shitty AI and a EULA that lets them off the hook, makes you go to court on your own, and means you can't get the source code as well?

      • You just need to have the AI auto manufacturer, like Google, also sell the insurance to you. What could possibly go wrong?
        • you hope.

          But this makes for a good movie.

          Say someone buys an auto-drive car and it ends up killing someone due to a software bug, but due to the EULA the owner / passenger is at fault and they go to prison. While doing some hard time they think about getting revenge, and when they get out and find the only jobs are McJobs, they say "I was better off in the joint," go to Google (or some other place), hunt down the coders / PHBs and kill them (or maybe just beat the shit out of them) to get back into the priso

          • I would totally watch that. It's already happening, in a way - like the guy they just released after three years over the sudden-acceleration defect he claimed was there from day one, and no one believed him.
  • allow semi-autonomous machines to devise and enact a "Plan B."

    Actually, Haskell would make this otherwise almost impossible plan easy with the "Either a b" datatype! Thanks, functional programming, for making AI easy!

  • They should look at the drone vendors--same problem, with even worse failure (it falls from the sky).

  • "Failure in brakes.dll."

  • Rules like: stay on the road; don't sweat flying trash bags; stop at red stop lights; don't run into other vehicles; don't run off the road.

    Computers are rule-based systems. They are really good at following rules. It is the only thing they can do. If this, do that. Do that X many times or until Y. So beef up the rule sets, improve the sensors, and voila, you have a safe driving system.

    Now, I don't know if sensor technology is up to the task yet, or even if we have enough computing power. But the act of dri
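
    In that spirit, a toy "if this, do that" rule table; every condition and action below is an invented stand-in:

        RULES = [  # first matching rule wins; "beefing up the rule set" = adding rows
            (lambda s: s["light"] == "red",                  "stop"),
            (lambda s: s["obstacle"] and s["density"] < 0.1, "drive_through"),  # trash bag
            (lambda s: s["obstacle"],                        "brake"),
            (lambda s: s["headway_s"] < 2.0,                 "slow_down"),
        ]

        def decide(state, default="cruise"):
            for condition, action in RULES:
                if condition(state):
                    return action
            return default

        # a blowing newspaper is low density, so drive through instead of braking hard
        print(decide({"light": "green", "obstacle": True, "density": 0.05, "headway_s": 3.0}))
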
