AI Transportation

Self-Driving Car Startup Wants to Spare AI From Making Life-or-Death Decisions (washingtonpost.com)

Instead of having AI in a self-driving car decide whether to kill its driver or pedestrians, the Washington Post reports there's a new philosophy gaining traction: Why not stop cars from getting into life-or-death situations in the first place? After all, the whole point of automated cars is to create road conditions where vehicles are more aware than humans are, and thus better at predicting and preventing accidents. That might avoid some of the rare occurrences where human life hangs in the balance of a split-second decision... Whom to kill or injure probably isn't a decision you'd like to leave up to your car, or the company manufacturing it, anytime soon. That's the thinking now about advanced AI: It's supposed to prevent the scenarios that lead to crashes, making the choice of who's to die one that the AI should never have to face.

Humans get distracted by texting, while cars don't care what your friends have to say. Humans might miss objects obscured by their vehicle's blind spot. Lidar can pick those things up, and 360 cameras should work even if your eyes get tired. Radar can bounce around from one vehicle to the next, and might spot a car decelerating up ahead faster than a human can... [Serial entrepreneur Barry] Lunn is the founder and CEO of Provizio, an accident-prevention technology company. Provizio's secret sauce is a "five-dimensional" vision system made up of high-end radar, lidar and camera imaging. The company builds an Intel vision processor and Nvidia graphics processor directly onto its in-house radar sensor, enabling cars to run machine-learning algorithms directly on the radar sensor. The result is a stack of perception technology that sees farther and wider, and processes road data faster than traditional autonomy tech, Lunn says. Swift predictive analytics gives vehicles and drivers more time to react to other cars.

The founder has worked in vision technology for nearly a decade and has previously worked with NASA, General Motors and Boeing under the radar company Arralis, which Lunn sold in 2017. The start-up is in talks with big automakers, and its vision has a strong team of trailblazers behind it, including Scott Thayer and Jeff Mishler, developers of early versions of autonomous tech for Google's Waymo and Uber... Lunn thinks the auto industry prematurely pushed autonomy as a solution, long before it was safe or practical to remove human drivers from the equation. He says AI decision-making will play a pivotal role in the future of auto safety, but only after it has been shown to reduce the issues that lead to crashes. The goal is to get the tech inside passenger cars so that the system can learn from human drivers, and understand how they make decisions, before allowing the AI to decide what happens in specified instances.

Comments Filter:
  • by jrumney ( 197329 ) on Sunday August 08, 2021 @09:40PM (#61670977)

    Note to investors: any company that thinks it can avoid dealing with exceptional situations by simply avoiding them is destined to fail.

    • by saloomy ( 2817221 ) on Sunday August 08, 2021 @10:08PM (#61671037)
      Furthermore, it is our instinct to self-preserve. Our AI driven decision makers should follow the same logic. People will not buy AI devices if the device behaves in a way we (the owner / driver) would not want it to.
      • Yep, I don't want my car to decide to sacrifice me just because there's a "more valuable" person in the other car.

        How do you value people anyway? Is Donald Trump more valuable than me?

        • by ShanghaiBill ( 739463 ) on Monday August 09, 2021 @12:12AM (#61671269)

          Just think of all the money we can save.

          Instead of seat belts, airbags, and child car seats, just don't get in accidents!!!

          Everyone can also cancel their car insurance.

          Why did no one think of this before?

          • "Instead of seat belts, airbags, and child car seats, just don't get in accidents!!!"

            A bit like Boeing.
            Instead of fixing their planes, pilots get a long list of "don'ts".

        • by dromgodis ( 4533247 ) on Monday August 09, 2021 @01:52AM (#61671401)

          You don't have to worry about that evaluation. When the vehicles detect that they are about to collide, their insurance companies' algorithms will make a calculation based on their projected cost and future revenue from you, and negotiate in real-time. All you have to do is make sure to have the most expensive insurance and vehicle.

          • All you have to do is make sure to have the most expensive insurance and vehicle.

            They would collectively want the smallest payout to be left alive, unless I am missing something.

            • by dromgodis ( 4533247 ) on Monday August 09, 2021 @04:12AM (#61671547)

              Yes, although alive but injured is a bad option. Injury treatment is expensive while a funeral isn't even on the insurance company's dime (I believe).

              Car 1: "Hi! I'm about to hit you."
              Car 2: "Ouch, that's gonna be an expensive recovery. I see that we have the same insurance company. If I turn 15 degrees right in 4.5 meters, you could probably nail my passenger with your front suspension support beam through my vulnerable spot in the door."
              Car 1: "Make that 13 degrees and 5.5 meters; I'll have to adjust so that the airbag will shoot the emblem into the throat of my passenger."
              Car 2: "Done. Have a nice crash."

        • "How do you value people anyway?"

          Quantity?
          What if it's a busload of children?
          That said, what if it's a busload of Texas prisoners headed to be executed?

      • by AmiMoJo ( 196126 ) on Monday August 09, 2021 @07:00AM (#61671691) Homepage Journal

        For legal reasons no such decision will be made.

        Self driving cars will obey all traffic laws and speed limits. They will not swerve out of lane to avoid a collision, they will instead apply maximum braking force and come to a stop as quickly as possible. Nothing else.

        Legally, if one vehicle has stopped and the other is moving, it's the moving vehicle's fault. It's also nearly impossible to assign blame for the outcome of stopping, even if some other action could have reduced the damage or injury. Braking hard is what is taught, and it is the best way to avoid liability in an accident.

        • This is probably the most insightful comment I've read on this topic. If I had mod points, you'd have them.

    • Well at least by failing, they will avoid the exceptional situation of actually inventing the first self-driving car.
    • Re: (Score:2, Flamebait)

      by gweihir ( 88907 )

      Au contraire! We have just redefined reality to not have any unsafe situations. Hence our system is perfectly safe!

      Humans do it all the time successfully and we learned from the best!

      Risk of a COVID-shot? Simple, just redefine COVID as "just the flu" and you do not need that shot anymore! Problem solved! Afraid of a round planet? Just redefine it as flat! No risk falling off the surface anymore! But beware that edge.... Existential terror because your body eventually dies and you do not know what comes afte

    • Note to investors: any company that thinks they can avoid dealing with exceptional situations by avoiding them is destined to failure.

      Insurance industry, climate change. Lots of things are about trying to stay out of bad situations.

      • Why in the world would anyone buy insurance? Because shit happens, that's why. Buying insurance doesn't prevent your home from burning down, car insurance doesn't prevent crashes. It only pays for the repairs.

        Now sure, the insurance industry seeks to increase its profits and improve its books by *reducing risks*. That's why it created Underwriters Laboratories, the fire code, etc.: by reducing risk, not by pretending that all bad things can be prevented.

        When it comes to driving, idiots will pull out in

        • But insurance for property like a house is much less expensive than insurance for something you are making decisions in like a car. The insurance for a self driving car should just be another property insurance since people are not controlling it, and the whole reason you are using it is because it is safe.
        • I guess the word I'm looking for is liability. People should be paying for insurance on the property of the car but not for the liability of them. When I get in a taxi, I don't pay for insurance at all since I don't own it and I am not liable for it. When I get in an automated car I will own it but I am still not liable for it.
          • Thanks for clearing up what you were thinking.

            > But insurance for property like a house is much less expensive than insurance for something you are making decisions in like a car.
            > guess the word I'm looking for is liability.

            Home insurance typically includes $300,000 in liability coverage.
            Home insurance typically costs more, not less, than car insurance.

            > The insurance for a self driving car should just be another property insurance since people are not controlling it

            Suppose I send a model rocket i

    • by bluegutang ( 2814641 ) on Monday August 09, 2021 @01:58AM (#61671405)

      It's just dumb marketing spin. Every autonomous car system, and any sensible human driver for that matter, already attempts to "prevent the scenarios that lead to crashes". At best, this system is just more effective at the task than other systems.

    • by Pimpy ( 143938 )

      A big part of autonomous vehicle control systems is accounting for and dealing with variance and errors in sensor readings, with the human driver and vehicle responsible for maintaining situational awareness at different automation levels. At higher automation levels, this is done by the vehicle through exteroceptive sensing, which by design includes an error rate. Humans are good at predicting future events - e.g. a ball rolling into the street would generally imply that there's a child not far behind it,

  • by oldgraybeard ( 2939809 ) on Sunday August 08, 2021 @09:42PM (#61670981)
    is not the problem is to turn it all off and keep it off.
  • Plenty of time to react if the car isn't going very fast.

    -That's 8kph to you whiners.
  • by devslash0 ( 4203435 ) on Sunday August 08, 2021 @09:47PM (#61671005)
    These algorithms have nothing to do with safety and deciding who to save and who to spare. The true question driving (pun intended) these systems is "Who should we save to face the least amount of backlash and legal charges if they decide to sue us as a company?".
    • by AmiMoJo ( 196126 )

      Not even that. Even if the car knows it has a family of 5 litigious individuals on-board it won't decide to run over the old man instead of killing them. All it will do is apply the brakes as hard as possible and continue following the lane it is in.

      Anything else creates legal liability for the decision. Braking as hard as possible and not changing direction always carries the least liability.

  • You just slow vehicles down to the point that they always have enough time to avoid collisions at speeds that would hurt someone. We could do that now. But that means top speeds of less than 15 mph... The problem is political.
  • by ctilsie242 ( 4841247 ) on Sunday August 08, 2021 @09:52PM (#61671013)

    If an AI is driving, it is going to face situations constantly. Is the pedestrian going to just book it across the road? Is the car coming the other way going to cross onto the wrong side of the road and hit head-on? Is the person behind, who is slamming Pan Galactic Gargle Blasters, going to plow into the car from the rear at warp 1? Or are multiple things going to happen at once, like a pedestrian jumping out just as you start a left turn and another car changes lanes?

    Having the AI wait for the driver to wake up, and gauge the situation is pointless. By the time the person at the wheel sees what is going on, the airbags have deployed and the wreck has already happened.

    AIs have to make life-or-death decisions. No way around it. However, a well-trained AI, especially one with a ton of urban experience, will almost always do better than a human, because it can react faster, and it is not drunk, stoned, texting, shaving, putting makeup on, eating, flipping someone off, or all of the above at the same time. Of course, this isn't 100% -- AIs can always wind up in a weird state or whatnot, but that is what AI/ML is for.

    • You are assuming the training of the AI will be for it to help the driver, instead of for it to help the manufacturer.

      AI helping the driver leads to a fight for open-source driverless cars, but it doesn't end there! You understand that, right?

      If the AI isn't there to help the manufacturer, then this ends at custom firmware, firmware hacking, jailbreaking your Ford, and open-source turbo kits.

      If everything is locked down, secret, and unbreakable, then it, too, will be there to help the manufacturer, not you.
    • by Tom ( 822 )

      You are correct. But you miss the real problem.

      Who is to blame?

      As long as you have human drivers, that's easy - the driver. He can then try to pin it on an equipment malfunction or whatever in court.

      If an AI is driving... we are witnessing a corporate game of hot potato - the car manufacturer doesn't want to be blamed, neither the component manufacturer, nor the software company that wrote and trained the AI, etc.

      That's the actual reason that they are on the level of "driver has to be always ready to take b

      • In the criminal justice system, a judge has the power to cut the bullshit. But it may take a really bad crash to get there.

        • by Tom ( 822 )

          How will a judge "cut the bullshit" ?

          Imagine this question remains unanswered. AI driven car crashes into a group of school kids. Driver says: "I wasn't driving." and all the companies involved in making the car, the AI, the sensors, etc. all point to each other.

          On the basis of which law will the judge hold anyone accountable?

          • Contempt of court.
            Let's say they try to stop discovery of things like logs or source code, or refuse to let a third-party expert look at them.

            I don't think a judge will like someone on the stand saying that, due to an NDA, they can't say anything about that question.

    • No AI in an autonomous car will make life-and-death decisions.
      They look ahead and drive slowly enough
      to stop within half the visible range.
      Just like everyone else is obliged to do.
      And if something unexpected happens: it brakes!
      Worst case, an emergency brake to a full stop, risking a rear-ending. JUST LIKE YOU LEARNED IT IN DRIVING SCHOOL!

      Any autonomous car behaving differently would never get a license for the road.
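The half-the-visible-range rule above is just arithmetic. As a hedged sketch (the 7 m/s^2 deceleration and the reaction times are illustrative assumptions, not figures from any vendor or regulation), the highest speed v satisfying v*t + v^2/(2a) <= d/2 for sight distance d is the positive root of a quadratic:

```python
import math

def max_safe_speed(sight_m: float, decel: float = 7.0, react_s: float = 0.5) -> float:
    """Largest speed v (m/s) with v*react_s + v**2/(2*decel) <= sight_m / 2.

    That inequality rearranges to v**2 + 2*decel*react_s*v - decel*sight_m = 0,
    solved here for the positive root. decel ~7 m/s^2 is a rough dry-road
    braking figure; react_s is the perception-to-brake latency. Both numbers
    are illustrative assumptions.
    """
    return -decel * react_s + math.sqrt((decel * react_s) ** 2 + decel * sight_m)

# With 100 m of clear sight, a 0.5 s reactor may drive ~23 m/s (~83 km/h);
# a human needing 1.5 s is limited to ~18 m/s (~65 km/h).
fast_reactor = max_safe_speed(100, react_s=0.5)
slow_reactor = max_safe_speed(100, react_s=1.5)
```

The same sight distance supports a noticeably higher speed as the perception-to-brake latency drops, which is the whole argument for machine reaction times.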

  • by labnet ( 457441 ) on Sunday August 08, 2021 @09:58PM (#61671025)

    In the 'issues to be solved' column, pedestrian-vs-driver is so unlikely it's hardly worth bothering with.
    I'm more concerned about hackers creating assassin cars, or worse, some nefarious agent creating a mass-murder event by hacking millions of drive-by-wire, always-internet-connected, two-ton killing machines.

    • Not so sure about that, anyone with that kind of skill can do one better, same as they do now: Make money directly. Imagine being ransomwared in a hacked taxi, pay up or be driven to Detroit.
      • "NERDY WHITE USER - DEPOSIT $5000 OR I HIT A LITTLE BLACK KID AND RACE OFF"

        "NOW THAT I HAVE RUN OVER THAT BLACK KID - DEPOSIT $20000 OR I DRIVE BACK TO THAT SCENE"

            "NOW THAT I HAVE DRIVEN BACK TO THE SCENE, SLAMMED ON THE BRAKES JUST BEFORE RUNNING THE KID OVER AGAIN, DEPOSIT $100000 OR I UNLOCK THE DOORS"
    • by kwerle ( 39371 )

      Black swan:
      https://www.nytimes.com/2018/0... [nytimes.com]

  • by gweihir ( 88907 ) on Sunday August 08, 2021 @10:17PM (#61671051)

    The whole idea comes from philosophers who have no clue how the technology actually works and who assumed that a self-driving car will have complete data about the situation it is in _and_ time to make elaborate decisions based on it. (It will also have had no advance warning, otherwise it would never have gotten into that situation, making the scenario even more stupid.) That is, of course, complete nonsense and only shows that the people who came up with these scenarios do not understand safety engineering one bit. Instead they made an invalid comparison to a human making a decision, which is basically animism, and meaningless.

    In actual reality, a self-driving car would not make this fantasy decision, because it cannot: it will not even know it is in that situation. What it will do instead is always drive so that it can brake successfully for all reasonably to-be-expected hazards. You know, like a good human driver. When it is faced with what appears to be an unsafe situation, instead of pondering some bullshit ethical implications for a few seconds, it will select the best option to make the situation safe again and execute it immediately. Almost always that will be to reduce speed, from a slight reduction up to slamming on the brakes. And that is it. No ethical dilemma. Fast decision. Best possible response in a non-understood situation. Safety engineering the way people with a clue do it.

    • by sjames ( 1099 )

      What it will do instead is always drive so that it can brake successfully for all reasonably to-be-expected hazards.

      And then a huge boulder lands in the road right in front of it. Or a sinkhole opens, or a power pole falls, or a tree, etc, etc, etc. Try to dodge and probably kill the people by the roadside, or hit it head-on and probably kill the passengers...

      • by gweihir ( 88907 )

        Completely irrelevant. Too improbable. Also, the reaction to slam on the brakes is both valid in all of these and what most human drivers would do. "Dodging" with a car driving fast is something not even experts can do in most cases. Please stop commenting on things you do not even have a minimal clue about.

        • “Divert into an adjacent lane if clear” is a valid option, and one that human drivers often use in an emergency. And the “clear” part is where the machine would have a distinct advantage over humans, as it should be able to make that evaluation accurately in a snap second.

          I fully agree that the notion of AI having to make ethical decisions is stupid. It is not going to evaluate outcomes like “do I slam into that boulder full speed or risk nudging a few fellow road users into
            And the "clear" part is where the machine would have a distinct advantage over humans, as it should be able to make that evaluation accurately in a snap second.

            Absolutely not. That's not at ALL how this works. The computer can't make an accurate snap decision either. The computer is ALWAYS determining whether the neighboring lane is empty, so it knows already. And that's precisely the same strategy a responsible human driver uses. They know where their escape routes are.

            • The computer is ALWAYS determining whether the neighboring lane is empty, so it knows already.

              Exactly. And it can do so in all directions continuously, unlike a human driver who can only check one corner or mirror at a time. So it is going to be better at it.

        • by sjames ( 1099 )

          I see news of trees falling into the road all the time, often the day after a major storm passes through the area. I have seen video of cars falling into sinkholes that opened in front of them while they were driving. I have personally dodged the related hazard of someone running a stop sign and then freezing up, deer-like, while blocking both lanes in my direction of travel. Had I just hit the brakes, it would have been a rather bad accident.

          Fortunately in that last case it wasn't quite the cl

    • I would expect the car's decision making process will have a set of priorities: Can do, can do in a pinch, should not do and best in a bad situation. Trying to dial that list in will give insurance adjusters heartburn for years.
      • by gweihir ( 88907 )

        Nope. The car does not have the skills and lacks the decision-making capabilities and the information to use such a list. This idea is bullshit.

        • You've made a couple of these very broad, sweeping "the system can't do that" statements here... I'm interested to hear what is backing up these assertions.

          This kind of priority-based behaviour system is nothing terribly special in robotics in general, so I'm intrigued as to why you feel what is a fairly accepted approach will never be used in autonomous cars.

          I have similar questions regarding your comment above that autonomous cars wouldn't ever attempt a dodge because "not even experts can do in most case
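For what it's worth, the priority-based behaviour pattern referred to above is easy to sketch. A minimal, hypothetical arbiter in the priority-ordered style common in robotics (the behaviour names, thresholds, and world-state keys are invented for illustration; no real vehicle stack is this simple):

```python
from typing import Callable, NamedTuple

class Behavior(NamedTuple):
    name: str
    applies: Callable[[dict], bool]  # is this behavior relevant right now?

# Highest priority first: the arbiter executes the first applicable behavior.
# All world-state keys (obstacle_dist_m, stopping_dist_m, adjacent_lane_clear)
# are hypothetical inputs invented for this sketch.
BEHAVIORS = [
    Behavior("emergency_brake",
             lambda w: w["obstacle_dist_m"] < w["stopping_dist_m"]),
    Behavior("swerve_to_clear_lane",
             lambda w: w["obstacle_dist_m"] < 2 * w["stopping_dist_m"]
             and w["adjacent_lane_clear"]),
    Behavior("slow_down",
             lambda w: w["obstacle_dist_m"] < 4 * w["stopping_dist_m"]),
    Behavior("follow_lane", lambda w: True),  # default: carry on
]

def arbitrate(world: dict) -> str:
    """Pick the highest-priority behavior whose condition holds."""
    for b in BEHAVIORS:
        if b.applies(world):
            return b.name
    return "follow_lane"  # unreachable given the catch-all above
```

Dialing in those thresholds and the ordering is exactly the part that would keep lawyers and insurance adjusters busy.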

    • by Trailer Trash ( 60756 ) on Sunday August 08, 2021 @11:04PM (#61671133) Homepage

      When I read tripe like "we'll just avoid these situations to begin with" it's clear they're not talking to you and me. They're talking to investors - like the kind who were (bluntly) stupid enough to fall for Theranos. It's blatantly obvious that avoiding decisions like this in the first place is desirable, but it's also blatantly obvious that it's not always possible. If kids run out from behind trees on each side of the road and there's no path that avoids a collision, that is something no amount of technology can fix, except possibly something that can see through trees. There are plenty of scenarios like that. They're unavoidable.

      I mean, when I read this stuff I feel like Elizabeth Holmes is in disguise working on something new....

      • by gweihir ( 88907 ) on Sunday August 08, 2021 @11:43PM (#61671219)

        It is always possible to avoid making these decisions. The default safety-engineering strategy of "reduce system energy" (i.e., brake, no steering) is always available, and it takes no time to decide on and implement. It requires no _planning_, no additional data, and no decision process. And in most cases it solves the problem or reduces it sufficiently. On average it will come out superior, except in systems volatile and dangerous enough to be "experts only". Driving is not "experts only".

        I do agree that this is a bullshit strategy to solve a completely fictional problem that was created to convince no-clue investors.

      • Re: (Score:2, Informative)

        by AmiMoJo ( 196126 )

        If kids run out from behind trees on each side of a road and there's no path to avoid collision

        They don't care about that scenario because it's entirely the kid's fault. There is no liability for the self driving car because it reacted as fast as it reasonably could.

        As long as the car obeys all traffic laws it has an airtight defence - it was obeying the law, so either the other party or the law is the cause of the accident.

        It's cases where the car makes a mistake that are interesting. Say it mis-reads a green light on a building as a traffic light and moves out onto a junction, causing an accident.

        • by bws111 ( 1216812 )

          That is one incredibly naive point of view. 'Obeying the law' may shield you from criminal liability, but it certainly does not protect you against civil liability. All any competent attorney has to do is get an engineer to admit that there was even a single additional thing that might have prevented the accident, and you have lost the case.

    • Physicists often start their analysis with "imagine a spherical cow". Of course no cow is spherical, but analyzing the simple case gives you a lot of insight into more complicated cases.

      I think the same is true here. No there will not be an explicit choice between "kill driver" and "kill bystanders". But there may be choices like "how closely must this object approach my lane before I swerve to avoid it even though the weather is bad". One AI will set the threshold at a different place than another,

    • The interesting question is how far an AI would go to prevent dangerous situations. Assume there is a huge tanker with a human driver way too close behind you, and that slamming the brakes will result in instant death. Also assume signaling the tanker driver doesn't help. A human might get out of the lane, get off the road entirely, or perhaps slow down just to annoy the tanker and force it to overtake. But what would an AI do? Slowing down could make the situation worse, but getting out of the way creates an

    • Instead they did an invalid comparison to a human making a decision, which is basically animism and meaningless

      Doubly so, because in most such situations a human would be unable to make a decision. Human reactions and thought processes are too slow.

  • by dohzer ( 867770 )

    Why not stop cars from getting in life-or-death situations in the first place?

    Because the car would need to drive excessively slowly, which would make the wealthy owners/passengers unhappy.

  • motorcyclist (Score:5, Insightful)

    by bigtreeman ( 565428 ) <[treecolin] [at] [gmail.com]> on Sunday August 08, 2021 @10:29PM (#61671081)

    Drive as if you're on a motorcycle. Any old biker knows how to look well ahead and prepare for possible scenarios.
    How do you teach an AI system to look at the car mirrors near you and look for the reflection of the car driver's eyes? One of the first things I ever learnt.
    A motorcyclist doesn't care so much about a pedestrian because in an accident both might die. It is not a calculation of saving the pedestrian or passenger, assume all will die.
    Well ahead you have calculated the density of vehicles, number of pedestrians in the surrounding area and the safest place to be.
    What will I do if that truck decides to turn across my path, I have already left safe space and have an exit strategy.
    Like playing chess and planning all strategies, the further ahead you plan the better.
    The best plan is getting out in the open, not following other vehicles, slowing to conditions and leaving the longest time to react.
    I'm still alive, proof it works!

    • Ha, motorcycle drivers? No, they go splat like country road bugs on the windshield at night. 15 percent of all USA traffic deaths are motorcycle drivers but they're half a percent of the traffic and three percent of registered vehicles. Old bikers are just lucky and have their crash stories, they don't know shit.

      • The higher death percentage is mainly because motorcyclists are very unprotected and their vehicles are more easily disrupted than cars.

        Bigtreeman is spot on. If all car drivers (and motorcyclists) could act according to good motorcyclist practices as mentioned, they would be safer for themselves and for others. Perhaps it is easier to imprint this into algorithms than people.

        • If all drivers were responsible then there would be less risk of motorcycles being in collisions, but they would still be less safe than cars because the loss of one tire at speed is more hazardous, and the loss of two tires is catastrophic. But they would be safer than they are today.

          Of course, if all motorcycle riders were responsible, they would also be safer. I typically see motorcycle riders doing dumb shit.

    • by dargaud ( 518470 )

      Drive as if you're on a motorcycle

      You're kidding, right? I live in the mountains and bikers are a huge nuisance. From the noise first and foremost, but also in how they drive: going 90 kph in 30 kph zones in tiny villages, doing fast starts and wheelies in those same zones, and driving like wannabe racers on mountain roads with no visibility, fallen rocks and lots of car traffic. There are on average 140 bike accidents a year on that ONE road, so save me a tear or two about 'preparing for possible scenarios'.
      Bikers believe that with all the no

  • This is going to ultimately be a futile effort. One that ought to be made, sure, minimizing the chances of ever getting into a situation where a decision about who dies needs to be made is definitely a goal worth pursuing. But ultimately Murphy is out there, waiting. No matter how good your design is, no matter how many safety systems you incorporate, eventually something will fail in an unsafe manner and your car will end up on those trolley tracks and someone's going to have to decide which way to throw t

  • Provizio's secret sauce is a "five-dimensional" vision system made up of high-end radar, lidar and camera imaging.

    Contrast that with Tesla's approach [youtube.com], which relies on vision alone.

    DoJo [youtu.be]

  • The person in the driver's seat assumes all liability for damage or injury that was caused by the AI. If nobody is in the driver's seat, then liability falls to the owner of the car.

    Neither an AI nor the company should be responsible for the decisions that an AI makes. If someone is unwilling to assume responsibility for the choices their car makes, then they should not enable the AI on it for that trip. Full stop.

    • The person in the driver's seat assumes all liability for damage or injury that was caused by the AI. If nobody is in the driver's seat, then liability falls to the owner of the car.

      Neither an AI nor the company should be responsible for the decisions that an AI makes. If someone is unwilling to assume responsibility for the choices their car makes, then they should not enable the AI on it for that trip. Full stop.

      So in other words you don't want driverless cars. Got it.

      The entire premise of the thought experiments is flawed. As someone who has studied a fair bit of philosophy, I can say there are no correct answers to the questions posited. Shoving that incredibly complex moral calculus from a machine to a person just means that the meat computer will take whatever panicked action first plinkos into its head. An action that will almost certainly be slower and more deadly (statistically) than any of the choices the com

        The entire premise of the thought experiments is flawed. As someone who has studied a fair bit of philosophy, I can say there are no correct answers to the questions posited. Shoving that incredibly complex moral calculus from a machine to a person just means that the meat computer will take whatever panicked action first plinkos into its head. An action that will almost certainly be slower and more deadly (statistically) than any of the choices the computer would have made.

        Hence the existence of training in a lot of professions, so that the instinctual reaction is statistically the correct action for a given situation. Note that most driving countries have little to no such educational structure in place for drivers, and the outcome is predictable.

      • My guess is that insurance will work exactly like GP describes: either the owner or the operator (person setting the AI in motion) will be liable, depending on how that currently works in your country. My guess is also that insurance companies might charge extra to cover AI-related accidents when the first level 5 self driving cars hit the market... but only a little bit extra. And that it will not be long before they will offer a discount for letting the AI drive.

        Keep in mind that in many countries, i
      • by mark-t ( 151149 )

        So in other words you don't want driverless cars.

        Not at all... my point is that if or when the systems are advanced enough, the so-called "risk" of such an incident is going to fall below a threshold that people will start trusting them with increasing regularity anyways, despite car companies washing their hands of liability.

  • Instead of having AI in a self-driving car decide whether to kill its driver or pedestrians...

    why not just make that the driver's decision?

    They can make it a two direction toggle in the vehicle's settings, one you can only change when the car is not in motion.
    And then have it be the driver's liability if they choose the latter setting.
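    A minimal sketch of that toggle idea in Python (the class and setting names here are hypothetical, just to show the "only changeable while parked" rule):

```python
# Sketch of the proposed two-way toggle: an ethics preference the
# driver can only change while the vehicle is stationary. All names
# are made up for illustration, not from any real vehicle API.

class EthicsSetting:
    CHOICES = ("protect_occupants", "protect_pedestrians")

    def __init__(self):
        self.choice = self.CHOICES[0]  # default: protect occupants

    def set_choice(self, choice, speed_ms):
        """Change the preference; refuse if the car is moving."""
        if speed_ms > 0:
            raise RuntimeError("setting can only be changed while parked")
        if choice not in self.CHOICES:
            raise ValueError(f"unknown choice: {choice}")
        self.choice = choice  # the logged choice is the driver's liability

s = EthicsSetting()
s.set_choice("protect_pedestrians", speed_ms=0.0)  # parked: allowed
print(s.choice)
```

    The speed check is the whole point: the choice is deliberate, made in advance, and on the record, which is what shifting liability to the driver would require.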

    • Instead of having AI in a self-driving car decide whether to kill its driver or pedestrians...

      why not just make that the driver's decision?

      They can make it a two direction toggle in the vehicle's settings, one you can only change when the car is not in motion. And then have it be the driver's liability if they choose the latter setting.

      I think the idea is to make any accident the driver's liability and not that of the AI manufacturer. One of the biggest hurdles in creating AI-driven cars is to find a way to offload the liability for the AI's mistakes entirely onto the car's owner, even if the person isn't even driving. That being said, I fail to see what the point is in having an AI driving feature if I have to have my hands on the wheel and my feet on the brake, ready to intervene at a 1/10th of a second's notice in case the AI makes a mistake.

  • "AI should decide this or that", this has always been a silly thought by clowns who have no idea. The goal is to reduce net deaths over stupid humans, even if AI makes mistakes.

    Lawyers are holding this back as the field is terrified of lawsuits. Hence they are responsible for net deaths due to delays in introduction of these systems.

    Much like they are responsible for more deaths last century than war, profiteering off drug lawsuits leading to idiotic 10 year testing cycles. Nobody wants to be the one who

    • The goal is to reduce net deaths over stupid humans

      Whose goal?

      For our society, it *should* arguably be.

      In practice, the goals are probably:
      1. Make money for the vehicle companies' stock owners and venture capitalists.
      2. Make commuting and long-distance car travel less boring and unproductive.

  • Fool proof (Score:4, Insightful)

    by Wizardess ( 888790 ) on Monday August 09, 2021 @01:11AM (#61671349)

    One must always remember one simple fact. Every time you make your product more fool proof God accepts the challenge and produces a better fool.
    {o.o}

    • And yet road deaths per mile have dropped drastically over the last century as we instituted standardized signs, drivers licenses, compulsory insurance, seat belts, drunk driving laws, air bags, and other protective measures.

  • AI is currently primarily used for perception. The actual planning decisions are made by expert systems and other algorithms. These planners already trade off safety and practicality. For example, a car could always stay far from the car in front and drive slowly (like the early Google cars), but in many situations such a car might be more unsafe, because other cars will impatiently try to maneuver around it (like what happened around the early Google cars). Some inherently risky situations are completely unavoidable.
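    That safety-versus-practicality trade-off is often expressed as a weighted cost over candidate trajectories. A toy sketch (the weights, cost terms, and candidates below are invented for illustration, not from any real planner):

```python
# Toy trajectory planner: score each candidate (following-gap, speed)
# by a weighted sum of a safety cost and a progress cost, then pick
# the cheapest. Weights and numbers are illustrative only.

def plan(candidates, w_safety=50.0, w_progress=0.3):
    """Return the (gap_m, speed_ms) candidate with the lowest cost."""
    def cost(c):
        gap_m, speed_ms = c
        safety = 1.0 / max(gap_m, 0.1)      # penalize small following gaps
        progress = max(30.0 - speed_ms, 0)  # penalize crawling along
        return w_safety * safety + w_progress * progress
    return min(candidates, key=cost)

# A huge gap at a crawl is "safe" in isolation, but the progress term
# keeps the planner from becoming the obstacle everyone swerves around.
options = [(50.0, 5.0), (20.0, 15.0), (5.0, 25.0)]
print(plan(options))  # picks the middle option: (20.0, 15.0)
```

    The point of the comment above falls out of the math: set `w_progress` to zero and the planner always crawls with a huge gap, which is exactly the early-Google-car behavior that made surrounding traffic more dangerous.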

  • How about: the car keeps its lane, doesn't speed, slows down when someone looks like they might cross the road, and slams the brakes when something unexpectedly appears on the road ahead of it?
    Yes, from time to time an accident will happen, but less so than if we let people drive.

  • Humans are the worst.

    The goal is to get the tech inside passenger cars so that the system can learn from human drivers, and understand how they make decisions before allowing the AI to decide what happens in specified instances.

    Humans have faulty decision-making and flaws in their reasoning, attention span, and physical abilities that cause accidents in the first place.

  • There is this thing called physics, and it has this rule called "inertia". A nice guy named Newton told us about this law before any of us were even born.

    So, what would that mean for the self-driving vehicle? If you are doing 40mph in a city zone and a kid runs in front of you, you cannot escape by saying "I give up, here is the steering wheel". Even if the vehicle does not do anything, a collision will occur. Even if the vehicle tried to brake, a collision would still occur; if you try to steer away,
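    Back-of-the-envelope, the inertia point looks like this in Python (the 1.5 s human reaction time and ~7 m/s² braking deceleration on dry asphalt are assumed typical figures, not measurements):

```python
# Rough stopping-distance sketch: at 40 mph, even perfect braking
# cannot avoid an obstacle that appears only a few meters ahead.
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def stopping_distance_m(speed_mph, reaction_s, decel_ms2):
    """Distance covered during reaction time plus braking to a stop."""
    v = speed_mph * MPH_TO_MS            # speed in m/s
    reaction = v * reaction_s            # constant speed while reacting
    braking = v * v / (2.0 * decel_ms2)  # v^2 / (2a), basic kinematics
    return reaction + braking

# Alert human: ~1.5 s to react; machine: ~0.1 s; both brake at ~7 m/s^2.
human = stopping_distance_m(40, 1.5, 7.0)  # roughly 50 m
ai = stopping_distance_m(40, 0.1, 7.0)     # roughly 25 m
print(f"human: {human:.1f} m, AI: {ai:.1f} m")
```

    Faster reaction roughly halves the stopping distance here, but neither number is zero: if the kid steps out 10 m ahead, physics wins no matter who is holding the wheel.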

  • by Tom ( 822 )

    Why not stop cars from getting in life-or-death situations in the first place?

    Because life isn't make-a-wish.

    We all would prefer if a computer never had to make that decision. However, life being messy, we understand that sooner or later, that exact situation we'd like to avoid will occur. And when it does, literally ANYTHING is better than a computer going "no data for this scenario. don't know what to do."

    As they say: Expect the best, prepare for the worst.

    • Why are "avoid being in the bad situation in the first place," and "if we're in a bad situation, do what's possible to prevent or minimize loss of life," mutually exclusive? Can we not try to do both?
      • by Tom ( 822 )

        If your basic selling point is "avoid bad situation - oh look, problem solved!" then yes, that's exclusive.

  • So whenever the AI can't make a "good" decision, the car will simply stop and wait for human input. Of course, this will result in countless incidents of self-driving cars being rear-ended. But that won't be the fault of the car - with rear-end collisions, the other driver is always "to blame".

  • Lightweight naïve puff piece; it contributes nothing to either the engineering or the ethical aspects of the debate.

    People tend to believe that any system that improves statistical road safety while maintaining traffic flows will be 'better than humans'. This has a couple of problems:

    1. It equates 'current situation' with 'humans'. Road safety has improved hugely year on year without replacing humans, so it's not like having a person in the car caps safety at some constant amount.

    2. It assumes people don't c

  • There are many roads with two lanes, each going in the opposite direction. There will always be a risk of a car from the opposite lane having a drunk driver or dodging a rabbit and getting into the middle of my lane right in front of my car.

    * I am not sure how some people in this discussion believe such life-or-death situations can be avoided. Even stopping when a car goes by is not safe enough.

    * Apart from slamming the brakes, there is a choice that I (or my AI) have: slam the car going in the opposite

  • No fucking AI please.
    Just a car I drive.
  • My car protects me.

    Kill the pedestrian.

  • That's a lot more complex brain-power than literally any other life-form on earth. There simply isn't any animal, plant, or mineral that manages so many senses at once, just to navigate the world.

    So I'm confident in saying that it's the wrong direction.

    We have a word for the opposite: focus. I choose to believe that focus is the better solution -- the ability to disregard all of the excess stimuli.

    And so I'll look at birds. Big flocks of birds. How many crashes do you see? (yes, there are some)

    And yes,

  • Really, please reply below if you have EVER been in a situation where you had to choose who to run into?

    People have been driving for over a century, and not once have I ever heard of a court case where someone got into trouble because they chose to hit a pedestrian nun holding the only copy of the cure for cancer rather than hit a bus full of kids.

    • Never actually hit anything yet, but I've sometimes had to choose whether to endanger a vehicle on one hand versus a pedestrian or bicycle on the other. The car is usually the less vulnerable, so that is my default assumption. Usually, at least where I live, more aware also, hence more able to react, and other cars' reactions to me swerving to avoid a pedestrian, and me swerving to avoid someone else doing the same, have definitely prevented accidents.

      Regarding risking my safety versus others'? If I'm al

  • ... and let God (Google) sort them out later.

  • Driverless cars will kill a lot of pedestrians, cyclists, motorcyclists, and put a lot of hard-working people out of a job. They fix a problem that only the owner of a large company has. In the event of a war, we may lose GPS, and then all of those vehicles will be road obstructions. The only thing good that will come from driverless cars is when it's all said and done and the bodies are buried, there will be proof that many vehicle deaths are attributed to bad city planning.
