Transportation AI The Courts

Volvo Will Accept Liability For Self-Driving Car Crashes (bbc.com) 203

An anonymous reader writes: Volvo has announced it will accept "full liability" for accidents when one of its cars is driving autonomously. It joins Mercedes and Google in this claim, hoping to convince regulators that it's worthwhile to allow testing of such vehicles on public roads. Volvo's CTO said, "Everybody is aware of the fact that driverless technology will never be perfect — one day there will be an accident. So the question becomes who is responsible and we think it's unrealistic to put that responsibility on our customers." Of course, this is limited to flaws in the self-driving system. If the driver does something inappropriate, or if another vehicle causes the accident, then they're still liable. It's also questionable how the courts would treat a promise for liability, but presumably this can be cleared up with agreements when customers start actually using the technology.
This discussion has been archived. No new comments can be posted.


  • by sethstorm ( 512897 ) on Thursday October 08, 2015 @12:33PM (#50686929) Homepage

    Given that Volvo is now a PRC-backed concern under Geely, it's easy for them to just throw out money.

• Don't be too sure of that; China has been burning through its currency holdings lately to prop up the economy. [businessinsider.com]

    • Well, it was easy for Google to do it first, so your comment has no insight value.

      As to the summary, no they won't just make a general "promise" for the courts to work out, they'll simply issue insurance and the courts won't see any difference except which party is providing the insurance. It will still work the same way; you sue the owner of the vehicle, and the insurance company provides their lawyer and pays any judgement or settlement.

If it is a dual-mode drivable car, then it will be up to the two insuran

  • by Virtucon ( 127420 ) on Thursday October 08, 2015 @12:39PM (#50686983)

Even though they'll take responsibility, in every state in the US you must still have liability coverage. If these companies are to be their own underwriters, so to speak, then they'd have to jump through hurdles to be approved to operate as an insurance company as well. They could obviously partner with insurance companies instead.

    • If insurers only had to deal with 4-5 automakers vs 50 million drivers it would certainly reduce their paperwork and the hassle that goes with it. My friend is a claims adjuster and has a bunch of stories along the lines of: "Someone stole the oil out of my engine which is why my car broke down so I need you to pay me for it".

      Anyway I'll file this under common sense. If Volvo software is driving your car and causes an accident, Volvo (the driver) is liable. If something else causes an accident involving you

      • Your Friend's Job (Score:2, Informative)

        by Anonymous Coward

        How long will your friend have a job if insurance companies only have to deal with a few car companies?

How long will car insurance companies be around? The car manufacturers will self-insure with re-insurance to stave off massive catastrophe.

Claims adjusters are pencil-pushing paper shufflers. I'm married to one. Assuming self-driving cars and the inevitability of the manufacturers matching Volvo's tactic, there will be no claims adjusters.

        • As long as you have a non-self-driving car, you're probably going to want or be required by law to have insurance.

          As long as your self-driving car has a manual override and the car company's insurance guarantee won't cover incidents when that override is engaged, you're probably going to want or be required by law to have insurance.

          Eventually, will there be no need for automotive claims adjusters? Perhaps. How long will it be before "eventually" occurs? A long time IMO.

        • by TWX ( 665546 )
          Auto insurance will be around as long as there are uninsured drivers and other incidents beyond the fault of the owner or authorized driver.

          Rates may reduce dramatically if liability is reduced to those cases, but the need for the insurance will still be there.
        • How long will your friend have a job if insurance companies only have to deal with a few car companies?

          Given that he's a claims adjuster and not a salesman, his job should be fairly secure - he might have to scale back from 40 hours to 30 hours.

          Remember, he's not just adjusting claims for on road accidents, but things like windshield repairs, vandalism, theft, etc...

          For that matter, you might end up with an interesting split - liability is taken by Volvo for any damage caused by the car, including something like hitting a tree. It's still a good idea for the driver to pick up:
          Under/Uninsured Motorist, theft

        • by mjr167 ( 2477430 )
          Is that a bad thing? Life moves on... when your industry becomes obsolete, you have to move on too.
      • by Altrag ( 195300 )

        Volvo (the driver) is liable

        There's trickiness to that statement. In particular, keeping up proper maintenance on your car is fairly important in order to keep things running optimally.

Presumably if Volvo (or Google or whoever) is willing to take on this liability, they're fairly confident in their sensors being able to detect anything too far out of whack and prevent the car from moving until it's repaired (or at least shut off the auto-driving software, along with a clause that their liability only covers times that it's on).

    • by gstoddart ( 321705 ) on Thursday October 08, 2015 @12:47PM (#50687043) Homepage

      But it will have to be made to mean something.

I've been saying for quite a while that self-driving cars can't just go into a failure mode which says "OK, meat sock, you do it, I'm confused" and expect humans to be able to respond or take liability.

      It's completely unrealistic to expect humans to transition from not actively driving to being required to take over in the event of an emergency.

      Why would I pay insurance on a self-driving car? That would be idiotic, and basically means everyone else is footing the bill for the adoption of unfinished technology.

If the passengers aren't the source of the risk, they sure as hell shouldn't be the ones paying for the insurance.

A vehicle won't be 100% autonomous, at least not for the foreseeable future. Meaning, there's still a steering wheel and pedals for human interaction. So until those are removed, expect to still pay insurance for the vehicle. Even then, you have other hazards that can total a vehicle while parked: hail, floods, landslides, theft... etc.

Most likely, these newer vehicles that employ autonomous driving will allow for auto insurance at a reduced rate for the owner. In fact, in the insurance business t

        • by 0123456 ( 636235 )

A vehicle won't be 100% autonomous, at least not for the foreseeable future. Meaning, there's still a steering wheel and pedals for human interaction.

          But the fanboys keep telling us that all the human drivers will be gone in five years because Google.

          Why would I want a 'driverless car' if I can't sit in the back drinking whiskey because the car might expect me to take over at any second? What's the point?

          • But the fanboys keep telling us that all the human drivers will be gone in five years because Google.

            No they don't. That's shorter than the normal life of the cars already on the road.

            They may well say that autonomous cars will be in consumers hands within 5 years. And they may well be right.

As to whether the fully autonomous approach or the gradually-add-more-automation-to-existing-cars approach is better, we'll find out. American companies are generally trying the former, European companies the latter. Results, not theoretical arguments, will decide the winner.

            Why would I want a 'driverless car' if I can't sit in the back drinking whiskey because the car might expect me to take over at any second? What's the point?

            What's the point of automatic transmission?

      • by Altrag ( 195300 )

        being required to take over in the event of an emergency

        Emergencies are when the cars are most likely to NOT want the meatsock taking over. People are really really bad at making decisions while panicked. Pretty much anything the software can do to minimize impact in the case of emergency is likely going to be a smarter idea than what 95% of the population would do, even if they were in active control and alert the entire time.

        The type of things that would likely be passed to manual control would be "we've arrived at the mall parking lot and I'm not going to e

    • by tripleevenfall ( 1990004 ) on Thursday October 08, 2015 @12:49PM (#50687061)

Volvo is offering to indemnify individual owners against flaws in the self-driving system. Of course, you'd have to prove somehow that the self-driving system was responsible, and do it by going up against a massive corporation's legal department.

      • Sit in the back seat. Or don't buy one.

        Honestly, until they get the issues of liability sorted out, the self driving car is a complete non-starter .. precisely because of crap like this.

        • Volvo *seems* more trustworthy than most, and perhaps it would be your insurance company paying your claim, and then deciding to fight it out with Volvo themselves. However, what if they deny your claim? Are you going to be able to fight them? Will anyone?

          Really, this gesture doesn't mean much. There will have to be a new structure devised for determining liability around self-driving cars. New laws, new types of insurance, etc. (I'm guessing it isn't going to be cheap to insure one of these)

          None of that ex

Volvo isn't making the expansive claim that Google is. The gesture doesn't mean much, as you said. You're correct, your insurance will pay, and they may or may not then try to blame Volvo based on this. But the part you miss is that whether they fail or succeed, they won't be telling you about it. And it won't come back to you. It can't; they already paid your claim, and what happens with Volvo isn't a new fact about your accident.

            Your guess about it not being cheap is silly, there is already public crash data o

        • Sit in the back seat. Or don't buy one.

          Honestly, until they get the issues of liability sorted out, the self driving car is a complete non-starter .. precisely because of crap like this.

This is for governments to sort out by making laws, just as they have for vaccines. It's the same deal. Over a population it will save many lives. In some specific instances it may kill someone. The government gets to make laws that protect the vendors from the specific cases so the general case can be realized.

        • Sit in the back seat. Or don't buy one.

          Honestly, until they get the issues of liability sorted out, the self driving car is a complete non-starter .. precisely because of crap like this.

          You're claiming that if they don't "sort out" the imaginary "issues" that the whole product is a "non-starter." I would like to point out that if they don't come up with a new system, I'll just put the self-driving car on my existing insurance, and can have all the other benefits of the product. The rates will be low as soon as there is vehicle crash data available, which will happen before they even go on sale. That data is already being collected and analyzed by actuaries!

        • This isn't a big problem. You'll have to have your own insurance on the car for the foreseeable future (at least until the liability issues are sorted out), and so you're covered. Volvo's assurance might make your insurance less expensive (and just having a self-driving car might lower the rates also).

      • My guess is that self-driving cars will, in the case of accidents, have a "black box" that will be able to tell investigators just what was going on with the car including whether self-driving mode was engaged or not. So if the accident investigators determine that your car was at fault, but your car was in self-driving mode at the time, you'd be off the hook for liability.

        • by JazzLad ( 935151 )
          And if there is anything we've learned from VW lately, it's that we can trust auto manufacturers.
      • by eth1 ( 94901 )

Volvo is offering to indemnify individual owners against flaws in the self-driving system. Of course, you'd have to prove somehow that the self-driving system was responsible, and do it by going up against a massive corporation's legal department.

        On the other hand, if your self-driving car has a crash, who do you think people are more likely to sue? You, or the corp with deep pockets? Lawyers will probably be lining up for contingency fees to go after the corp.

        • Lawyers will probably be lining up for contingency fees to go after the corp.

          Assuming that Volvo does the 'smart' thing and retains an insurance company to act as a *processor* for claims, they might not be so ready to line up. Volvo would be able to show, in most cases at least, that a reasonable payment offer was extended. This tends to limit punitive damages, which is where they can really make their money.

          At least until Volvo has enough self-driving cars to justify having their own claims office and people.

Volvo is offering to indemnify individual owners against flaws in the self-driving system. Of course, you'd have to prove somehow that the self-driving system was responsible, and do it by going up against a massive corporation's legal department.

          On the other hand, if your self-driving car has a crash, who do you think people are more likely to sue? You, or the corp with deep pockets? Lawyers will probably be lining up for contingency fees to go after the corp.

          You betray your ignorance of insurance. That is how it already works; you don't sue an insurance company, you sue the driver, and the insurance company provides their lawyer, and then pays the claim. Story time. My friend was a passenger in his girlfriend's car, and they were in an accident. It was ruled "no fault" (translation: both drivers made mistakes) and so his gf's insurance company was liable for his injuries. But they denied the claim, even though it was a rather obvious situation. In order for the

    • by mh1997 ( 1065630 )

      Even though they'll take responsibility, in every state in the US you must still have liability coverage. If these companies are to be their own underwriters so to speak then they'd have to jump through hurdles to be approved to operate as an insurance company as well. They could obviously partner with insurance companies as well.

      In many states, you do not need insurance, but proof of financial security which can be a surety bond with the DMV. In my state it's only $50,000. The company I work for does this and we are in no way associated with or approved as an Insurance Company.

      • It seems to me that $50K would buy a lot of insurance, and that's a significant sum of money to sit around in a savings account accruing approximately no interest.

        • by dj245 ( 732906 )

          It seems to me that $50K would buy a lot of insurance, and that's a significant sum of money to sit around in a savings account accruing approximately no interest.

          The GP specifically mentioned a surety bond [wikipedia.org] on file with the DMV. A surety bond is not a cash bond. In fact, it somewhat resembles insurance.

Surety bonds basically work like this: I go to my bank and say I want a surety bond for $10,000. The bank checks my financial situation and says "ok, DJ245 has enough assets, he could easily come up with $10,000 if he needed to". I pay my bank a small fee (around 1-2% of the bond amount) and they basically write a letter that says "DJ245 can come up with $10,0
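The fee arithmetic in the comment above can be sketched as a quick calculation. This is a minimal illustration, assuming the 1-2% fee range quoted by the commenter; the function name and default rate are made up for the sketch, not real bank terms.

```python
def surety_bond_fee(bond_amount: float, fee_rate: float = 0.015) -> float:
    """Fee paid to the bank for guaranteeing the bond amount.

    Unlike a cash deposit, only this fee leaves your pocket; the bank
    merely promises you could produce the full amount if required.
    """
    return bond_amount * fee_rate

# A $50,000 DMV bond at an assumed 1.5% fee costs $750, versus tying up
# the full $50,000 in a near-zero-interest account.
print(surety_bond_fee(50_000))  # 750.0
```

This is why a surety bond "somewhat resembles insurance": you pay a small recurring cost for a third party's guarantee rather than parking the principal yourself.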

    • by Nidi62 ( 1525137 )

      Even though they'll take responsibility, in every state in the US you must still have liability coverage. If these companies are to be their own underwriters so to speak then they'd have to jump through hurdles to be approved to operate as an insurance company as well. They could obviously partner with insurance companies as well.

      Or they just work with their current insurance providers. Sure they would have to renegotiate rates, but since major companies (especially manufacturers of finished goods) already have insurance for all types of liabilities both during and after production it should be relatively easy to extend said liability to driverless cars. No need to act as your own insurer.

    • There aren't any real mysterious hurdles, you just deposit n dollars into a bank account, as per State or Federal self-insurance guidelines, and the government stamps the insurance cert as soon as the bank verifies the account. Even for individuals, self-insurance doesn't require a bunch of hurdles, it requires depositing real money into an account, having the State verify the amount, and then in some States you have to show them that the money is still there every year or two.

      But they won't, because they'd

  • by gurps_npc ( 621217 ) on Thursday October 08, 2015 @12:55PM (#50687091) Homepage
    Eventually they will see it as a "feature", rather than a bug. Buy our car and WE pay for the insurance. Of course, in reality, the price of insurance will be bundled into the vehicle.

Also, self-insuring is not as big a deal as some people seem to think it is. Yes, there will be some legal/regulatory hurdles, but a lot of that has to do with having the financial resources to pay it off, which Volvo will either still have or be out of business.

    More importantly, it will eventually lead to huge profits as current computers are already far safer drivers than human beings.

    Always remember it's like being chased by a bear - you don't have to be faster than the bear, just faster than your competitors.

    • by bsolar ( 1176767 )

      More importantly, it will eventually lead to huge profits as current computers are already far safer drivers than human beings.

      Not necessarily since a lower risk should translate into a lower insurance premium. Actually in some fields it's very strictly regulated and the insurance company is mandated by law to pay back to the insured any risk-based surplus within a few years. Of course if you instead give the insurance free rein...

  • >> Of course, this is limited to flaws in the self-driving system.

    Oh your car chose to kill a kid on a bike instead of hit an old person crossing the road? Yeah sorry you're on your own since we arbitrarily choose to not identify that as a flaw in our system.

    • You've just hit on an interesting scenario that will be to Volvo's advantage.

      Volvo is driving. For any accident, they accept full responsibility. However, a holy-crap scenario arises where the computer has no viable options. Clearly, Volvo is still fully accepting responsibility.

      Except, in that type of scenario, I'm going to grab the wheel and try to do something. Since I've done something in this worst case scenario, their lawyers will cite the computer data indicating that 1.4 seconds before the accid

      • by 0123456 ( 636235 )

        No. In that scenario, they'll presumably do what aircraft manufacturers do. The autodriver will turn off, and they'll blame 'driver error!' when you crash.

    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Thursday October 08, 2015 @01:43PM (#50687549)

      Oh your car chose to kill a kid on a bike instead of hit an old person crossing the road?

      What? Why do you think that a car would be programmed to hit "obstacle B" when "obstacle A" appears in front of it?

      Instead, wouldn't the car be programmed to avoid ALL obstacles and apply the brakes with maximum efficiency?

      • by The-Ixian ( 168184 ) on Thursday October 08, 2015 @01:59PM (#50687675)

        Exactly this.

        As a matter of fact, the computer will know about the problem long (hundreds of milliseconds) before you see it and will already be reacting.

        The idea that you could react faster or make a better critical decision than the computer is sort of funny actually.

        • by 0123456 ( 636235 )

          The idea that you could react faster or make a better critical decision than the computer is sort of funny actually.

          Belief in the infallibility of computers and programmers is sort of funny, actually.

          I test-drove an SUV last year which would beep if you started crossing the lines in the road. Wow, brilliant, right? Except, for at least six months a year, you can't see the lines on the road around here.

          • Belief in the infallibility of computers and programmers is sort of funny, actually.

            I test-drove an SUV last year which would beep if you started crossing the lines in the road. Wow, brilliant, right? Except, for at least six months a year, you can't see the lines on the road around here.

            Not to mention the other 6 months of the year when there's road construction and multiple markings on the road.

          • by Ichijo ( 607641 )

            Belief in the infallibility of computers and programmers is sort of funny, actually.

            The Space Shuttle Columbia's computer did a remarkably good job [spaceflightnow.com] of keeping the nose pointed in the right direction while the craft was disintegrating, right up until the time it lost hydraulic pressure and control became impossible:

            the [plasma] breach ultimately caused unusual aerodynamic drag to develop on the left side of the spacecraft, forcing Columbia's flight computers to adjust the shuttle's roll trim with the elevo

          • Version 1 of something didn't work guys. Lets pack up all R&D and give up.

          • by Aighearach ( 97333 ) on Thursday October 08, 2015 @04:13PM (#50688719) Homepage

            You erect the straw man "computers are infallible" to attempt to defeat the claim that computers react more quickly than humans. Fail.

            Also, the thing you drove wasn't a commercially available self-driving car, it was a different thing, very primitive with a limited intended function that is different than a self-driving car. One could almost think you were comparing apples to oranges, but in this case it is more like comparing an apple to a cartoon orange sticker.

          • by amiga3D ( 567632 ) on Thursday October 08, 2015 @08:28PM (#50690461)

            He didn't say computers were infallible, just better than the average driver. That's not hard at all.

        • The idea that you could react faster or make a better critical decision than the computer is sort of funny actually.

Your reaction to this article is also sort of funny. The situation you describe, with the computer making faster and better decisions, is what you expect to happen: a desirable output. And most of the time this is what WILL happen.

          However, in liability issues we are not interested in the desired result. We are interested in failures. When things go wrong in whatever imaginable or unimaginable way. As this Volvo CTO seems to be more aware of than you, this WILL happen some day. That will most likely be the sam

          • Because of the way insurance works, that won't be a real issue. What they offer will either meet the requirements of auto liability insurance, in which case it won't matter what their analysis of the fault is, or it won't meet that requirement, and you'll have additional insurance that actually covers the accident.

Accident fault is based on traffic laws and which vehicle went where at which time. Whether they own up to a technical fault will be a separate issue for them to fight out with the insurance companies.

        • As a matter of fact, the computer will know about the problem long (hundreds of milliseconds) before you see it and will already be reacting.

          Knowing about a problem hundreds of milliseconds before a human does doesn't mean the computer will have a better solution than the human would. At some point, there will be the problem of a person stepping into a crosswalk inappropriately at the same time a bike rider blows through a stop sign and becomes an obstacle. When the solution is "slam on the brakes" because there are other people where the car would choose to swerve, and there is no physical way of stopping before hitting one of the new obstacle

          • The part you're getting wrong is that it is never appropriate to swerve. If you had time to check that it is safe to do so, you'd have time to stop. If somebody steps into the crosswalk when you don't have time to stop, your duty is to brake as quickly as possible to reduce the speed before impact. DO NOT SWERVE. The self-driving car is going to get this right 100% of the time; it won't be programmed to panic and create a new accident because of a rash action. Those milliseconds will directly translate into
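The claim that those saved milliseconds translate directly into stopping distance can be checked with back-of-the-envelope physics. All the numbers below are illustrative assumptions (a 50 km/h urban speed, a 1.0 s human reaction delay versus 0.2 s for sensors plus compute, and a 0.7 friction coefficient), not measured values for any real vehicle.

```python
def stopping_distance(speed_ms: float, reaction_s: float,
                      mu: float = 0.7, g: float = 9.81) -> float:
    """Distance covered during the reaction delay, plus braking distance
    under constant deceleration: v * t_react + v^2 / (2 * mu * g)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * mu * g)

speed = 50 / 3.6  # 50 km/h in m/s (~13.9 m/s)
human = stopping_distance(speed, reaction_s=1.0)     # assumed human delay
computer = stopping_distance(speed, reaction_s=0.2)  # assumed sensor delay
print(f"human: {human:.1f} m, computer: {computer:.1f} m")
# human: 27.9 m, computer: 16.8 m
```

Under these assumptions, the 0.8 s head start shaves roughly 11 m off the total stopping distance, which at 50 km/h is often the difference between an impact and no impact at all.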

            • The part you're getting wrong is that it is never appropriate to swerve.

              Tell that to the mother of the child you've just run over because they stepped out from between parked cars and you didn't swerve over into the other lane, you just slammed on the brakes and ran them down.

              Tell the mother of the child you just ran over that you didn't attempt to swerve because you might have dented the fender of the car in the next lane over.

              If you had time to check that it is safe to do so, you'd have time to stop.

              I'm sorry, but that's patently absurd. As a defensive driver, I see and keep track of vehicles coming my way in the other lane. I can't see a four ye

        • by mayko ( 1630637 )

          Exactly this.

          As a matter of fact, the computer will know about the problem long (hundreds of milliseconds) before you see it and will already be reacting.

          The idea that you could react faster or make a better critical decision than the computer is sort of funny actually.

          I agree that a computer will certainly be able to react faster in the event of a sudden unexpected obstacle, but what about my human ability to see children playing near the road or someone who isn't looking my direction but hasn't actually stepped off the curb? I can preemptively slow my vehicle just-in-case. A human can quickly detect things about their surroundings that computers aren't yet capable of.

        • Exactly this.

          As a matter of fact, the computer will know about the problem long (hundreds of milliseconds) before you see it and will already be reacting.

          The idea that you could react faster or make a better critical decision than the computer is sort of funny actually.

Like those web-crawling bots that so easily fill in the captcha boxes? Matter of fact, they must be filling them in before I even start to process what is happening! More like it will be a true revolution when sensor fusion algorithms work half as well as distracted drivers. We are still decades away from even writing algorithms capable of interpreting patterns half as well as an average Joe.

        • What if "obstacle B" is a cardboard cutout of Mickey Mouse and "obstacle A" is a person? Maybe you actually could make a better decision sometimes.

          Actually, the idea that computers always will make a better decision than a person is hilariously optimistic. Sure, they will make a faster decision, which most of the time will avoid both obstacles, but that's not always the same as better.

          There's no doubt in my mind that self-driving cars will improve traffic immeasurably, because of quicker reaction times, bu

        • by JustNiz ( 692889 )

          I may not be able to (re)act faster but I think real world situations still exist where I could make a better decision than a chip.

      • by 0123456 ( 636235 )

        Instead, wouldn't the car be programmed to avoid ALL obstacles and apply the brakes with maximum efficiency?

        Yes, because cars can always stop instantly when a kid runs out in front of them.

Ultimately, someone's going to end up programming the 'what to do when you have a choice between hitting a kid or a bus full of nuns' case, either intentionally, or as a consequence of general obstacle programming.

I think he's referring to an old philosophical question.
  The classic example is: you're conducting a train. You come around a bend, and there's a track split. On track A is, say, a person. On track B is, say, two people. You don't have time to brake. All you can do is pick which track you take. Which one do you take?

        What if Track B has five people? One child? A world-class doctor who saves lives? A scumbag criminal? Your wife?

        So, say you're in a self-driving car. The car wants to make a left turn across

        • The car wants to make a left turn across traffic at a four-way intersection. So it advances into the intersection,

          The car has just broken the traffic law. It has entered an intersection prior to having an ability to complete the action and leave the intersection. In some places this is made explicit (large cities, e.g.) by painting a box in the intersection and actually ticketing people who cause "gridlock".

          Your autonomous vehicle will sit patiently at the stop line until traffic clears enough to be able to complete the left turn, even if that means it never completes the left turn. A human driver would do as you sa

        • That is pretty easy. You don't "go" anywhere, you make sure your arms and face are in a position to minimize injury when the airbag deploys, if the semi in fact doesn't stop in time. Duh.

          And as a moral thought exercise, you don't address the issues raised in the train situation. That exercise is based on the fact that you can't stop the train; there is no neutral course of action. In your situation, you do have a neutral course of action that doesn't involve killing anybody, and so there is no actual moral

      • Oh your car chose to kill a kid on a bike instead of hit an old person crossing the road?

        What? Why do you think that a car would be programmed to hit "obstacle B" when "obstacle A" appears in front of it?

        Instead, wouldn't the car be programmed to avoid ALL obstacles and apply the brakes with maximum efficiency?

        They're confused into thinking this way because their driving practices are so dangerous, they actually plan ahead to swerve around an obstacle instead of stopping before hitting it. These are exactly the idiots who will stop running over Jr and Grandma when they switch to a self-driving car that will follow the DMV mandate and stay in the same lane and stop before hitting anything.

        I kinda think the traffic engineers are right about this one. If you had time to "choose" what to hit, you'd be driving a safe

        • They're confused into thinking this way because their driving practices are so dangerous, they actually plan ahead to swerve around an obstacle instead of stopping before hitting it.

          Defensive driving requires thinking ahead of time what one would do when presented with a surprise on the road, such as a child popping out from between parked cars. You cannot say "I plan on stopping before hitting a child that does that" because you cannot plan on it happening far enough ahead of you that you could stop. The best drivers will ALWAYS think ahead far enough that they know "should I be unable to stop from hitting a child in the street, there is an open space I can use to avoid it."

          It's cal

      • by JustNiz ( 692889 )

        So you seriously think there could never be a situation where the world didn't behave as planned and it would have to make a choice between "Evil A" and "Evil B"?

        • So you seriously think there could never be a situation where the world didn't behave as planned and it would have to make a choice between "Evil A" and "Evil B"?

          Be calm, citizen. The autonomous vehicle will make all such choices for us, and the autonomous vehicle will get the answer right 100% of the time. You never again need to worry about what was right or prudent or safe, the decision will be made for you. People who think about this all day, every day, for a year will design and accurately code the correct responses to all such possibilities, and you cannot possibly know a better answer as a non-engineer.

          Please enjoy your trip in comfort and forget about the

      • by bsolar ( 1176767 )

        Because brakes might not be enough. Then the decision becomes: "do I hit the obstacle or attempt to dodge it?" Hitting the obstacle might mean a lower risk for the passenger of the car, but a huge risk for the obstacle. Attempting to dodge might mean a higher risk for the passenger of the car but a much lower risk for the obstacle.

        If the obstacle is a dog you might want to prioritise the safety of the car's passenger, but if the obstacle is a kid you might want to attempt to dodge even if the passenger is pu
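The parent's trade-off can be phrased as an expected-harm comparison. A toy sketch, with completely made-up probabilities and severity weights (no real system is known to use these numbers):

```python
def expected_harm(p_impact: float, severity: float) -> float:
    # Crude expected-harm score: chance of impact times how bad it is.
    return p_impact * severity

def choose_action(obstacle: str) -> str:
    # Severity weights are pure illustration: a person counts far more
    # than an animal, and swerving risks the occupant (weight 2.0).
    severity = {"dog": 0.5, "child": 10.0}[obstacle]
    brake = expected_harm(p_impact=0.6, severity=severity)  # hit the obstacle
    dodge = expected_harm(p_impact=0.3, severity=2.0)       # risk to occupant
    return "brake in lane" if brake <= dodge else "swerve"

print(choose_action("dog"))    # brake in lane
print(choose_action("child"))  # swerve
```

With these illustrative weights the car brakes in lane for the dog but swerves for the kid, exactly the asymmetry the comment argues for.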

  • I can't imagine the big auto's pushing for any other policy. What better way to ensure that no new competition ever emerges. No more pesky start ups like Tesla showing up and disrupting things, nope if you are not already established with billions of dollars in assets, you need not apply.

    • So you're saying that new car makers won't be able to start up, and your argument is that Tesla drivers can't afford to buy their own insurance? Or that companies like Tesla, which exists only because a rich guy dropped a giant wad of billion dollar bills on it, won't be able to exist because you'll need billions to start a car company? What the hell, man, I know Thursday is the new Friday but you posted at 10:18am and it is way too early to be that drunk.

  • This is definitely a context where Volvo's obsession with safety and their reputation for it will pay off in spades, both with regulators and the marketplace.

  • by c ( 8461 )

    I think I'm going to need to see the fine print on that promise... the words "$corporation will accept liability" are rarely written without conditions a truck could drive itself through.

    • I think I'm going to need to see the fine print on that promise... the words "$corporation will accept liability" are rarely written without conditions a truck could drive itself through.

      My advice is to not waste time on that. You'd be better served researching your State's liability insurance requirements. You might discover that if it meets the requirements of insurance, they have to pay and the Court doesn't care about their excuse. That's what liability insurance is; you screwed up, hurt or killed the other person, and now your insurance pays them. Saying it was really your fault because of fine print, that isn't a scenario that gets them off the hook.

      OTOH if it doesn't meet the require

  • You can't trust these European car makers. The fine print will say "contract valid only when driven on test tracks instrumented by Volvo a priori"
    • Yes, because GM has shown such a great social conscience when it comes to their customers' safety.

    • Fact of the matter is, providing comprehensive insurance as part of the package for buying a new car is actually a thing over in Europe. They're generally economy shitboxes that a person could probably stop with a hard shove, but a lot of new/bad drivers end up buying said new cars because it's the cheapest option - even cheaper than buying a used one, because the insurance costs are so high otherwise.

      Liability for a new driver overwhelms the expense of everything else. If the self-driving cars have half the *aver

  • As more companies adopt this "we cover it" guarantee, they're going to start having more "costs" dealing with human drivers. Accidents, time/speed inefficiencies (vs. following the letter of the law), etc.

    As a result, they'll start pushing the driverless cars harder. For those they can't convince with the carrot of incentives to make the switch, they'll eventually pull out the stick of passing the costs to those human drivers through various lobbying channels - forcing them to deal with higher insurance pre

  • What this really tells us is how the justice system values human life.

    Say someone is killed by a self-driving car in a way that's obviously not a hard-to-avoid accident but a clear malfunction of the device. One might expect the liable party to face fairly astronomical damages for designing and marketing a killing machine. But we know they won't. They'll say "That guy made $30k/year and was 10 years from retirement. Here's $300k. We're even." And the courts will say, "Ya, that sounds fair." Maybe a few pe

    • And that thinking will end up being that it's better to kill someone than to leave them alive with long-term medical care costs.

  • Even without self driving cars, it is quite possible that you can be found to have 0 liability for injuring someone:

    It was dark, and you were driving with your headlights on the highway. As you turn the corner a small kid is out on the street chasing after her ball. You slam the brakes, but you still hit her. Your car was in full working order, and you reacted as fast as reasonably expected. Good chance that the judge finds no one liable, or maybe the parent of the kid for letting them be in a dangerous situation
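Whether "you reacted as fast as reasonably expected" comes down to simple stopping-distance physics: reaction distance plus braking distance. A rough sketch, assuming ~7 m/s² dry-road deceleration and a 1 s human reaction time (illustrative figures, not legal standards):

```python
def stopping_distance_m(speed_mps: float, reaction_s: float = 1.0,
                        decel_mps2: float = 7.0) -> float:
    # Distance covered while reacting, plus braking distance v^2 / (2a).
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def can_stop(speed_mps: float, obstacle_m: float) -> bool:
    return stopping_distance_m(speed_mps) <= obstacle_m

# At 50 km/h (~13.9 m/s) a human driver needs roughly 28 m to stop,
# so a child appearing 20 m ahead cannot be avoided by braking alone.
print(can_stop(13.9, 30.0))  # True
print(can_stop(13.9, 20.0))  # False
```

If the child appears inside that envelope, no amount of diligence stops the car in time, which is exactly why the judge may find no one liable.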
