AI IT Technology

Who's Liable For Decisions AI and Robotics Make? (betanews.com) 180

An anonymous reader shares a BetaNews article: Reuters news agency reported on February 16 that "European lawmakers called [...] for EU-wide legislation to regulate the rise of robots, including an ethical framework for their development and deployment and the establishment of liability for the actions of robots including self-driving cars." The question of determining "liability" for decisions made by robots or artificial intelligence is an interesting and important subject as the implementation of this technology increases in industry and starts to more directly impact our day-to-day lives. Indeed, as the application of artificial intelligence and machine-learning technology grows, we are likely to witness how it changes the nature of work, businesses, industries and society. And yet, although it has the power to disrupt and drive greater efficiencies, AI has its obstacles: the issue of "who is liable when something goes awry" being one of them. Like many protagonists in industry, Members of the European Parliament (MEPs) are trying to tackle this liability question. Many of them are calling for new laws on artificial intelligence and robotics to address the legal and insurance liability issues. They also want researchers to adopt some common ethical standards in order to "respect human dignity."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Tuesday March 21, 2017 @03:42PM (#54084061)

    I don't think anyone has ever considered this issue. Ever.

    • The people who designed and tested the AI.

      What is so novel about that?

      • The elected politician, judge, or senior executive who approved the use of that category of technology in the category of application where the problem occurred ought to be the one held responsible.

        Software, especially self-learning AI software, is too complex and unpredictable (in the details of its operation in every case).
        Careful programming and testing cannot cover the range of possibilities, because the input data and system state are too (combinatorially) complex.

        It's the senior decision maker who ways the risks and

        • "decision maker who weighs the risks"

        • Why blame politicians for allowing the deployment of technology that would be safer than human drivers overall, even though some accidents will inevitably occur?
          • The liable party is defined in the purchase contract. It could be anyone who agrees to it: the end user, the developer, or whoever sets the parameters for operation.
        • The elected politician, judge, or senior executive who approved the use of that category of technology in the category of application where the problem occurred ought to be the one held responsible.

          Terrible answer. Nothing will ever be approved if the approvers are afraid of being sued into personal bankruptcy.

          If AI means "autonomous system": Whoever manufactures and certifies them for public use should be liable, barring specific and well-documented misuse/misconfigurartion. Let the corporations assess the risk/reward themselves.

          If AI means "self-aware, intelligent system": Not a problem I expect to worry about in the foreseeable future, but when it happens the AI can be liable instead of the manufac

          • compared with perception and semantic understanding of general external environment.

            I'm talking about functional self-awareness here (i.e., behaviour indicating self-awareness), not the consciousness hard problem (qualia).
            A "feeling" of self-awareness is not necessary for functional behaviour due to self-awareness. Whether a feeling would emerge is a separate question, not that important because we can't prove that we ourselves have it. We only assume that other people are not zombies because it's a simpler supp

            • What you are talking about isn't self-awareness. You can try to redefine what awareness is for your own purposes, but AI as we know it today is far, far away from self-awareness. Interacting with the environment is not self-awareness.
              • Encode information about the environment:
                - into an associative memory with abstraction (including event/situation abstraction) and probability encoding, and
                - with meta-knowledge tagging, such as estimates of belief strength and knowledge completeness about different general and specific topics and situations.

                - Have a model of my generic and specific needs/wants/avoids, i.e. desired and undesired environment states and evolution patterns.
                - Process the environment state/history/projection model (including generated counterfactual
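
                A purely speculative sketch of the kind of structure described above; every name here is invented for illustration and does not refer to any existing system:

                    from dataclasses import dataclass, field

                    @dataclass
                    class BeliefEntry:
                        content: str            # abstracted event/situation, e.g. "door is locked"
                        probability: float      # encoded likelihood that the belief holds
                        belief_strength: float  # meta-knowledge: how well-supported the belief is
                        completeness: float     # meta-knowledge: how complete knowledge of this topic is

                    @dataclass
                    class AgentSelfModel:
                        memory: list = field(default_factory=list)   # associative memory of BeliefEntry items
                        wants: set = field(default_factory=set)      # desired environment states
                        avoids: set = field(default_factory=set)     # undesired environment states

                        def evaluate(self, projected_state: str) -> str:
                            # Compare a projected (or counterfactual) future state against own wants/avoids.
                            if projected_state in self.avoids:
                                return "avoid"
                            if projected_state in self.wants:
                                return "pursue"
                            return "neutral"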

      • The people who designed and tested the AI.

        What is so novel about that?

        Welcome to earth, but I'm sorry to report Microsoft will invade your planet RSN.

        You obviously didn't read your EULA covering your new AI robot. You don't own anything and whatever goes bad, the badness is NOT the fault of "the people who designed and tested the AI". When you signed the lease or whatever to use the monster, you agreed they are all innocent.

        Liability is such a quaint old idea. You wouldn't want to bankrupt Microsoft by holding the company liable for all the damages caused by their little mist

        • It doesn't matter what your EULA says about the car you purchased from Microsoft/Google/Tesla/Apple. I was not party to the agreement, and if your car hurts me or my property, I can sue whoever I want, including the car manufacturer, the dealership that sold it, and you. I can sue all three at once if I want. And there's nothing anyone can do about it, unless there is a law that restricts me from doing it (like they did with gun manufacturers).

          Your EULA may have a clause where you agree to indemnify and d

      • So you want to sue the engineers? Personally?
        • That probably works the same as with engineers designing faulty car parts: the company that employs them is held liable (and the engineer can likely kiss his Christmas bonus and/or job goodbye).
      • Well it won't be the owners. They just get the profits.

      • Exactly. The manufacturer is responsible for using technology wisely in its own products. It is liable for failing to make the right decisions about where, when and how AI should be incorporated into a product and what kind of AI should be embedded in it. The wide range and diversity of technology, existing and to come, would prevent lawmakers from keeping legislation current for each technology if they had to do so. The lawmaker is not in the best position to weigh the advantages and disadvantages of using AI
  • by david.emery ( 127135 ) on Tuesday March 21, 2017 @03:47PM (#54084089)

    and I've been calling for professional licensing and liability for software engineers for at least 30 years. That should follow the approach for other Professional Engineers, including the use of 'engineering practices' as a defense.

    The software community has done an appallingly shitty job with software reliability. (Exhibit 1: CERT database of software vulnerabilities.) It's way past time they get held accountable. And yeah, this will slow things down and require people to do things right the first time, and it will put a serious dent in the management approach of "throw the cheapest bodies at the software problem, and damn the bugs!" Product liability needs to include both corporate and individual liability.

    • With a union and a real trade school system.

      A lot of CS professors have been in the ivory tower for way too long and have very little real-world workplace know-how about the workings of IT / coding.

      • Structural engineers (my father was one) don't have a union and they don't get their degrees from trade schools.

        They do have a professional society that is fully engaged in licensing, educational standards for engineering curricula, and a career path that leads to the necessary experience to qualify for a license, along with testing. They also collaborate with the state Engineering licensing agencies.

        On the other hand in computing, at least some of the professional societies have actively argued against li

    • and I've been calling for professional licensing and liability for software engineers for at least 30 years.

      We already have professional licensing/certificates such as MCSE, MCP, etc. They are negatively correlated with competence.

      And yeah, this will slow things down and require people do things right the first time

      It will also greatly increase the cost, and be the end of free software.

      Programming should not be a crime.

    • Even the best professional certifications and best practices will not prevent accidents. They are inevitable. The question remains. Who is liable? Especially in the more interesting case of a collision of two self crashing cars. What if the developers of both cars were sufficiently careful and not negligent?

      The case of a car and pedestrian is less interesting because it is obvious that the liability would probably be assigned to the car manufacturer. But what if the auto maker exercised due care in
      • A human can *always* avoid an accident. If you were driving 30 MPH then maybe you should have been driving 10. By contrast, a human in an accident with a fully automated car can never avoid it.
    • I'm not sure how familiar you are with safety critical software and systems (you see it all the time in aviation), but there's actually a pretty well defined process for the entire thing. I'll make a really poor attempt at summing it up:

      - A hazard analysis is performed on the system by various engineers (and occasionally even a 3rd party is brought in for peer review). There are a multitude of different ways to go about it, but eventually you end up with a long list of ways the product could fail, with a
      • The problem is coming up with the requirements for the hazard analysis. An airplane operates in a very controlled environment. Controlled taxi and runway, controlled airspace. It is done this way so that it is possible to limit the set of requirements such that all are covered. How do you come up with requirements for a hazard analysis on a heavy machine that can be anywhere in the world at any time, driving at any speed? Your set of conditions that the vehicle will encounter are almost limitless.
        • How do you come up with requirements for a hazard analysis on a heavy machine that can be anywhere in the world at any time, driving at any speed? Your set of conditions that the vehicle will encounter are almost limitless.

          You can still do it. From the requirements side, define some reasonable operating conditions and the behavior if it detects itself leaving those conditions. From the safety analysis side, there are multiple methods that are usually used in concert. Generally it'll start with a top down analysis of the energy sources (fuel, kinetic energy in a big moving vehicle, batteries etc.) and work your way down to specific and reasonable failure modes. Then there are a variety of other analysis methods to supplement
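
          A hedged illustration of the flavor of that process, not any real program's artifacts: a toy hazard-log entry whose severity/likelihood categories loosely imitate MIL-STD-882-style tables, with invented names (including the requirement ID) throughout:

              from dataclasses import dataclass

              SEVERITY = {"catastrophic": 4, "critical": 3, "marginal": 2, "negligible": 1}
              LIKELIHOOD = {"frequent": 5, "probable": 4, "occasional": 3, "remote": 2, "improbable": 1}

              @dataclass
              class Hazard:
                  description: str    # e.g. "loss of braking while in motion"
                  energy_source: str  # top-down analysis starts from the energy sources
                  severity: str
                  likelihood: str
                  mitigation: str     # requirement that must trace back to this hazard

                  def risk_index(self) -> int:
                      # Higher index means more rigorous mitigation and verification before release.
                      return SEVERITY[self.severity] * LIKELIHOOD[self.likelihood]

              h = Hazard("loss of braking while in motion", "kinetic energy",
                         "catastrophic", "remote", "redundant brake actuation, REQ-BRK-017 (hypothetical)")
              print(h.risk_index())  # 8: drives how much analysis and verification this failure mode gets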

          • So how does any of this prevent an automated car from going the wrong way down a one-way road, or from running red lights like the Uber cars? How does this prevent a Google car from changing lanes into a bus because it was trying to drive around a sandbag in the road?
            • It doesn't guarantee bad things won't happen anymore than following good engineering practices guarantees that a building won't fall over. What it does do is guarantee the ability to trace back to a root cause of why the bad thing happened and pin the responsibility on the appropriate party. As a result, developers have motivation to make sure they're not taking shortcuts, and additionally have ammunition to push back on management if they're ordered to take shortcuts or ignore potential issues. From the ma
      • Actually, I have developed software applying DO-178B (although not to Level B/A) for air traffic control, and to Mil-Std 882D for a project that included networked fires and autonomous potentially armed vehicles. On the latter project, I was the lead software safety person for a while.

        And to the follow-on comment: Yeah, there's a lot of work to get the hazards and the requirements down to the level that verification against those has real impact. That's part of the job.

        • So do you think that the methodology laid out for safety critical development would work for AI development as far as chain of responsibility goes? That was actually one of the questions that came up in the software system safety course I took, and unfortunately never got a very good answer (I don't think the instructors really understood how machine learning works well enough to form a good opinion).
        • Well first, the techniques for high confidence software (such as safety-critical) show that -substantial- improvements in software reliability are possible. For commercial avionics, the verification costs are much more (maybe even an order of magnitude) than the development costs. The ROI for verification (such as MCDC) has been questioned (whether the additional surety gained is worth the cost to get it.)

          What's most important is the culture of assurance, i.e. 'think before you write', at least anecdotall

  • by thinkwaitfast ( 4150389 ) on Tuesday March 21, 2017 @03:49PM (#54084105)
    and burns your house down due to faulty wiring?

    Robots have been with us for more than half a century.

    • by JustNiz ( 692889 )

      Yes but only in controlled/limited environments. They mostly haven't been out in the wild en masse, or performing tasks with nearly the complexity we are now getting them to do, or with nearly as much risk if it goes wrong. e.g. Driving our cars.

      • I have a robotic oven, microwave, dishwasher, washing machine and HVAC. Come to think of it, so did my grandparents.
        • by JustNiz ( 692889 )

          Kind of reaching there, dude. A basic timer is hardly what most people mean when they think of the phrase "intelligent robot".

    • by nine-times ( 778537 ) <nine.times@gmail.com> on Tuesday March 21, 2017 @04:14PM (#54084299) Homepage

      You're right. Ultimately, this is not a new problem. The question boils down to, "Who is responsible when a product malfunctions?"

      However, there is a relevant shift in liability that needs to happen. Basically, there are certain products where the manufacturer is only responsible for the product being able to operate safely, while the operator of the product is also partially responsible for operating it safely.

      For example, Toyota may have legal liability for a manufacturing defect that causes the brakes to stop working, but Toyota isn't responsible for a car crash caused by an unsafe driver. Once you have self-driving cars, that needs to change because the "driver" cannot be held responsible. Obviously the manufacturer needs to take on greater liability, but there also may be situations where that's not really practical either. There may still be things that the car's owner or passenger could do to cause an accident. For example, if the owner modifies the car or fails to perform maintenance, and that causes the AI to malfunction, the owner should probably still be held responsible. Or there will certainly be some accidents that just happen, and aren't really anyone's fault.

      And the particulars of all that need to be codified into law. We have hundreds of years of laws dealing with carriages and cars, but some of those may shift when the car is autonomous. What, exactly, is the car's manufacturer responsible for, and what is the owner responsible for? How do we determine whether an AI is adequate to make the necessary decisions, and how will inspections be carried out? These are things that need to be thought about.

      • by crtreece ( 59298 )

        that needs to change because the "driver" cannot be held responsible.

        Your use of quotes there is interesting and important. The people inside a fully autonomous car are passengers, not drivers. If someone is driving a car, it's not autonomous. Essentially, the manufacturer *is* the driver. Their programming, sensors, algorithms, and maps are what is used to control the vehicle; the passenger isn't involved. If I'm not in control of a vehicle, I have no intention of being liable for its actions.

        if the owner modifies the car or fails to perform maintenance, and that causes the AI to malfunction, the owner should probably still be held responsible.

        Another interesting choice of wording. I don't think manufacturers will sell

        • The people inside a fully autonomous car are passengers, not drivers.

          Actually I put it into quotes in that instance because I was referring to the AI as the "driver". But an AI can't be fined or arrested, so someone else will need to be held responsible.

          I don't think manufacturers will sell fully autonomous cars.

          I agree that fewer people will buy cars, and that it may eventually become relatively rare for an individual to buy a car for their own personal use. Still, presumably someone will own the cars, and it may not be the manufacturer. You may have services like Uber buying cars from a company like Tesla. There may be companie

  • civil vs criminal as well. Where things are different.

    And in a criminal case they can't hide under an NDA or EULA

  • Primary liability for a robot's actions is with the owner... Case closed....

    Now, the owner may have a liability claim with the maintainer, installer and/or manufacturer should the robot not function as designed, but that's another case.

  • ... for the actions of their pets. The owner.
    • A responsible owner trains their pet, but they can't train their automated vehicle. Therefore this is incorrect.
      • by mark-t ( 151149 )
        My point is that pets are property, robots and AI's are property.... their actions are the responsibility of the owners, even if the owner had no actual control over what they did. In the case of robots and AI's that fail to perform as advertised, the owner may in turn have a legitimate claim against the manufacturer (and in some cases, the lawsuit may transfer directly to the manufacturer leaving the owner out of the loop entirely), but if the manufacturer has already disclaimed any such responsibility
        • The manufacturer is selling you a product that is advertised to make its own decisions. Not even a pet is expected to make its own decisions. Autonomy is pointless if you are responsible for damage that it does even when you are using it correctly. I'll drive myself, make my own decision about the speed I drive with regards to safety, and accept the repercussions of my *own* actions thanks. I expect most people will.
          • by mark-t ( 151149 )
            You have a far higher standard of expectation upon people than I fear is actually deserved.
            • I know that I can drive 10 MPH everywhere if I really want to be safe, and that it is my choice every time I drive. I don't do it except when I feel it is absolutely warranted, but the point is that in a manual car I can be absolutely safe if I want to be. Since this ability is being taken out of my hands when I use an automated car, then I expect the liability will not be mine. From a legal and financial perspective, it doesn't really matter what speed people end up driving and how many accidents they g
  • At least, that's what the battalions of lawyers will argue.

  • Every permutation of possibilities that, say, an automated 3-ton car driving at speed might encounter in the massive complexity of the unpredictable open world simply cannot be anticipated, let alone exhaustively planned for and tested to rule out any accidents, and therefore accidents cannot realistically ever be the fault of the engineers.

    The only sane approach is to require full-coverage insurance for each robot in the wild. Let the market itself determine the actual usage of robots based on the trade-offs between total costs (includin

    • But if the people using the vehicles pay for the insurance then they are already being made responsible for something they have no control over.
  • Isaac Asimov, for improperly formulating the three laws of robotics. If he'd gotten them right, none of this would be necessary.

  • Avoidance: I built my own car from scratch with nothing but hand tools! It took me 20 years!

    Respecting risks: I built a robotic assembly line that produces 50 cars a day. I must observe, and be aware that this is a very dangerous piece of tech, and be sure my employees understand that they are paid handsomely to do the same while working near it.

    Other: My assembly line riveted my hands to my widget! It's not my fault, I never *REALLY* accepted the risks of automation....

    Avoidance: I'll never trust a self dr

  • Why on earth would The Who be liable? Oh, wait, it's a question, not a statement.

  • Human and dignity are not words that go well together. Humans are on the nasty side and tend to ruin all that surrounds them. A person has a natural right to expect that any device follows the designs and safety rules that industry leaders use to keep people safe. No system will ever be perfect, and anything meaningful will have built-in error potential, just like a brand new and expensive tire can explode and cause the death of numerous people. But when we allow insurance companies and other third part
  • and the corporate executives that run the company, and the stockholders
  • Same question as who's liable for decisions pets make. When your dog escapes from the house and causes carnage in a nearby kindergarten, you are liable. If you feel you're not cut out to control the things you own so that they don't get out of control, don't buy a dog or a robot.

    • But it is your fault that the pet escaped. No one expects a domesticated animal to be making independent choices, yet that is exactly what an automated car will do. For the first time in world history there will be machines making independent choices. I should be no more responsible for an AI car than I am for a bus that gets into an accident while I'm riding it.
  • they follow an exact set of instructions, and they will follow these exact instructions every time they come to the same situation.
    • So if I want the car to go to New York, it will follow the same instructions as if I want to go to the grocery store down the street? That will be inconvenient.
      • No, that would be a different situation, the difference being an input of "grocery store" vs. an input of "New York".
        • But the car has to make a decision on where to turn based on applying your input to your current location.
          • Given identical inputs, two cars act identically. What they would do is known a priori, and they are simply following a set of instructions, the same as a player piano.

            The piano doesn't decide when to play certain notes; it follows a set of pre-planned instructions.

            • Not necessarily. If there is construction with a detour on the way to New York for one vehicle and not the other then it would be problematic if they both followed the same instructions. On your average drive even a few blocks there may be obstacles that one vehicle has to deal with but not the other.
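
              To put the "player piano" point and the detour objection in code terms, here is a minimal, purely hypothetical sketch (the function and its inputs are invented for illustration): the program is deterministic, but "the same situation" has to include every input (destination, position, and the current road conditions), so two identical cars diverge only when their inputs differ.

                  def plan_next_action(destination: str, position: str, obstacles: frozenset) -> str:
                      """Pure function: identical inputs always yield the identical action."""
                      if position == destination:
                          return "stop"
                      if "construction" in obstacles:
                          return "take detour"  # a detour is just a response to a different input, not a free choice
                      return "continue toward " + destination

                  # Two cars running identical code:
                  car_a = plan_next_action("New York", "I-95 mile 12", frozenset())
                  car_b = plan_next_action("New York", "I-95 mile 12", frozenset({"construction"}))
                  print(car_a)  # continue toward New York
                  print(car_b)  # take detour: different behaviour, but only because the inputs differ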
  • I see no difference between designing calipers for brakes and putting them in a car or designing AI and putting it in a car. If the brakes are installed correctly but the calipers don't stop the car, the driver is not held liable. Likewise with AI. The manufacturer has sold the car with it, barring any kind of outside force the manufacturer had no control over, the manufacturer should be responsible for what happens. If 2 billion lines of code is too advanced for you to certify as safe, then don't certi
  • Only in the case of K9

  • As far as I know, the EU Parliament has no power here. This will be one more non-binding resolution.

    But I assume it is at least better than nothing that some people are working on the problem.

  • I read one proposal which suggested that liability insurance might be bundled with autonomous vehicles as a marketing tool. Or perhaps an optional feature like leather seats & a sun roof. That seems like a really good idea to me. It would certainly answer this question about who is responsible for an accident. As a selling point, it would make expensive autonomous vehicles extremely attractive to drivers considered to be "high risk" by insurance companies. For someone with multiple accidents &

  • CowboyNeal

    Why has no one thought of the obvious and correct answer?

  • personal responsibility for decisions corporations make [wcvb.com].

    "As horrible as each of these stories is, there is nothing that shows that Mr. Cadden did something that the government can link to the death of that person."

    There are many examples of cases where a corporation kills people, but, magically, no one person is found guilty of murder, when it was clearly murder.

    Oh, I guess it's nobody, because it was done in the context of a business!

    We have to solve that problem first. And the question doesn't change just because you add "with robots" or "with technology".
