
Contradictory Understandings of "Robot" Sow Confusion In US Law (medium.com)

Hallie Siegel writes: A new paper covering 60 years of robotics in American case law shows that a growing mismatch between how judges think about robots and what contemporary robots can actually do is producing inconsistent treatment of robots in the courts. Interestingly, much of this confusion comes down to the definition of the word robot; dictionaries' definitions often contradict one another. The article argues that lawmakers and policymakers need to work more closely with technology experts to develop a more nuanced understanding of robotics, lest new technologies overwhelm our legal systems.
This discussion has been archived. No new comments can be posted.

  • More importantly... (Score:4, Interesting)

    by Fire_Wraith ( 1460385 ) on Thursday March 10, 2016 @10:18AM (#51671167)
    What I want to know is how US law views various other robot-like devices. For instance, is a giant robot that's piloted by a human considered a robot?
    What about a tele-operated robot, or a waldo?
    Likewise, is a drone considered a robot? At what degree of autonomy does it become considered one?
    • by bws111 ( 1216812 ) on Thursday March 10, 2016 @10:32AM (#51671227)

      Those questions are pretty much what the paper is about. One of the examples given was a marine salvage case, where a salvage team found a shipwreck that was about a mile and a half deep. It was too deep/too dangerous to dive, so they sent some autonomous subs down. Then they went to court to keep other salvagers away. The court decided that because the people were right above the wreck, and sending humans down was dangerous, the robots could stand in for the humans. The other salvagers were ordered to stay away, just like they would be if humans were diving. But it then raises the question: what if the humans had been on land? Would it still count? Tricky questions.

      • by Xest ( 935314 ) on Thursday March 10, 2016 @12:11PM (#51671733)

        That sort of example strikes me as a classic case of legal overthinking of the problem, though. It sounds like the case revolved around nonsense such as whether humans were present near or at the wreck, when the only legal question that actually needed answering was "Should first finders get first dibs on the wreck?". There's no real complexity added to the mix by robots, merely legal gymnastics introduced by lawyers because they think they can spin their argument with a new thing - first it was "on the internet", now it's "robots". There's only a problem because people are making money from making a problem, not because there actually needs to be one from this specific change in technology.

        I've seen similar questions posed around autonomous cars and what the legal liability should be, but the rules of the road are already well established: if your autonomous car illegally drives the wrong way around the roundabout and crashes into someone, then you should still legally be at fault. You may wish to try and sue the car manufacturer, but that's really no different to the status quo - if you're in a car and the fucking wheel falls off, making you crash into someone, you're still at fault from an insurance perspective; but if it fell off because it was defective, you make a claim against the manufacturer.

        So I think the only legal complications come from broken law and legal gymnastics, and if anything robots will force us to tidy up the law so that we don't argue "You can't dive this wreck, it's not safe!" but instead argue in court about what they really mean - "You can't dive this wreck, because we found it first!".

        Beyond that I don't really see robots needing to add much complexity to the law; they're a tool like any other and should be treated as such in law (at least until strong AI and conscious robots enter the fray, if that ever happens). Just as if you pull the trigger on a gun whilst pointing it at someone you can't say "I just pulled the trigger, it was the gun that ignited the gunpowder and launched the bullet into him so it's the gun's fault!", any more than you can say "I know I pressed the go-forward-at-100mph button when he was in front of my car, but it was the robot that actually drove into him!". Levels of culpability degrade as they always have - if you explicitly told the robot to drive at someone it's murder; if you told the robot to drive 40mph in a 30mph zone and it hit someone then it's manslaughter; if the robot hit someone all by itself without it being anything to do with you, and you did everything right legally, then it's a tragic fucking accident that the victim's family can sue the manufacturer of the faulty tool (robot) over. I don't see why the law needs to change whether the tool is, say, a runaway non-robotic tractor, or an automated robotic one.

        (Adjust my post as necessary for whatever your current local laws say for determining different levels of culpability)
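        The culpability ladder in the parent comment can be caricatured as a lookup. This is a sketch only - the predicate names are invented here, and real law turns on far more than two booleans:

```python
def culpability(intended_harm: bool, operator_negligent: bool) -> str:
    """Sketch of the comment's ladder; predicate names are invented."""
    if intended_harm:        # explicitly told the robot to drive at someone
        return "murder"
    if operator_negligent:   # e.g. told it to do 40mph in a 30mph zone
        return "manslaughter"
    # Operator did everything right: fault lies with the faulty tool.
    return "accident; civil claim against the manufacturer"

print(culpability(True, False))   # -> murder
print(culpability(False, True))   # -> manslaughter
print(culpability(False, False))  # -> accident; civil claim against the manufacturer
```

        The point of the sketch is the comment's own claim: nothing in the ladder depends on whether the tool is a robot.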

        • by Blue23 ( 197186 ) on Thursday March 10, 2016 @12:24PM (#51671811) Homepage

          That sort of example strikes me as a classic case of legal overthinking of the problem, though. It sounds like the case revolved around nonsense such as whether humans were present near or at the wreck, when the only legal question that actually needed answering was "Should first finders get first dibs on the wreck?".

          I think perhaps you are oversimplifying the problem. If I were to theorize 2352 potential shipwreck locations based on satellite imagery and publish them, am I the "first finder"? If I get odd sonar pings but don't follow them up, am I the "first finder"?

          I believe maritime law may specifically want to grant salvage rights to the first people at the wreck, and anything else opens doors to abuse. Can anyone who actually knows it speak to this?

          • You are overcomplicating again.

            First finders are those first to physically be in the immediate physical proximity (effectively touching distance) to the wreck and confirm its existence and location visually. If you didn't actually go to the wreck (be it in person or with a remote device under human direction) and confirm it is what you suspect, you didn't find it, you only had evidence of its possible existence.

            • that should be (be it in person or with a remote device either under human direction or autonomous)

              • by Anonymous Coward

                So, if I place an autonomous robot/drone into the ocean and let it crawl the ocean floor systematically, do I have salvage rights to anything it finds?

            • I think you're the one who's overcomplicating things.

              A "robot" is a machine under the control of a human being, and stands in the stead of that human being. Rational liability laws would hold the operator liable if the robot did damage, just like any other machine, because the robot is simply an extension of its controller.

              Reasonable laws, then, would also attribute "findings" and good deeds to the robot operator, since the robot, in exactly the same way, is standing in the stead of that human. A repr
            • You are overcomplicating again.

              First finders are actually the reptilian creatures with zip-on human skin who infiltrate and accompany human treasure hunting parties. When a discovery is made they unzip the skin and expose their true form, diverting the humans from their quest by eating out the insides then donning their skins to claim and salvage the loot. International law is much like international finance where the physical imbibing of flesh merges identity sufficiently to prove their claims in front of

          • If I were to theorize 2352 potential shipwreck locations based on satellite imagery and publish it, am I the "first finder"?

            Yeah, that can be handled with existing analysis. A belief is not the same as an action. Knowing where the wreck was never granted salvage rights; getting there and verifying it on-site always did. Sonar pings do not tell you what is actually there; they just give you a theory. If it was clear water and a shallow wreck, you could just look at it and be the first finder.

            It is a complicated problem, but let's not over-think how complicated it is; robots do not add to the complexity when you already have to cons

            • by HiThere ( 15173 )

              If sonar pings "do not tell you what is actually there, it just gives you a theory", then you can say the same thing about vision and touch. A good sonic image can be more detailed than a visual scan in cloudy water. So if looking at it counts, then so should sonic imagery. And nobody has mentioned detail. Which is a gradation with no sharp drop-offs or edges.

              As for touch... if I touch it with a telefactor, have I touched it? What if the telefactor is purely a kinesthetic sensor? Why would that count

              • If sonar pings "do not tell you what is actually there, it just gives you a theory", then you can say the same thing about vision and touch.

                No. The low quality of sonar data is a totally different issue from Plato's Cave, or the general existential fact that we do not directly perceive the world and that vision is processed and can be faulty.

                If you're willing to bring that in, you'd never be able to label anything as anything. Rule of Law does not go that route. ;) In Law, you only have to consider the likelihood that what you think you saw is really what you saw. It depends on specifics. In good conditions, the human eye is a very acc

          • by Xest ( 935314 )

            You seem to have disagreed with me, then agreed with me, so I'm not really sure what you're trying to get at. On one hand you've created a convoluted example about theorised maps and then you've pointed out that maritime law already stipulates that you have to get to the wreck, whether that's physically with a person, using a robot, or just using a massive winch off the side of a ship.

            This is already established in maritime law, so why overcomplicate it with meaningless examples about theoretical discovery?

        • > if you're in a car and the fucking wheel falls off making you crash into someone you're still at fault currently from an insurance perspective

          You are not at fault from any perspective (insurance or otherwise), unless it was your fault the wheel fell off. Now, if you knew the wheel was badly damaged, and chose to drive around anyway, that's another matter. If it then falls off and causes a crash and kills someone, you can be charged with manslaughter. But if it fell off because of a manufacturing def

      • After reading that paper, I'd like to bring up two different _concepts_.

        1: An automated, autonomous tool (human control limited to turn on/turn off or other minimal intervention)
        2: A tele-presence tool, or remotely controlled stand-in for an operator (human control dominating activity).

        The issue here seems to be that the law wants to call both of these "robots" in different contexts.

        The answer is simple: declare "robot" to be a vague and undefined legal term, and define two new legal terms for the two diffe
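        That split could be sketched in code. The class and term names below are invented for illustration (the law defines no such terms), and the single "fraction of control" number is a deliberate oversimplification:

```python
from enum import Enum

class LegalDeviceClass(Enum):
    # Hypothetical terms for the two concepts above; names are illustrative.
    AUTONOMOUS_TOOL = "autonomous tool"      # human control limited to on/off
    TELEPRESENCE_TOOL = "telepresence tool"  # human control dominates activity

def classify(human_control_fraction: float) -> LegalDeviceClass:
    """Classify a device by how much of its activity a human directly controls.

    The 0.5 threshold is an arbitrary placeholder; a real statute would
    need a far more careful line.
    """
    if human_control_fraction < 0.5:
        return LegalDeviceClass.AUTONOMOUS_TOOL
    return LegalDeviceClass.TELEPRESENCE_TOOL

print(classify(0.05).value)  # Roomba-style device -> autonomous tool
print(classify(0.95).value)  # waldo / tele-operated arm -> telepresence tool
```

        The grey zone the thread keeps circling (a drone that is piloted but lands itself) is exactly the region near the threshold, which is why any binary legal definition will be arbitrary somewhere.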

    • When a drone flies autonomous missions (piloted by the flight controller and mission parameters), taking off and landing on its own, it is technically considered a robot. I fly more robotic missions than I do drone piloting.

    • by Pseudonymous Powers ( 4097097 ) on Thursday March 10, 2016 @10:44AM (#51671267)

      What I want to know is how US law views various other robot-like devices. For instance, is a giant robot that's piloted by a human considered a robot? What about a tele-operated robot, or a waldo? Likewise, is a drone considered a robot? At what degree of autonomy does it become considered one?

      The real problem is that very few law schools in this country offer a Mecha Law program worth the name. I mean, it's gotten so bad that even the University of Phoenix had to discontinue their pre-Gundamology degree track for lack of interest. At the same time, the law profession's nativist/protectionist culture combined with our insane immigration policies (specifically Japanese immigration) have combined to create a severe shortage of qualified robotech lawyers in the United States.

      • We will have plenty of qualified robotech lawyers when the Zentradi invade...

        Talk about illegal aliens taking your jobs, these guys kill you and take everything you have.

    • by Kjella ( 173770 )

      Whether something is or is not a robot is really derailing the whole discussion anyway. Whether you fire a gun yourself or set up a trip wire to fire it mostly doesn't matter. If you set up a meat grinder that's not secure and a person gets dragged into it, whoever designed and approved that could be held liable. If you get summoned to court, calling in on a speakerphone doesn't cut it. Courts have usually dealt with all kinds of indirection, remote action and action-by-proxy before, legally speaking I do

    • There generally is no mention of robots in US law, and the operator of a device is fully responsible for its actions. If it breaks and hurts somebody, it depends if the responsible party should have known it would happen; the same as any non-robotic machine that breaks.

      It is almost entirely a non-issue, except in edge cases where a party offers a novel legal theory based on robots being different; as in the salvage case example.

      It is the same as for a pet dog, so that makes it easier to understand that bett

  • One can never have enough laws. More laws please. No one should be able to escape the guilt. A guilty society is a controlled society.
    • Yep... because protecting the freedom of electronic equipment is such an *important* goal for liberty...

      It's just like worrying about the 'rights' of corporations - they have none, only human beings have rights, you may as well try to oppress a hammer or enslave a brick.

      • If Corporations had no rights the FBI would have no problem compelling Apple to build their back door to the iPhone. I'm not for a moment suggesting corporations do not abuse power, or that their lobbying (individual lobbying in general as well) is not destroying our representation and American politics. However there is no reason the government should be able to make companies do their bidding.

          • Your conclusion is false. Corporate employees are still people and have rights. Since compelling Apple means compelling those people, it remains a rights issue.

            You don't need abstract entities to have rights to preserve any current liberties. You need to revoke them to protect a great many individual liberties that are being steadily eroded, because those entities are given all the rights of people while having none of the constraints and refusing to accept any of the responsibilities.

            At the bottom of every sup

          • by mark-t ( 151149 )
            Who is the slave with solar generated electricity? Bear in mind that the sun is burning because of natural laws, and does so regardless of whether we draw power from it or not.
            • You forget the rare earth minerals in the panels. Nearly all the world's supply comes from mines in the eastern DRC where the workforce is entirely slave labour.

              • by mark-t ( 151149 )

                While some solar cells use exotic and rare materials, the standard version (over 95% of all solar cells) on the market uses almost exclusively silicon and aluminum (the two most abundant metals in the earth's crust) for the cell. The only modestly rare element used in many solar cells' construction is silver, which isn't even considered a rare earth element, and only a very tiny amount is needed per cell for its back contact. The silver isn't even technically required if one is willing to take a modest

                • Tin and coltan, which are common in all electronics, including the electronic parts of solar panels.

                  Now I'll be happy to consider solar one of the least affected products, but unaffected? Nope.

                  Hell, slavery is even rife in the timber industry. So you can't buy a wooden chair without good odds there was slavery in the supply chain. 70% of all chocolate is made from beans harvested by kidnapped child slaves. Coffee is about the same. Yeah, it's depressing as hell. Consumer advocacy doesn't work - else one o

                  • by mark-t ( 151149 )
                    I was only drawing attention to the fact that when the supply is a natural resource, and the availability of that resource is not affected by cultural climate because there is no particular place on the globe where it is especially dominant (and is therefore not subject to the tolerance of slave labour in specific regions), and especially if the availability of the resource is unaffected by whether or not you actually use it in the first place, then it is a pretty far stretch to say that the a
                    • So fine. That's great for the 3 or 4 things in the world it's true of. But policy should be built around preventing the bad stuff, especially when there is a great deal of it around. You can't base policy around the good extremes. It must be based around the bad extremes, because that's the only tool we have to deal with them.

                    • by mark-t ( 151149 )
                      While there may be relatively few resources whose availability is unaffected by consumption, there is no lack of resources that are available in numerous places around the globe. You mentioned lumber as one example that can involve slave labour, but by no means is that necessarily reflective of what one can expect, regardless of where they live. Wood imported into the USA from Canada, for instance, which is among the two nations' largest import/export trades with each other, is most definite
                    • And much of the wood coming out of the Amazon is - often the slaves are the very people who used to live in the area that was clear-cut by criminals last year.

                      If crime were one corporation, that corporation would make more money every year than the top 50 Fortune 500 companies combined. It's the largest employer on earth - by far - and the vast majority of its employees are ordinary people just doing a job that they have no idea is not legitimate.

                      Think about that... There's absolutely no way that much money

          • Compelling Apple to do something is not the same as compelling Apple employees to do something. Apple employees work there voluntarily, presumably for the money. If some Apple employees object to doing something, Apple can hire people who won't object and have them do it. (This works better for legal values of "something".)

              • No. Nobody who works for Apple has agreed to do whatever the GOVERNMENT says while they do; they only agreed to obey their bosses at Apple. If the government wants to force Apple to do something, it is forcing the PEOPLE at Apple to do it. Apple doesn't really exist; it's an imaginary abstract entity. You can't compel imaginary entities to do stuff. You are always compelling the people doing the imagining.

              • Apple, Inc. is not an imaginary entity, just abstract. It consists of some legal formalities and some human beings in various roles. It works according to law, and has assorted abilities and requirements, assigned by law. Apple, Inc. is no more imaginary than a computer program.

                If Apple, Inc. is legally compelled to do something, then how that gets done is the responsibility of the people running Apple, Inc. Typically, such things are done by paying people to do them, whether the people are regular e

    • by RDW ( 41497 )

      Surely robots only need 3 Laws?

  • by Anonymous Coward on Thursday March 10, 2016 @10:28AM (#51671203)

    autonomy.

    Robots work based on stored directions, without needing direct human control. None of the current "robots" (other than those in manufacturing) are actually autonomous. The experiments in walking, yes, as the determination of "how" is done by the machine itself, not the person that directs "where" to walk.

    The others are actually "drones", being fully controlled by a human (or, in some cases, an animal).

    • by Anonymous Coward

      Re: "None of the current "robots" (other than those in manufacturing) are actually autonomous."

      Wow. Are you a judge? Because you have some seriously outdated misconceptions.

      You've never seen a Roomba? How about the Google Autonomous Car? Tesla Roadster? ThinkGeek sells a USB nerf missile launcher that can be programmed to automatically track human faces and shoot them with foam missiles.

      The AUVs used by the Navy to conduct sidescan sonar surveys: yup, totally autonomous.

      Excluding the autonomous cars, I have

  • They need to listen to John Siracusa's Robot or Not! podcast.

  • by sacrilicious ( 316896 ) on Thursday March 10, 2016 @11:07AM (#51671379) Homepage
    ... is troubling [drawception.com]. Imagine an army of robotic pigs with no clear instructions on what to do or where to go.
    • by HiThere ( 15173 )

      Yeah, but those are what became (in the 1950s) called androids.

      However, robot used to mean autonomous machine (again, in the 1950s). So C3PO and R2D2 were later called robots by that meaning. Unfortunately telefactor is too clumsy a word for newspapers, and the media are notorious for not respecting fine distinctions of meaning. So currently robot is almost without an understandable meaning. You know that it means either some sort of machine, or something acting as the speaker presumes a machine wou

  • Small wonder that they can't differentiate between a waldo and a robot since they can't do it with drones and remote control copters either.

  • by sbaker ( 47485 ) on Thursday March 10, 2016 @01:16PM (#51672287) Homepage

    The definition I've always had in my head goes something like:

            A robot is a computer that can interact with the world using sensors and moving parts.

    Well...kinda...a radio controlled "Robot Wars" thing isn't a robot, it's a radio controlled toy - it needs autonomy...so I wouldn't call it a "Robot". On the other hand, my PC has "sensors" (the mouse and keyboard) - but it doesn't have hands, legs or wheels (unless you count the spinning hard drive) - so it's not a robot either.

    My home thermostat has a sensor and can open and close the ducting vents. It has a computer inside so it's a "Robot"....hmmm - not sure I like that - maybe the robot has to be able to move itself around. A robot-arm in (say) a car factory can move the arm around, but not move bodily around the world...so it's a robot according to my original definition...but not if I change the definition to exclude my thermostat.

    My car isn't a robot - although it has a computer that handles a lot of the work (electronic throttle, ignition, brakes) - a 'driverless' car, however, is clearly a robot in my mind. But a car is still a robot if I sit inside and tell it where to take me by typing "221B Baker Street, London" - but not if I have a steering wheel to tell it where to go, even if it has automatic lane-keeping and will stop me from rear-ending the car in front.

    OK - so that's fairly clear. But what about if I have to tell some hypothetical car: "Take the next left turn...go a bit faster than the speed limit please...go right at the fork in the road." - is it a robot now? Mmmmm - not sure. Maybe if I tell it to take the next turn by nudging a joystick, it's just a car with sophisticated lane-keeping, and maybe if I have a speech interface to control the exact same software/behavior, it's a robot? We're in a very, very grey area there.

    So this is a hard thing to define. I think there is a continuum from the car that knows from data from your toothbrush that your teeth need polishing and automatically takes you to the dentist's office when there is a two hour gap in your schedule...down to my current car...in which the computer decides that I'll over-rev the engine if I push harder on the gas pedal and it's not going to let me do that.

    Legally, you may need to impose a hard distinction somewhere between those two extremes - but it's going to be completely arbitrary. In the end, a word like "robot" has to be consigned to the pile of words like "home" or "food" that have fuzzy definitions and shouldn't be used in a situation where a binary choice has to be made. It's not really like the word "adult" that has a specific meaning that takes effect precisely at midnight on the 18th anniversary of your birth.

    Law-makers and judges need to pass more specific legislation about the specific attributes of robots that require legal decisions.

    So, for example, a ("robotic") car where the human has the ability to override the speed and direction, regardless of the road conditions, may need to be insured by the individual - but a car that decides the speed and direction for itself, and always overrides the human if conditions require it to, might need to be covered under the manufacturer's insurance. Doesn't matter what you *call* it - it only matters what functions are automated.
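    That last rule is simple enough to write down. A toy sketch (the function name and the single override flag are assumptions here - real policy would weigh many more factors):

```python
def liable_insurer(human_can_override: bool) -> str:
    """Toy version of the rule above: liability follows final control."""
    # Human retains final say over speed/direction -> the individual insures.
    # Machine can always override the human       -> the manufacturer insures.
    return "individual" if human_can_override else "manufacturer"

print(liable_insurer(True))   # -> individual
print(liable_insurer(False))  # -> manufacturer
```

    Note that the word "robot" appears nowhere in the rule, which is exactly the comment's point: only the automated functions matter.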

    • Hmmm.... interesting decomposition. I agree with most, if not all, of your points.

      I'll raise one example for thought: the stability program in my car. The system can bring the car to almost a stop if need be - it doesn't (directly) steer; control is limited to reducing engine RPM, gear selection, and braking. Airbags are still the final solution.

      I can press a button to disable it (must disable for launch-control). However - should conditions warrant the system will re-enable itself. Anti-lock brakes can't be

  • by dfn5 ( 524972 ) on Thursday March 10, 2016 @01:20PM (#51672319) Journal
    They prefer the term "Artificial Person".
  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Thursday March 10, 2016 @03:54PM (#51673563) Homepage Journal

    A robot is a machine that can do a human's job. Over time, we cease to think of these jobs as human occupations, and thus we cease to think of the devices as robots. Consider these occupations:

    • Elevator operator
    • Washerwoman (it's old enough that I won't say "washerperson")
    • Computer (yes, that was an occupation)
    • Telephone answering service person
    • Telephone operator
    • Copyist
  • Human control is exercised offline, before the machine runs, so such machines are easy to call robots.
