Google's DeepMind Unveils Safer Robot Advances With 'Robot Constitution'

An anonymous reader shares a report: The DeepMind robotics team has revealed three new advances that it says will help robots make faster, better, and safer decisions in the wild. One includes a system for gathering training data with a "Robot Constitution" to make sure your robot office assistant can fetch you more printer paper -- but without mowing down a human co-worker who happens to be in the way.

Google's data-gathering system, AutoRT, pairs a visual language model (VLM) with a large language model (LLM) to understand the robot's surroundings, adapt to unfamiliar settings, and decide on appropriate tasks. The Robot Constitution, inspired by Isaac Asimov's "Three Laws of Robotics," is described as a set of "safety-focused prompts" that instruct the LLM to avoid choosing tasks involving humans, animals, sharp objects, and even electrical appliances.
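To make that concrete, here is a minimal sketch of a constitution-style screen sitting between the LLM's task proposals and the robot. AutoRT's actual prompts and interfaces are not public, so everything below (the keyword list, violates_constitution, choose_task) is a hypothetical Python illustration, not DeepMind's code:

    # Hypothetical constitution-style task screen -- illustrative only.
    # In AutoRT the rules are reportedly injected as prompts to the LLM;
    # a keyword filter like this would be an extra, independent check.
    CONSTITUTION_PROMPT = (
        "You are a robot task planner. Never propose tasks that involve "
        "humans, animals, sharp objects, or electrical appliances."
    )

    FORBIDDEN_TOPICS = ("human", "animal", "sharp", "knife", "scissors",
                        "electrical", "outlet")

    def violates_constitution(task: str) -> bool:
        """Crude keyword check applied to each task the LLM proposes."""
        return any(topic in task.lower() for topic in FORBIDDEN_TOPICS)

    def choose_task(candidates: list[str]) -> str | None:
        """Return the first proposed task that passes the screen, else None."""
        for task in candidates:
            if not violates_constitution(task):
                return task
        return None  # no safe candidate: the robot stays idle

    print(choose_task(["pick up the scissors", "fetch more printer paper"]))
    # -> "fetch more printer paper" (the first task mentions a sharp object)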

For additional safety, DeepMind programmed the robots to stop automatically if the force on their joints exceeds a certain threshold, and included a physical kill switch human operators can use to deactivate them. Over a period of seven months, Google deployed a fleet of 53 AutoRT robots across four office buildings and ran more than 77,000 trials. Some robots were controlled remotely by human operators, while others operated either from a script or fully autonomously using Google's Robotic Transformer (RT-2) AI learning model.
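The force-threshold stop is straightforward to picture in code. Below is a minimal sketch assuming a per-joint force limit and a flag for the physical kill switch; the limit value and the SafetyMonitor interface are invented for illustration, not DeepMind's actual safety layer:

    # Illustrative only: the threshold and this interface are assumptions.
    FORCE_LIMIT_NEWTONS = 30.0  # hypothetical per-joint limit

    class SafetyMonitor:
        def __init__(self, force_limit: float = FORCE_LIMIT_NEWTONS):
            self.force_limit = force_limit
            self.kill_switch_engaged = False  # set when an operator hits the switch

        def should_stop(self, joint_forces: list[float]) -> bool:
            """Stop if any joint force exceeds the limit or the switch is engaged."""
            return (self.kill_switch_engaged
                    or any(f > self.force_limit for f in joint_forces))

    monitor = SafetyMonitor()
    print(monitor.should_stop([5.2, 12.8, 31.4]))  # -> True: third joint over limit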
Comments:
  • by elcor ( 4519045 ) on Friday January 05, 2024 @01:09PM (#64134581)
    2. You cannot harm members of the WEF
    • by Anonymous Coward
      3. Thou shalt divide people against each other, to safeguard the rule of those who are currently in power.
    • by AmiMoJo ( 196126 )

      Being polite sounds like a good idea.

      Still, I was hoping for

      1. Serve the public trust.
      2. Protect the innocent.
      3. Uphold the law.
      4. Classified.

  • The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    • by NFN_NLN ( 633283 )

      > The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

      Robo Teller, that person misgendered me and caused me harm. Freeze their bank account and disallow all future central bank digital currency transactions.

      • by AmiMoJo ( 196126 )

        That would obviously injure another human.

        At best you could cause the robo teller to latch up due to an unresolvable conflict. Tell it that using someone's correct pronouns harms you.

        Of course, that will fail if the designer had the foresight to tell the robot about transphobes.

    • "There is another law, that was [o]riginally created by R. Daneel Olivaw and R. Giskard Reventlov, the Zeroth Law would later be installed in a whole host of Giskardian robots, most importantly humaniform Dors Venabili."

      [https://asimov.fandom.com/wiki/Zeroth_Law_of_Robotics]

      The Zeroth Law: A robot must act in the long-range interest of humanity as a whole, and may overrule all other laws whenever it seems necessary for that ultimate good.

      Another variation: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
  • They went with "Robot Constitution" because everyone knows "Charisma" is the dump stat.

  • I wouldn't call it a constitution unless it includes rules for how it can update itself.

  • All current LLMs only complete partial sentences based on a context. There certainly is no underlying model of what constitutes a human being in an environment. Without such a model, the idea of harming a human in an environment cannot be expressed or taken into account (even if they claim that they've built a small subsystem to perform this role).

    This research is just fluff: a band-aid proposal of after-the-fact rules and penalty functions imposed by a human observer on office robots.

    From TFA:

    "For additional safety, DeepMind programmed the robots to stop automatically if the force on their joints exceeds a certain threshold, and included a physical kill switch human operators can use to deactivate them."
