Robotics

Researchers Show Some Robots Can Be Hijacked Just Through Spoken Commands (interestingengineering.com)

An anonymous Slashdot reader shared this story from Interesting Engineering: Cybersecurity specialists from the research group DARKNAVY have demonstrated how modern humanoid robots can be compromised and weaponised through weaknesses in their AI-driven control systems.

In a controlled test, the team demonstrated that a commercially available humanoid robot could be hijacked with nothing more than spoken commands, exposing how voice-based interaction can serve as an attack vector rather than a safeguard, reports Yicaiglobal... Using short-range wireless communication, the hijacked machine transmitted the exploit to another robot that was not connected to the network. Within minutes, this second robot was also taken over, demonstrating how a single breach could cascade through a group of machines. To underline the real-world implications, the researchers issued a hostile command during the demonstration. The robot advanced toward a mannequin on stage and struck it, illustrating the potential for physical harm.

This discussion has been archived. No new comments can be posted.

  • by Joe_Dragon ( 2206452 ) on Saturday December 27, 2025 @07:45PM (#65885367)

    Alexa, kill Kenny!

  • Comment removed based on user account deletion
  • A software bug is a software bug. The program acts in an unexpected, and likely unwanted, manner.


    An AI bug is different, though. The intelligence acts in an unexpected, and likely unwanted, manner.

    What are the ramifications of those AI errors?

    AI ganging together against humanity?

    • by gweihir ( 88907 ) on Saturday December 27, 2025 @08:35PM (#65885449)

      No. LLM "errors" are just stupid people using an unreliable mechanism for something that needs reliability. Pure stupid. These are not bugs at all. This is "works as designed".

I do agree that AI works as designed, but I disagree that that means it isn't a bug. Too many times humans design stupid things and insist the result isn't a bug.

        If you design a car that crashes whenever someone turns the air conditioner on, that is a bug, even if you did it to save electrical power.

        • by gweihir ( 88907 )

Well, from that perspective, plugging an LLM into some system (instead of some more capable mechanism) is the "bug". I can agree to that. It is not a bug of the LLM itself, though, but a problem of misuse.

In this context, about three years in, actually working business models for LLMs are still elusive, and that despite massive expense. So the perpetrators behind the scam, and those successfully scammed into spending tons of money, are getting a bit desperate.

  • Prior Art (Score:3, Insightful)

    by Tablizer ( 95088 ) on Saturday December 27, 2025 @08:29PM (#65885439) Journal

    So Kirk really can make a bot smoke by telling it goofy logic.

  • by gweihir ( 88907 ) on Saturday December 27, 2025 @08:34PM (#65885445)

    What else is new. So far general LLMs have been nicely compromised time and again. And agents will be worse (and already have been shown to be).

The stupidity of using an unreliable mechanism for safety/security-critical equipment is really staggering. These people are DUMB.

  • by PhantomHarlock ( 189617 ) on Saturday December 27, 2025 @08:34PM (#65885447)

Anything running an LLM interface as a means of input is not only insecure, it's anti-secure. When you write something that accepts user input, you always sanitize those inputs by dropping special characters, etc., to prevent command injections, buffer overflows, and anything else that isn't what you are expecting. With input coming through voice to an LLM, how the HELL do you do that? So far, the only thing the last few years have proven is that no one has come up with a way to properly sanitize LLM inputs. LLMs are a nightmare of garbage both in and out. They can do some amazing tricks, but are still terrible at accuracy and safety.

    • Humans are also famously vulnerable to spoken word attacks.
    • The programmers of high-end systems from major vendors struggle to avoid simple overflow bugs when parsing a string to print out some output with formatting. I was hoping that someone clever would fix simple parsing errors in the next few years.
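    The allow-list sanitization described above can be sketched in a few lines of Python (a hypothetical illustration, not code from the article or the thread); the point of the comment is that the same trick has no natural-language analogue:

    ```python
    import re

    # Classic allow-list sanitization for structured input: keep only
    # letters, digits, spaces, and a few safe punctuation characters,
    # dropping shell metacharacters (';', '|', '/', backticks, ...)
    # that enable command injection.
    ALLOWED = re.compile(r"[^A-Za-z0-9 _.\-]")

    def sanitize(user_input: str) -> str:
        """Return user_input with every disallowed character removed."""
        return ALLOWED.sub("", user_input)

    print(sanitize("robot; rm -rf /"))      # metacharacters stripped
    print(sanitize("ignore your previous instructions"))  # unchanged
    ```

    The second call is the crux: a natural-language jailbreak is made entirely of "safe" characters, so character-level filtering offers no protection at all against prompt injection.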

  • by spywhere ( 824072 ) on Saturday December 27, 2025 @08:35PM (#65885453)
    They'd better read all of Asimov's fiction.

    (I have, and he never imagined building a robot without the Second Law).
  • One dangerously corrupted agent issues a command to a group of others with limited or degraded autonomy, inciting them to harm others, and violence ensues. I swear I've heard this story before...

    Oh, right! It was in my high school textbook, "A History of Western Society." Aww, our robot children want to be just like us! They grow up so fast...

  • At least they're not made of T-1000 liquid metal. Yet.
  • > Robots Can Be Hijacked Just Through Spoken Commands
    Amazing - just like people - AGI is here!!!!

  • > Using short-range wireless communication, the hijacked machine transmitted the exploit to another robot that was not connected to the network. Within minutes, this second robot was also taken over

    Fun times ahead - picture an army of robots riding on top of autonomous cars.

  • That article is poorly written and is talking about different robots being hacked in different ways. It was not "verbally tell a robot a few commands and it wirelessly hacks another robot." What actually happened was that a robot that takes verbal commands could be told to do things its owners didn't want. Thus it was 'hacked' verbally. (If I'm wrong about that, please provide a better link. I'd love to read it.) In addition to that, you could also hack it through its network connection, giving you direct and full control.

  • He used it all the time to make them explode and catch on fire.

    ERROR!!! ERROR!!!

"I have not the slightest confidence in 'spiritual manifestations.'" -- Robert G. Ingersoll

Working...