Researchers Show Some Robots Can Be Hijacked Just Through Spoken Commands (interestingengineering.com) 25
An anonymous Slashdot reader shared this story from Interesting Engineering:
Cybersecurity specialists from the research group DARKNAVY have demonstrated how modern humanoid robots can be compromised and weaponised through weaknesses in their AI-driven control systems.
In a controlled test, the team demonstrated that a commercially available humanoid robot could be hijacked with nothing more than spoken commands, exposing how voice-based interaction can serve as an attack vector rather than a safeguard, reports Yicaiglobal... Using short-range wireless communication, the hijacked machine transmitted the exploit to another robot that was not connected to the network. Within minutes, this second robot was also taken over, demonstrating how a single breach could cascade through a group of machines. To underline the real-world implications, the researchers issued a hostile command during the demonstration. The robot advanced toward a mannequin on stage and struck it, illustrating the potential for physical harm.
In a controlled test, the team demonstrated that a commercially available humanoid robot could be hijacked with nothing more than spoken commands, exposing how voice-based interaction can serve as an attack vector rather than a safeguard, reports Yicaiglobal... Using short-range wireless communication, the hijacked machine transmitted the exploit to another robot that was not connected to the network. Within minutes, this second robot was also taken over, demonstrating how a single breach could cascade through a group of machines. To underline the real-world implications, the researchers issued a hostile command during the demonstration. The robot advanced toward a mannequin on stage and struck it, illustrating the potential for physical harm.
Alexa, kill Kenny! (Score:4, Funny)
Alexa, kill Kenny!
Re: Alexa, kill Kenny! (Score:3)
You bastard!
Re: (Score:2)
Ala Akbar
Re: (Score:1)
Errors, and there ramifications ... (Score:2)
.
An AI bug is different, though. The intelligence acts in an unexpected, and likely unwanted, manner.
What are the ramifications of those AI errors?
AI ganging together against humanity?
Re:Errors, and there ramifications ... (Score:5, Insightful)
No. LLM "errors" are just stupid people using an unreliable mechanism for something that needs reliability. Pure stupid. These are not bugs at all. This is "works as designed".
Re: (Score:3)
I do agree that AI works to design, but I disagree about that not being a bug. Too many times humans design stupid things and insist that it isn't a bug.
If you design a car that crashes whenever someone turns the air conditioner on, that is a bug, even if you did it to save electrical power.
Re: (Score:3)
Well, from that perspective, the plugging in of an LLM into some system (instead of some more capable mechanism) is the "bug". I can agree to that. This is not a bug of the LLM though, but a problem of misuse.
In this context, about 3 years in, actually working business models for LLMs are still elusive. And that is at massive expenses. So the perpetrators behind the scam and those successfully scammed in spending tons of money are getting a bit desperate.
Re: Errors, and there ramifications ... (Score:2)
Prior Art (Score:3, Insightful)
So Kirk really can make a bot smoke by telling it goofy logic.
Re: Prior Art (Score:2)
LLMs have no safety (Score:3)
What else is new. So far general LLMs have been nicely compromised time and again. And agents will be worse (and already have been shown to be).
The stupidity of using an unreliable mechanisms for safety/security critical equipment is really staggering. These people are DUMB.
Re: (Score:1)
Anti-security (Score:3)
Anything running an LLM interface as a means of input is not only insecure, it's anti-secure. When you write something that accepts user input, you always sanitize those inputs by dropping special characters,etc to prevent command injections. buffer overflows, etc. Anything that isn't what you are expecting. With input coming through voice to an LLM, How the HELL do you do that? So far in the last few years the only thing that's been proven is no one has come up with a way to properly sanitize inputs. LLMs are a nightmare of garbage both in and out. They can do some amazing tricks, but are still terrible at accuracy and safety.
Re: (Score:2)
Re: (Score:1)
The programmers of high end systems from major vendors struggle to avoid simple overflow bugs when parsing a string to print out some output with formatting. I was hoping that someone clever would fix simple parsing errors in the next few years.
Before they try to fix this (Score:3)
(I have, and he never imagined building a robot without the Second Law).
Re: (Score:1)
This is so familiar... (Score:2)
One dangerously corrupted agent issues a command to a group of others with limited or degraded autonomy, inciting them to harm others, and violence ensues. I swear I've heard this story before...
Oh, right! It was in my high school textbook, "A History of Western Society." Aww, our robot children want to be just like us! They grow up so fast...
Saving grace... (Score:2)
\o/ (Score:2)
> Robots Can Be Hijacked Just Through Spoken Commands
Amazing - just like people - AGI is here!!!!
\o/ (Score:1)
> Using short-range wireless communication, the hijacked machine transmitted the exploit to another robot that was not connected to the network. Within minutes, this second robot was also taken over
Fun times ahead - picture an army of robots riding on top of autonomous cars.
No, researchers didn't show that (Score:2)
That article is poorly written and is talking about different robots being hacked in different ways. It was not "verbally tell a robot a few commands and it wirelessly hacks another robot." What it was was a robot that takes verbal commands could be told to do things it's owners didn't want. Thus it was 'hacked' verbally. (If I'm wrong about that please provide a better link. I'd love to read it.) In addition to that, you could also hack it though it's network connection giving you direct and full con
Captain Kirk knew this... (Score:2)
He used it all the time to make them explode and catch on fire.
ERROR!!! ERROR!!!
Re: (Score:2)
Ah, they implemented the infamous HCF instruction!