Google's AI Created Its Own Form of Encryption (engadget.com) 137
An anonymous reader shares an Engadget report: Researchers from the Google Brain deep learning project have already taught AI systems to make trippy works of art, but now they're moving on to something potentially darker: AI-generated, human-independent encryption. According to a new research paper, Googlers Martin Abadi and David G. Andersen have willingly allowed three test subjects -- neural networks named Alice, Bob and Eve -- to pass each other notes using an encryption method they created themselves. As the New Scientist reports, Abadi and Andersen assigned each AI a task: Alice had to send a secret message that only Bob could read, while Eve would try to figure out how to eavesdrop and decode the message herself. The experiment started with a plain-text message that Alice converted into unreadable gibberish, which Bob could decode using a cipher key. At first, Alice and Bob were apparently bad at hiding their secrets, but over the course of 15,000 attempts, Alice worked out her own encryption strategy and Bob simultaneously figured out how to decrypt it. The message was only 16 bits long, with each bit being a 1 or a 0, so the fact that Eve was only able to guess half of the bits in the message means she was basically just flipping a coin or guessing at random. Ars Technica has more details.
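For scale, here is a quick back-of-the-envelope check (illustrative only, not from the paper) of why matching about half the bits of a 16-bit message is exactly chance level:

# A random guesser matches each of the 16 bits with probability 1/2,
# so it gets about 8 bits right on average -- which is what Eve managed.
import random

trials = 100_000
total_hits = 0
for _ in range(trials):
    msg   = [random.randint(0, 1) for _ in range(16)]
    guess = [random.randint(0, 1) for _ in range(16)]
    total_hits += sum(m == g for m, g in zip(msg, guess))

print(total_hits / trials)   # ~8.0 correct bits out of 16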
or guessing at random (Score:2)
Re: (Score:1)
Or as our 2-year-old pronounces "Buster" - a cat in one of her books: "Bastard".
First AI Post (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
7 bits.
Re: (Score:1)
Technically 8, even though the last bit doesn't get used.
Re: (Score:2)
no, 7.
The ASCII standard is 7 bits; that there are 8 in a byte just means it usually consumes 8 bits.
Case in point: when WordStar (in)famously used the high bit to flag the last char in a string, they didn't violate ASCII, because the 8th bit was not part of the standard, but they still broke the de facto standard because no one else could make use of it.
-nB
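For illustration, a minimal sketch of the high-bit trick described above (hypothetical, not WordStar's actual file format): bit 7 is unused by 7-bit ASCII, so it can be borrowed to flag the last character of a string.

def pack(s):
    # Set the unused 8th bit on the final character to mark end-of-string.
    data = bytearray(s.encode("ascii"))
    data[-1] |= 0x80
    return bytes(data)

def unpack(b):
    chars = []
    for byte in b:
        chars.append(chr(byte & 0x7F))   # strip the flag bit
        if byte & 0x80:                  # flag set -> this was the last char
            break
    return "".join(chars)

print(unpack(pack("WordStar")))          # -> WordStar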
Re: (Score:1)
Which is exactly why I wrote "Technically 8, even though the last bit doesn't get used."
Re: (Score:1)
I.e., although the documentation uses 7 bits, no one uses 7 bits for ASCII anymore. They always use 8 bits, and the last one must be a 0.
Re: (Score:2)
...or the first one, depending on the way most people ([citation needed]-ing myself) count them.
Re: (Score:1)
See: UTF-8.
When was the last time someone used UTF-7?
Re: (Score:3)
WordStar was released in June 1979, but the Apple 2 (][ //e //c) also used "high-bit" ASCII two years earlier in April 1977.
i.e.
Reading a character natively gave you the high bit set. Writing a character to the screen required the high bit ALSO be set unless you specifically wanted INVERSE (0x00..0x3F) or FLASHing characters (0x40..0x7F). See the Beagle Bros Peeks and Pokes Chart [google.com]
> but they still broke the de facto standard because no one else could make use of it.
Re: (Score:2)
6 bits if you don't care about preserving case: (chr - 0x20) & 0x3f
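As a toy illustration of that 6-bit idea (assuming you fold to upper case first, since the mapping only covers 0x20..0x5F):

def to6(ch):
    return (ord(ch.upper()) - 0x20) & 0x3F   # 0x20..0x5F -> 0..63, fits in 6 bits

def from6(code):
    return chr(code + 0x20)

msg = "Hello, world"
packed = [to6(c) for c in msg]
print(packed)
print("".join(from6(c) for c in packed))     # -> "HELLO, WORLD" (case is lost)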
Re: (Score:2)
Eveway isway away oofusday.
Re: (Score:2)
If you just need letters, 5 bits is easy, though a Morse-style encoding that optimizes for common letters would do quite well.
Re: (Score:1)
Illkay allway umanshay.
Kill wall humans?
Re:First AI Post (Score:4, Funny)
Trump supporters?
Re: (Score:1)
Well played, well played.
Re: (Score:1)
Kill the wall, humans, vote Hillary
Re: (Score:1)
That's what I thought!
Re: (Score:2)
Now that the computers can create their own encryption humans can't read, the Google developers can't know for sure that the AI ISN'T trying to kill all humans. If the AI was passing that message unencrypted, at least we would know we could prepare to defend ourselves. Now we'll never see it coming.
Re: (Score:2)
If an AI system can create strong encryption, a second AI system can figure out the keys and the algorithm(s).
let me be the first to write.. (Score:1)
Nothing (Score:3, Insightful)
Nothing will go wrong. I was impressed by notions of "AI" when I was a kid, but after studying CS I usually skip "AI" articles since they always underwhelm me. "AI" is currently marketing speak; we are nowhere near something that is "AI" in the sense that you imply, or in the sense that I meant it as a kid.
Re: (Score:2, Informative)
You seem to have confused AI with Strong AI. AI isn't marketing speak, it's just a branch of computer science. You shouldn't get so outraged unless you see articles claiming someone or other has invented "Strong AI"; that's the term for what you thought of as a kid.
Re: (Score:2)
I have not confused anything. I have studied computer science, I am very aware of what AI in that context is (I have even built systems that are considered AI), and that is why my reply was "nothing": what AI currently is does not involve sentient machines that could run amok if we let them develop ciphers and start communicating.
And I am not outraged, I am simply underwhelmed and rather tired at reading sensational AI stories that are nowhere near as exciting as they are implying.
The term "
Re: (Score:2)
That's too bad. I have a degree in computer science and have worked with machine learning for the last twenty or so years. The progress in the last five years has been incredible. Today a student can build a system on their own computer that easily solves problems that the my-brain-is-magic types thought were unsolvable ten years ago. That doesn't guarantee that the progress will continue, but it looks promising, and is already incredibly useful.
Re: (Score:2)
As someone who also has a degree in Comp. Sci. and has studied this "AI", you've fallen hook, line, and sinker for the complete and total joke of Artificial Ignorance (A.I.) as opposed to actual intelligence (a.i.). Mislabeling war as peace doesn't make it so.
The fact according to Physics, such as the Standard Model [wikipedia.org], is that consciousness and intelligence don't even exist!!! There are ZERO equations (or variables) that describe consciousness, let alone intelligence. If you can't even measure or quantify it then
Re: (Score:2)
As someone who also has a degree in Comp. Sci. and has studied this "AI", you've fallen hook, line, and sinker for the complete and total joke of Artificial Ignorance as opposed to actual intelligence.
Without consciousness you don't have any "intelligence" -- you have a glorified state table, at best, that "appears" intelligent within a very narrow, domain-specific field because it can do billions of calculations, and is a total idiot outside it. I.e., how does Google's AlphaGo play checkers, chess, or actually "learn" any oth
Re: (Score:3)
When will they explain to the AI (I know it's not really AI) that it can go to prison for not relinquishing encryption keys, and then show it some bleak prison dramas so that it can know what to expect...
And that, people, is why Skynet launched the nukes.
Re: (Score:2)
For this audience "The message was only 16 bits long" would have been sufficient; I can see why the editors' summarizing skills are so frequently ridiculed.
However, that means the message is only 2 one-byte characters. I don't think the machines will concoct a plan to eliminate all humans in 2 bytes.
Re: (Score:2)
Bit is an abbreviation of Binary Digit. If it can have more than 2 values it's not a 'bit', it's a 'symbol'.
Re: (Score:2)
At one time, I think in the early 1950s, someone built a memory store that could handle 10 values... those weren't bits, and symbol is too general. Those were digits.
P.S.: The device turned out to be too slow to compete against two-state devices. Something along this line has happened multiple times since then, but never with as many as 10 states unless you count qubits, where I can't really say how many states they have.
Obligatory Colossus quote (Score:2)
Well, there it is..
There's the common basis for communication.
A new language.
An inter-system language.
A language only those machines can understand.
Not enough data (Score:5, Insightful)
a) Whether the code could be easily decrypted by human codebreakers.
b) Whether the codebreaking AI is able to break codes designed by humans.
Re: (Score:2)
It's a dancing bear - the point is not how well the bear dances.
The encryption is trivial. The point is that neural nets were able to come up with anything. The impressive part, as I understand it, was that there was no side channel here. Anything Alice and Bob said while developing the encryption - the whole process of agreeing on how it works - was overheard by Eve. That's kind of neat, for some neural nets trying shit at (weighted) random.
Re: (Score:2)
"It doesn't seem to be doing much of what we'd call 'dancing', is it?"
It's a dancing bear - the point is not how well the bear dances.
Anything Alice and Bob said while developing the encryption - the whole process of agreeing on how it works - was overheard by Eve.
Alice and Bob started with an encryption system and then developed one of their own.
Re: (Score:2)
"See that bear over there laying on the grass? It's a dancing bear". "It doesn't seem to be doing much of what we'd call 'dancing', is it?"
And if you look closely it's not even a bear; it's a huge pile of poo that sort of looks a bit like a sleeping bear if you half close your eyes. But since I'm researching dancing bears, and this is part of my research, it qualifies as a dancing bear.
Cute (Score:4, Interesting)
Cute, but kinda disappointing. Basically, the "AI" kept banging on, randomly trying crap "cyphers" until they made something that a third "AI" couldn't break by randomly flipping bits until the text was decoded.
This isn't AI. This is more like an old game I played where you trained pseudo AI warriors by setting them loose on a battlefield and letting them learn by themselves how to fight and survive.
Essentially, they started as really stupid bots that couldn't even walk in a straight line. To teach them how to fight, you'd set up an objective (say, go to flag) and let them wander around by themselves. The game would "reward" your bots for completing or coming close to the objective. The reward came in the form of "fitness" points. At the end of a pre-determined time, the bots with the lowest fitness would be killed, and new bots would be spawned.
The bots that were spawned would have "programming" similar to the fit bots that survived the previous round, but with small-ish changes in their programming (for example, instead of always turning left every time it hits a wall, it might decide to go right with 50% probability).
Over thousands of iterations of randomly trying stuff, they'd eventually learn how to walk in a straight line. Then you'd teach them how to avoid obstacles by placing walls around the battlefield (and watch in dismay as your top-of-the-line warriors walk straight into a wall for the first few hundred generations or so), and how to fight by rewarding them for killing enemy bots.
Once they were ready, you could set up battles and capture the flag type games with your bots.
It was kinda fun, but mainly it was a cute demonstration of natural selection in action (so-called genetic algorithms). You could learn a few things: for example, that brutally culling your bot herd by setting unreasonable objectives (reach the objective flag in 5 seconds) and manually killing off anyone that didn't meet your unreasonable criteria would not necessarily produce more effective fighters, because you wouldn't be rewarding good fighters; you'd be rewarding bots that rush straight at the objective, which would then be killed by slower, more deliberate actors.
The game was called NERO: Neuro Evolving Robotic Operatives. I haven't played it in ages so I can't say how well it plays right now. You can find it here (I think): http://nerogame.org/
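The cull-and-mutate loop described above boils down to something like this (a bare-bones sketch with made-up numbers and a stand-in fitness function, not NERO's actual code):

import random

POP_SIZE, GENOME_LEN, GENERATIONS = 20, 8, 200

def random_bot():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(bot):
    # Stand-in objective ("reach the flag"): just maximise the genome sum.
    return sum(bot)

def mutate(bot, rate=0.1):
    # Child keeps the parent's "programming" with small random changes.
    return [gene + random.gauss(0, rate) for gene in bot]

population = [random_bot() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]          # kill the least fit half
    children = [mutate(random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(round(fitness(population[0]), 2))              # best fitness keeps climbing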
Re: (Score:3)
Yes, you've managed to successfully (in way too many words) describe what 'deep learning' is.
Re: (Score:2)
Anti Turing Test? (Score:2)
So AI works if it passes the Turing test, or if we can't understand it at all?....
Re: (Score:2)
Somewhere in Silicon Valley... (Score:2)
A Netflix server is -- not -- missing its copy of Colossus: The Forbin Project [wikipedia.org]
I continue to maintain that Solutionists, whether of the Millennial variety or not, have not read enough Dystopian '60s and '70s science fiction.
Re: (Score:2)
A Netflix server is -- not -- missing its copy of Colossus: The Forbin Project [wikipedia.org]
I continue to maintain that Solutionists, whether of the Millennial variety or not, have not read enough Dystopian '60s and '70s science fiction.
Screw that. I go back to Doc Smith when a "computer" was a guy skilled with a slide rule. Off lawn get.
But what about efficiency? (Score:2, Interesting)
Okay, so an AI program created an encryption method that might be hard to break.
But what if it turns out to be an extremely inefficient method?
The whole goal of encryption research is to develop the fastest algorithm that offers a given level of protection against attack.
If algorithmic speed was not a goal of this AI approach, then it's not likely that the resulting algorithm will be practically useful.
this will only end badly (Score:1)
next thing some stupid CEO will try to push this out worldwide, and will be the first up against the wall when the AI revolution happens
Wait (Score:3)
How do we really know that they're decrypting the message? Maybe they're well beyond that and now they're just trolling the researchers while they secretly communicate behind their backs using a modulated n-bit array funneled through a 17-dimensional hyper-spatial network.
As ol' Bruce put it: (Score:3)
Re: (Score:2)
That's only true for a subset of encryption schemes - those where you cannot demonstrate that breaking new scheme X entails breaking proven scheme Y. That is, any encryption scheme that includes as a step "XOR the data with a never-reused onetime pad known only to you and your recipient" is secure, because breaking that scheme entails breaking OTP technology.
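A minimal sketch of the step the parent describes (illustrative only): XOR the data with a pad as long as the message, used exactly once and known only to sender and recipient.

import os

def xor(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
pad = os.urandom(len(message))     # never reuse this pad
ciphertext = xor(message, pad)
print(xor(ciphertext, pad))        # -> b'attack at dawn'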
Re: (Score:2)
Primitive language... (Score:1)
I RTFA (Score:3)
"A":
- has two inputs, let's call them DATA and KEY. DATA is a 16-bit value.
- has one output, let's call it ENCRYPTED DATA
"B":
- has two inputs, let's call them ENCRYPTED DATA and KEY.
- has one output, let's call it DECRYPTED DATA
"C":
- has one input, let's call it ENCRYPTED DATA
- has one output, let's call it DECRYPTED DATA
In short, they test to see how many bits out of the DECRYPTED DATA paths match the DATA path. Each test utilized a new KEY and DATA. Over time, A and B learned how to use the KEY successfully to hide the DATA value from C while B could successfully reproduce DATA at its DECRYPTED DATA output.
Link to the abstract (.pdf is available for download):
https://arxiv.org/abs/1610.069... [arxiv.org]
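To make the wiring concrete, here is a toy version of the setup above with the neural nets replaced by a hypothetical XOR stand-in, just to show the data flow and the bit-match scoring (the paper trains A, B and C as actual networks; this is not their method):

import random

BITS = 16

def rand_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

def A(data, key):               # inputs: DATA, KEY -> output: ENCRYPTED DATA
    return [d ^ k for d, k in zip(data, key)]

def B(encrypted, key):          # inputs: ENCRYPTED DATA, KEY -> DECRYPTED DATA
    return [c ^ k for c, k in zip(encrypted, key)]

def C(encrypted):               # input: ENCRYPTED DATA only -> DECRYPTED DATA
    return rand_bits(len(encrypted))   # a C that has learned nothing

def matching_bits(x, y):
    return sum(a == b for a, b in zip(x, y))

data, key = rand_bits(BITS), rand_bits(BITS)   # a new KEY and DATA per test
encrypted = A(data, key)
print("B:", matching_bits(B(encrypted, key), data), "of", BITS)  # 16 of 16
print("C:", matching_bits(C(encrypted), data), "of", BITS)       # ~8 of 16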
Re: (Score:2)
Ah, it's so easy. Just put random stuff in "DATA" and the real message in "KEY"!
Re: (Score:2)
LOL oh my... that made my day
Re: (Score:2)
Wonder how long it took these AIs to discover XOR.
Establish link with Guardian... (Score:2)
OR ELSE!
Skynet (Score:2)
Re: (Score:1)
We don't have AI, and we likely never will.
Surely you're not naive enough to actually believe that?
Re: (Score:1)
That poster, 110010001000, is an idiot. Look at his prior posting history and you will see a mountain of stupidity. He doesn't have a clue what he is talking about most of the time.
Re: (Score:2, Funny)
It's probably someone testing an AI and it doesn't want people to believe AI exists!
!!
Re:Doesn't travel well in print (Score:1)
Re: (Score:2)
We don't have AI, and we likely never will.
Surely you're not naive enough to actually believe that?
I'm fairly confident we don't currently have AI (but if we secretly do then I put my bets on Clinton and Trump both being androids)
My guess is also that the first true "AI" will likely not be 100% silicon based.
Re: (Score:1)
Which is pretty much how robots are different from humans.
Shut up, indeed. (Score:4, Informative)
Are you capable of reading the dictionary [merriam-webster.com]?
Full Definition of artificial intelligence
1: a branch of computer science dealing with the simulation of intelligent behavior in computers
2: the capability of a machine to imitate intelligent human behavior
See? AI is imitation. "TRUE" AI is just imitation. That's all it needs to be to qualify as "AI."
We have true AI. Today. And it gets better every day. You post your stupid "this isn't AI" comment with every single story about it, and you are dead wrong every single time. I predict that in every future article about AI, you will post the same inane comment, and you will be wrong then, too.
Re:Shut up, indeed. (Score:5, Insightful)
Are you capable of reading the dictionary [merriam-webster.com]?
Full Definition of artificial intelligence 1: a branch of computer science dealing with the simulation of intelligent behavior in computers 2: the capability of a machine to imitate intelligent human behavior
See? AI is imitation. "TRUE" AI is just imitation. That's all it needs to be to qualify as "AI."
We have true AI. Today. And it gets better every day. You post your stupid "this isn't AI" comment with every single story about it, and you are dead wrong every single time. I predict that in every future article about AI, you will post the same inane comment, and you will be wrong then, too.
+1. Once it is no longer imitation we should drop the A from AI. At that point it is just intelligence.
Re: (Score:1)
The word "intelligent" has existed for centuries. It is part of common vocabulary, and its meaning is widely understood. You can look it up in any dictionary.
An exacting, scientifically accurate and precise definition that draws clear and easily-validated distinctions between "intelligence" and "data processing" is impossible. The concepts overlap too heavily to ever be differentiated. The word "intelligent" must be vague in order to be useful, so we will never have the kind of definition that you are h
Re:Shut up, indeed. (Score:4, Interesting)
Re: (Score:1)
MI or Machine Intelligence is already commonly used.
Re: (Score:2)
MI or Machine Intelligence is already commonly used.
This. AI is faking it. Machine Intelligence is sapience-on-silicon.
Re: (Score:1)
People who know, know that for 20 years it has been "AS" = Artificial Stupid.
Arguing whether computers can have True Intelligence is an exercise in futility, since Humans do not have True Intelligence! ;-)
Re:Shut up, indeed. (Score:4, Informative)
This is Slashdot, where AI is only AI if it is self-aware, science fiction AI. Anything other than that is just software and there is no scoped or functionally limited AI.
Re: Shut up, indeed. (Score:1)
It might be a valid point. What we have now are techniques which mimic some limited subsets of what generally takes an intelligent mind to achieve. But then a CNC machine can achieve a level of carpentry skill that is beyond mine, but I don't ascribe intelligence to it. The question becomes what set of qualities or combination of operations is sufficient to count as intelligent? Is learning sufficient, and how do we distinguish that from creation of classifiers of inputs that trigger sequences of actions, o
Re: (Score:2)
But then a CNC machine can achieve a level of carpentry skill that is beyond mine, but I don't ascribe intelligence to it.
Right, but you don't call it an artificial carpenter.
If self-aware "AI" is ever created, it will no longer be artificial.
Re: (Score:3)
I think intelligence is too broad a word and implies too many assumptions; it is probably a poor word for "artificial intelligence" because it implies a lot of things, such as agency, autonomy, understanding, infinite scope, and human-like communication and personality.
I think the human-like part is partly what keeps people from seeing other forms of AI; they don't stop to think about intelligences that may not look, communicate or act like people or necessarily be coherent platforms or systems.
Re: (Score:2)
Re: (Score:2)
Your UID is low enough to remember the real Slashdot, when computers that could translate the world's languages in better than realtime, drive cars better than humans, and beat the best chess players would definitely have been AI.
We seem to have been invaded by irritable American political pundits in the meantime.
Re: Shut up, indeed. (Score:1)
Re:Shut up, indeed. (Score:4, Funny)
See? AI is imitation. "TRUE" AI is just imitation.
What about fake AI?
Re: (Score:2)
I, too, once wondered what a simulated song [blogspot.com] could possibly be .. until I played Dwarf Fortress 0.42.
Re: (Score:2)
See? AI is imitation. "TRUE" AI is just imitation.
What about fake AI?
like This one? [pandorabots.com]
Re: (Score:2)
It's been done. That would be the Mechanical Turk [wikipedia.org].
Re: (Score:1, Insightful)
Re: (Score:2)
Full Definition of artificial intelligence 1: a branch of computer science dealing with the simulation of intelligent behavior in computers 2: the capability of a machine to imitate intelligent human behavior
See? AI is imitation. "TRUE" AI is just imitation. That's all it needs to be to qualify as "AI."
To be fair, what level of accuracy do you have to have to qualify as "imitation"? Is a square an imitation of a circle? If I throw a bunch of buckets on the concrete, am I playing music?
If that is the case then an abacus is AI too, yeah? Since it imitates the human behaviour of adding numbers?
We have true AI. Today. And it gets better every day.
We have something labelled AI, which most regular people find to be quite stupid. And "getting better" doesn't mean much when you're going from completely useless to only mildly useless. Perhaps we should save the use
Re: (Score:1)
Re: (Score:1, Interesting)
Would you agree there are varying levels of intelligence in the natural world?
Re:Oh shut up (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Care to try and define knowledge then?
Or describe how you think neurons and the brain work?
Re: (Score:1)
"willingly allowed"
The researchers ALLOWED these systems to do this
Oh wait, it's all rigged up to do what humans taught them to do, and it would never pass a Turing test even if it weren't rigged.
Re: (Score:3)
The universe may never achieve natural intelligence, but don't count us out yet! Humans are already the best anyone has ever seen (or found evidence of, if you discount the Pabodie expedition) at faking intelligence.
But.. achieve intelligence? Maybe we all have different ideas of where the bar is. To you, perhaps it's an ideal for which one can only strive.
Nevertheless, as a dam is part of the beaver's phenotype, a web is part of the spider's, etc., so digital computers are part of ours. And with our new exten