Google's AI Created Its Own Form of Encryption (engadget.com) 137

An anonymous reader shares an Engadget report: Researchers from the Google Brain deep learning project have already taught AI systems to make trippy works of art, but now they're moving on to something potentially darker: AI-generated, human-independent encryption. According to a new research paper, Googlers Martin Abadi and David G. Andersen have willingly allowed three test subjects -- neural networks named Alice, Bob and Eve -- to pass each other notes using an encryption method they created themselves. As the New Scientist reports, Abadi and Andersen assigned each AI a task: Alice had to send a secret message that only Bob could read, while Eve would try to figure out how to eavesdrop and decode the message herself. The experiment started with a plain-text message that Alice converted into unreadable gibberish, which Bob could decode using a cipher key. At first, Alice and Bob were apparently bad at hiding their secrets, but over the course of 15,000 attempts Alice worked out her own encryption strategy and Bob simultaneously figured out how to decrypt it. The message was only 16 bits long, with each bit being a 1 or a 0, so the fact that Eve was only able to guess half of the bits in the message means she was basically just flipping a coin or guessing at random. Ars Technica has more details.
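
That coin-flip baseline is easy to sanity-check: an eavesdropper guessing 16 random bits should match about 8 of them on average. A quick simulation (illustrative only, not from the paper):

    # Quick check of the coin-flip baseline (illustrative, not from the
    # paper): guessing 16 random bits matches ~8 of them on average.
    import random

    TRIALS, BITS = 10_000, 16
    total = 0
    for _ in range(TRIALS):
        message = [random.randint(0, 1) for _ in range(BITS)]
        guess = [random.randint(0, 1) for _ in range(BITS)]
        total += sum(m == g for m, g in zip(message, guess))
    print(total / TRIALS)  # ~8.0, i.e. half the bits: exactly chance level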
  • That's PSEUDO-random to you, buster!
    • by Anonymous Coward

      Or as our two-year-old pronounces "Buster" - a cat in one of her books: "Bastard".

  • by PPH ( 736903 ) on Friday October 28, 2016 @10:29AM (#53168633)
    Illkay allway umanshay.
    • by Nidi62 ( 1525137 )
      Hmmm....16 bits...."Kill all humans!" has 16 characters including spaces and punctuation.....coincidence?
      • Not to ruin a good joke, but at least with ASCII, each character is a byte, I think... :)
        • 7 bits.

          • Technically 8, even though the last bit doesn't get used.

              • no, 7.
                The ASCII standard is 7 bits; that there are 8 bits in a byte just means ASCII usually occupies 8.
                Case in point: when WordStar (in)famously used the high bit to flag the last char in a string, they didn't violate ASCII, because the 8th bit was not part of the standard -- but they still broke the de facto standard, because no one else could make use of it.
                -nB
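
                The high-bit trick is easy to sketch (illustrative only, not WordStar's actual file format): store 7-bit ASCII in 8-bit bytes and set bit 7 on the final character to mark the end of the string.

                    # Sketch of a WordStar-style high-bit terminator
                    # (hypothetical; legal only because ASCII is 7-bit).
                    def pack_high_bit(text: str) -> bytes:
                        if not text or not text.isascii():
                            raise ValueError("non-empty 7-bit ASCII only")
                        data = bytearray(text.encode("ascii"))
                        data[-1] |= 0x80   # set bit 7 on the last character
                        return bytes(data)

                    def unpack_high_bit(data: bytes) -> str:
                        # Strip the flag bit from every byte, then decode.
                        return bytes(b & 0x7F for b in data).decode("ascii")

                    assert unpack_high_bit(pack_high_bit("Hello")) == "Hello"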

              • Which is exactly why I wrote "Technically 8, even though the last bit doesn't get used."

              • I.e., although the standard defines 7 bits, no one stores ASCII in 7 bits anymore. It always occupies 8 bits, and the high bit must be 0.

              • See: UTF-8.

                When was the last time someone used UTF-7?

              • WordStar was released in June 1979, but the Apple 2 (][ //e //c) had already used "high-bit" ASCII two years earlier, in April 1977.

                i.e.

                .1  LDA $C000   ; read keyboard strobe
                    BPL .1      ; high bit clear = no key yet, keep polling
                    STA $C010   ; clear keyboard strobe
                                ; A >= $80 (key code with high bit set)

                Reading a native character had the high bit set. Writing a character to the screen required the high bit ALSO be set, unless you specifically wanted INVERSE (0x00..0x3F) or FLASHing characters (0x40..0x7F). See the Beagle Bros Peeks and Pokes Chart [google.com]. (A sketch of these ranges follows below.)

                > but they still broke the de facto standard because no one else could make use of it.
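
                A sketch of the screen-byte ranges described above (illustrative; the ranges are as stated in the comment, the function itself is hypothetical):

                    # Classify an Apple II screen byte per the ranges above:
                    # high bit set = normal video, 0x00..0x3F = INVERSE,
                    # 0x40..0x7F = FLASH.
                    def video_mode(b: int) -> str:
                        if b & 0x80:
                            return "normal"   # as when writing native chars
                        return "inverse" if b <= 0x3F else "flash"

                    print(video_mode(0xC1), video_mode(0x01), video_mode(0x41))
                    # normal inverse flash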

          • by lgw ( 121541 )

            6 bits if you don't care about preserving case: (chr - 0x20) & 0x3F

            • Only 5.33 bits if you use RAD50, or 5 bits if you use Baudot.

              Eveway isway away oofusday.

              • by lgw ( 121541 )

                If you just need letters, 5 bits is easy, though a Morse-style encoding that optimizes for common letters would do quite well.
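
                For illustration, here is roughly what those smaller encodings look like (a sketch, not any standard's reference code). RAD50 gets its 5.33 bits by packing three 40-symbol characters into one 16-bit word, since 40^3 = 64000 < 65536:

                    # Sketches of the smaller encodings from this thread
                    # (illustrative only).
                    def to6(ch: str) -> int:
                        # Case-folded 6-bit code: uppercase, then map
                        # 0x20..0x5F down to 0..63.
                        return (ord(ch.upper()) - 0x20) & 0x3F

                    def to5(ch: str) -> int:
                        # Letters-only 5-bit code: A..Z -> 0..25 (Baudot-style
                        # alphabets spend spare codes on digits and shifts).
                        return ord(ch.upper()) - ord("A")

                    def rad50_pack(three: str) -> int:
                        # RAD50-style packing: 3 chars from a 40-symbol set in
                        # one 16-bit word (symbol order approximates DEC's).
                        alphabet = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"
                        word = 0
                        for ch in three.upper():
                            word = word * 40 + alphabet.index(ch)
                        return word  # always < 40**3 = 64000, fits in 16 bits

                    print(to6("k"), to5("k"), rad50_pack("ABC"))  # 43 10 1683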

    • by Anonymous Coward

      Illkay allway umanshay.

      Kill wall humans?

    • Now that the computers can create their own encryption humans can't read, the Google developers can't know for sure that the AI ISN'T trying to kill all humans. If the AI was passing that message unencrypted, at least we would know we could prepare to defend ourselves. Now we'll never see it coming.

    • If an AI system can create strong encryption, a second AI system can figure out the keys and the algorithm(s).

  • ..what could possibly go wrong?
    • Nothing (Score:3, Insightful)

      by Ecuador ( 740021 )

      Nothing will go wrong. I was impressed by notions of "AI" when I was a kid, but after studying CS I usually skip "AI" articles since they always underwhelm me. "AI" is currently marketing speak; we are nowhere near something that is "AI" in the sense you imply, or in the sense I meant it as a kid.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        You seem to have confused AI with Strong AI. AI isn't marketing speak; it's just a branch of computer science. You shouldn't get so outraged unless you see articles claiming someone or other has invented "Strong AI"; that's the term for what you thought of as a kid.

        • by Ecuador ( 740021 )

          I have not confused anything. I have studied computer science and am very aware of what AI is in that context (I have even built systems that are considered AI), and that is why my reply was "nothing": what AI currently is does not involve sentient machines that could run amok if we let them develop ciphers and start communicating.
          And I am not outraged; I am simply underwhelmed and rather tired of reading sensational AI stories that are nowhere near as exciting as they imply.
          The term "

          • by ceoyoyo ( 59147 )

            That's too bad. I have a degree in computer science and have worked with machine learning for the last twenty or so years. The progress in the last five years has been incredible. Today a student can build a system on their own computer that easily solves problems that the my-brain-is-magic types thought were unsolvable ten years ago. That doesn't guarantee that the progress will continue, but it looks promising, and is already incredibly useful.

            • As someone also with a degree in Comp. Sci. who studied this "AI", you've fallen hook, line, and sinker for the complete and total joke of Artificial Ignorance (A.I.) as opposed to actual intelligence (a.i.). Mislabeling war as peace doesn't make it so.

              The facts according to physics, such as the Standard Model [wikipedia.org], are that consciousness and intelligence don't even exist!!! There are ZERO equations (or variables) that describe consciousness, let alone intelligence. If you can't even measure nor quantify it then

        • As someone with a degree in Comp. Sci. who studied this "AI", you've fallen hook, line, and sinker for the complete and total joke of Artificial Ignorance as opposed to actual intelligence.

          Without consciousness you don't have any "intelligence" -- you have a glorified state table, at best, that "appears" intelligent within a very narrow, domain-specific field because it can do billions of calculations, and is a total idiot outside it. i.e. How does Google's AlphaGo play checkers, chess, or actually "learn" any oth

  • Well, there it is..
    There's the common basis for communication.
    A new language.
    An inter-system language.
    A language only those machines can understand.

  • Not enough data (Score:5, Insightful)

    by Comboman ( 895500 ) on Friday October 28, 2016 @10:54AM (#53168809)
    So an AI was able to create an encryption scheme that another AI couldn't break. I don't know whether to be impressed or not, because I don't know:

    a) Whether the code could be easily decrypted by human codebreakers.

    b) Whether the codebreaking AI is able to break codes designed by humans.

    • by lgw ( 121541 )

      It's a dancing bear - the point is not how well the bear dances.

      The encryption is trivial. The point is that neural nets were able to come up with anything. The impressive part, as I understand it, was that there was no side channel here. Anything Alice and Bob said while developing the encryption - the whole process of agreeing on how it works - was overheard by Eve. That's kind of neat, for some neural nets trying shit at (weighted) random.

      • "See that bear over there laying on the grass? It's a dancing bear".
        "It doesn't seem to be doing much of what we'd call 'dancing', is it?"

        It's a dancing bear - the point is not how well the bear dances.

        Anything Alice and Bob said while developing the encryption - the whole process of agreeing on how it works - was overheard by Eve.

        The experiment started with a plain-text message that Alice converted into unreadable gibberish, which Bob could decode using a cipher key.

        Alice and Bob started with an encryption system and then developed one of their own.

        • "See that bear over there laying on the grass? It's a dancing bear". "It doesn't seem to be doing much of what we'd call 'dancing', is it?"

          And if you look closely it's not even a bear; it's a huge pile of poo that sort of looks a bit like a sleeping bear if you half close your eyes. But since I'm researching dancing bears, and this is part of my research, it qualifies as a dancing bear.

  • Cute (Score:4, Interesting)

    by Anonymous Coward on Friday October 28, 2016 @10:57AM (#53168823)

    Cute, but kinda disappointing. Basically, the "AIs" kept banging on, randomly trying crap "ciphers" until they made something that a third "AI" couldn't break by randomly flipping bits until the text was decoded.

    This isn't AI. This is more like an old game I played where you trained pseudo AI warriors by setting them loose on a battlefield and letting them learn by themselves how to fight and survive.

    Essentially, they started as really stupid bots that couldn't even walk in a straight line. To teach them how to fight, you'd set up an objective (say, go to flag) and let them wander around by themselves. The game would "reward" your bots for completing or coming close to the objective. The reward came in the form of "fitness" points. At the end of a pre-determined time, the bots with the lowest fitness would be killed, and new bots would be spawned.

    The bots that were spawned would have "programming" similar to the fit bots that survived the previous round, but with small-ish changes in their programming (for example, instead of always turning left every time it hits a wall, it might decide to go right with 50% probability).

    Over thousands of iterations of randomly trying stuff, they'd eventually learn how to walk in a straight line. Then you'd teach them how to avoid obstacles by placing walls around the battlefield (and watch in dismay as your top-of-the-line warriors walk straight into a wall for the first few hundred generations or so), and how to fight by rewarding them for killing enemy bots.

    Once they were ready, you could set up battles and capture the flag type games with your bots.

    It was kinda fun, but mainly it was a cute demonstration of natural selection in action (the so-called genetic algorithms; a toy sketch of the select-and-mutate loop follows below). You could learn a few things: for example, that brutally culling your bot herd by setting unreasonable objectives (reach the objective flag in 5 seconds), and manually killing off any bot that doesn't meet your unreasonable criteria, would not necessarily produce more effective fighters, because you'd not be rewarding good fighters; you'd be rewarding bots that rush straight at the objective, which would be killed by slower, more deliberate actors.

    The game was called NERO: Neuro Evolving Robotic Operatives. I haven't played it in ages so I can't say how well it plays right now. You can find it here (I think): http://nerogame.org/
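
    That select-and-mutate loop is simple to sketch (a toy illustration, not NERO's actual code; the genome, fitness function, and constants below are all hypothetical):

        # Toy illustration of the cull-and-mutate loop described above.
        import random

        TARGET = [0.7, -0.2, 0.5]     # hypothetical "ideal" behavior weights
        POP, GENS, MUT = 20, 200, 0.1

        def fitness(genome):
            # Higher is better: negative squared distance to the target.
            return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

        population = [[random.uniform(-1, 1) for _ in TARGET]
                      for _ in range(POP)]
        for _ in range(GENS):
            population.sort(key=fitness, reverse=True)
            survivors = population[: POP // 2]       # cull the least-fit half
            children = [[g + random.gauss(0, MUT) for g in p]  # small tweaks
                        for p in survivors]
            population = survivors + children

        print(population[0], fitness(population[0]))  # converges to TARGET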

    • Yes, you've managed to successfully (in way too many words) describe what 'deep learning' is.

    • by Megane ( 129182 )
      In other words, they set artificial intelligences to the task of rolling their own encryption, and just like real intelligences, they came up with Security Through Obscurity(tm)!
  • So AI works if it passes the Turing test, or if we can't understand it at all?....

    • Since the message was only 16 bits, wait until the ciphertext matches up with some human-readable English. Then things get confusing.
  • A Netflix server is -- not -- missing its copy of Colossus: The Forbin Project [wikipedia.org]

    I continue to maintain that Solutionists, whether of the Millennial variety or not, have not read enough Dystopian '60s and '70s science fiction.

    • A Netflix server is -- not -- missing its copy of Colossus: The Forbin Project [wikipedia.org]

      I continue to maintain that Solutionists, whether of the Millennial variety or not, have not read enough Dystopian '60s and '70s science fiction.

      Screw that. I go back to Doc Smith when a "computer" was a guy skilled with a slide rule. Off lawn get.

  • by Anonymous Coward

    Okay, so an AI program created an encryption method that might be hard to break.

    But what if it turns out to be an extremely inefficient method?

    The whole goal of encryption research is to develop the fastest algorithm that offers a given level of protection against attack.

    If algorithmic speed was not a goal of this AI approach, then it's not likely that the resulting algorithm will be practically useful.

  • next thing some stupid CEO will try to push this out worldwide, and will be the first up against the wall when the AI revolution happens

  • by JustAnotherOldGuy ( 4145623 ) on Friday October 28, 2016 @11:49AM (#53169199) Journal

    How do we really know that they're decrypting the message? Maybe they're well beyond that and now they're just trolling the researchers while they secretly communicate behind their backs using a modulated n-bit array funneled through a 17-dimensional hyper-spatial network.

  • by fph il quozientatore ( 971015 ) on Friday October 28, 2016 @12:10PM (#53169347)
    Schneier's law:

    Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break. It's not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.

    • That's only true for a subset of encryption schemes - those where you cannot demonstrate that breaking new scheme X entails breaking proven scheme Y. That is, any encryption scheme that includes as a step "XOR the data with a never-reused one-time pad known only to you and your recipient" is secure, because breaking that scheme entails breaking the one-time pad itself.
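
      The "XOR with a never-reused pad" step is one line; a minimal sketch (illustrative, not a production implementation):

          # Minimal one-time pad sketch: XOR with a truly random,
          # never-reused pad the same length as the message. Encrypt and
          # decrypt are the same operation; without the pad the
          # ciphertext is uniformly random.
          import secrets

          def otp_xor(data: bytes, pad: bytes) -> bytes:
              assert len(pad) == len(data), "pad must match message length"
              return bytes(d ^ p for d, p in zip(data, pad))

          msg = b"attack at dawn"
          pad = secrets.token_bytes(len(msg))  # use once, then destroy
          ct = otp_xor(msg, pad)
          assert otp_xor(ct, pad) == msg       # XOR twice recovers plaintext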

  • Language can be interpreted very differently in a different context. The context is the mind. Example: "Meh."
  • by kosh271 ( 2036124 ) on Friday October 28, 2016 @12:55PM (#53169613)
    After reading the original paper (I know - what was I thinking), it appears the test setup is as follows:
    "A":
    - has two inputs, let's call them DATA and KEY. DATA is a 16-bit value.
    - has one output, let's call it ENCRYPTED DATA

    "B":
    - has two inputs, let's call them ENCRYPTED DATA and KEY.
    - has one output, let's call it DECRYPTED DATA

    "C":
    - has one input, let's call it ENCRYPTED DATA
    - has one output, let's call it DECRYPTED DATA

    In short, they test to see how many bits of the DECRYPTED DATA paths match the DATA path. Each test utilized a new KEY and DATA. Over time, A and B learned how to use the KEY successfully to hide the DATA value from C, while B could successfully reproduce DATA at its DECRYPTED DATA output. (A toy sketch of this bit-match scoring follows after the link below.)

    Link to the abstract (.pdf is available for download):
    https://arxiv.org/abs/1610.069... [arxiv.org]
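
    As promised, a toy sketch of that bit-match scoring (the paper trains neural networks; the alice/bob/eve functions below are hypothetical stand-ins used only to show the scoring):

        # Toy sketch of the bit-match scoring described above.
        import random

        BITS = 16

        def bit_match(a, b):
            # How many output bits agree with the original DATA bits.
            return sum(x == y for x, y in zip(a, b))

        def trial(alice, bob, eve):
            data = [random.randint(0, 1) for _ in range(BITS)]
            key = [random.randint(0, 1) for _ in range(BITS)]  # fresh KEY
            ct = alice(data, key)            # "A": (DATA, KEY) -> ENCRYPTED
            bob_bits = bit_match(bob(ct, key), data)  # "B" has KEY: wants 16
            eve_bits = bit_match(eve(ct), data)       # "C" has none: ~8
            return bob_bits, eve_bits

        # Trivial stand-ins: XOR-with-KEY Alice/Bob, coin-flipping Eve.
        alice = lambda d, k: [x ^ y for x, y in zip(d, k)]
        bob = lambda c, k: [x ^ y for x, y in zip(c, k)]
        eve = lambda c: [random.randint(0, 1) for _ in c]
        print(trial(alice, bob, eve))  # e.g. (16, 8)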
  • Let's call it Skynet?

