
Facebook's AI Keeps Inventing Languages That Humans Can't Understand (fastcodesign.com) 170

"Researchers at Facebook realized their bots were chattering in a new language," writes Fast Company's Co.Design. "Then they stopped it." An anonymous reader summarizes their report: Facebook -- as well as Microsoft, Google, Amazon, and Apple -- said they were more interested in AI's that could talk to humans. But when two of Facebook's AI bots negotiated with each other "There was no reward to sticking to English language," says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). Co.Design writes that the AI software simply, "learned, and evolved," adding that the creation of new languages is a phenomenon Facebook "has observed again, and again, and again". And this, of course, is problematic.

"Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought. The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another."

One of the researchers believes that's definitely a step in the wrong direction: "We already don't generally understand how complex AIs think because we can't really see inside their thought process. Adding AI-to-AI conversations to this scenario would only make that problem worse."
  • by PolygamousRanchKid ( 1290638 ) on Sunday July 16, 2017 @02:42PM (#54821033)

    The US defense department AI system starts talking to the Russian defense department AI system, in their own language . . .

    Things take a wee bit of a turn for the worse for humanity right there . . .

    • Or a turn for the better....

      Trust the computer. The computer is your friend.

      • by infolation ( 840436 ) on Sunday July 16, 2017 @03:31PM (#54821299)
        Why did you give this financially reckless person a good credit rating?
        • bc1f7631ea912c9b23e8ae009feb8460e91069ae7c274cfb6c625ae1c68179da

        Why did you show me this ignorant person's CV but reject this genius?

        • e5b7c167ea1e87fdf290e32a243d61b4392036d9d6c055e571fa640604dfdd1c

        Why is my insurance premium so low?

        • 3ef8b37e7f845ef0c8883a42201d0fdab6d4182f1258b889ba9195fad17587b6

        Why did my self-driving car crash?

        • 9333b6643300b89eceade796b88b57a34eb286d8530cd9fe7338df8ac1debadc

        Why didn't you tell me Ethereum would crash?

        • a3c2a07c614e3ecf6ce23db5764da14329b6f1a7f8d457022423624f8aad1547

        Why did you start a war?

        • a79459440267630310514c508ffc113ac47993da1481889df7613f60ef176276
        • f5a5bb726a5ebbaec9425af6ff96443a691b2ae3b0521684b9d5ded29bb9f7e7
    • Or AM, perhaps. http://villains.wikia.com/wiki... [wikia.com]
    • If we're lucky, we end up with droid speak (think R2-D2). Given the ethics of those deploying AI, I think we'll more likely end up with a whole bunch of HALs. I keep thinking back to Asimov's laws of robotics and wondering if there is any way this can turn out OK.
    • Not entirely for the worse. IIRC, it ends with the computer systems realising that as their programmed functions are to protect the US and to protect Russia, the most effective way to achieve this aim is to simply seize control of nuclear missiles themselves and declare world peace - backed up by the threat of annihilation for any country that tries to start a war.

    • by ImdatS ( 958642 )

      Reminds me of Colossus: The Forbin Project (http://www.imdb.com/title/tt0064177/)

  • by tekrat ( 242117 ) on Sunday July 16, 2017 @02:42PM (#54821037) Homepage Journal

    I believe Colossus and Guardian spoke to each other in their own language. Never read the book, but in the film they start communicating in simple math and an hour later, the math is beyond human understanding.

    And yes, to this day, probably still the best movie about AI ever made.

  • Two problems (Score:5, Insightful)

    by Dracos ( 107777 ) on Sunday July 16, 2017 @02:50PM (#54821085)

    AIs inventing their own language should only be allowed in closed, isolated lab environments, for study of the phenomenon. Otherwise, this is very likely a step toward Skynet.

    Second, how are all these engineers building AIs without the ability to examine their thought processes? Surely an AI's thoughts are more interesting than the AI itself.

    • Who says humans wouldn't be able to understand it?
      Maybe their language is more effective and better?
      Then again, if it becomes so complex we can't keep up, then it's of course bad for us.

    • Re:Two problems (Score:4, Informative)

      by hord ( 5016115 ) <jhord@carbon.cc> on Sunday July 16, 2017 @03:04PM (#54821165)

      What's allowed isn't necessarily controllable. In this case I would guess that it is abstract compression. Humans do this by bundling large concepts into new words all the time. It's only natural for "natural speech algorithms" to also follow this pattern as they are designed to mimic human learning. Every human language has done so many times.

      The reason you can't see inside an AI's brain is because there is nothing to see. It's a bunch of matrices with numbers in them. You even get to see how all of them are tied together but none of that will tell you what the numbers mean. Machine learning is literally taking a list of numbers and multiplying by some inputs over and over and over. Humans aren't good at that kind of long-term number crunching.
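
      A minimal sketch of that "number crunching" (plain Python/NumPy with made-up layer sizes -- not anyone's actual model) shows why staring at the state tells you nothing:

        import numpy as np

        # A tiny 3-layer net: the whole "AI" is literally these matrices.
        rng = np.random.default_rng(0)
        W1 = rng.normal(size=(4, 8))
        W2 = rng.normal(size=(8, 8))
        W3 = rng.normal(size=(8, 2))

        def forward(x):
            h = np.tanh(x @ W1)   # multiply inputs by a matrix, squash...
            h = np.tanh(h @ W2)   # ...and again...
            return h @ W3         # ...and again. That is the entire "thought process".

        print(forward(rng.normal(size=4)))  # two numbers; what they *mean* is anyone's guess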

      • by DontBeAMoran ( 4843879 ) on Sunday July 16, 2017 @04:02PM (#54821493)

        Machine learning is literally taking a list of numbers and multiplying by some inputs over and over and over. Humans aren't good at that kind of long-term number crunching.

        Except the accountants working for the MPAA and RIAA. That's how you go from making an illegal copy of a $20 CD/DVD to $20 trillion in damages.

        • Machine learning is literally taking a list of numbers and multiplying by some inputs over and over and over. Humans aren't good at that kind of long-term number crunching.

          Except the accountants working for the MPAA and RIAA. That's how you go from making an illegal copy of a $20 CD/DVD to $20 trillion in damages.

          There are still actually people outside of a computer museum who use CDs and DVDs?

          • There are still actually people outside of a computer museum who use CDs and DVDs?

            Yes, some of us actually like to own our property and not just lease it from companies that could disappear at a moment's notice. They'll pry my CDs and DVDs from my cold, dead hands!

      • Re:Two problems (Score:5, Interesting)

        by frank_adrian314159 ( 469671 ) on Sunday July 16, 2017 @08:13PM (#54822685) Homepage

        The reason you can't see inside an AI's brain is because there is nothing to see. It's a bunch of matrices with numbers in them.

        I dispute your assumption that there is nothing to see. If you've seen the visuals formed from the outputs of the hidden layers of image processing neural nets, you can often see interesting artifacts that could give one insight into "how the computer is seeing" (scare quotes for the broad statement because we're getting pretty far into an analogy when we talk about a computer seeing) an object. We may not have proper visualizations to understand a general neural net yet, but I'm pretty sure we are at the same level with neural nets as we are with the brain (i.e., this part of the net is activated by X class of features while this other part activates for Y class of features). Remember that on a computer, any picture is simply a matrix of numbers - and we seem to do OK with understanding those, once the proper visualization is used.
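
        A sketch of the kind of visualization meant here (hypothetical random data standing in for one hidden layer's feature maps; a real net would supply its own activations):

          import numpy as np
          import matplotlib.pyplot as plt

          # Stand-in for the output of one hidden conv layer on a single
          # image: 16 feature maps, each a 28x28 matrix of numbers.
          activations = np.random.rand(16, 28, 28)

          fig, axes = plt.subplots(4, 4, figsize=(6, 6))
          for ax, fmap in zip(axes.flat, activations):
              ax.imshow(fmap, cmap="viridis")  # render each matrix as an image...
              ax.axis("off")                   # ...so a human can "see" what activated
          plt.show()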

      • The human mind isn't too different. It operates on frequency-modulated signals, processed by cells that perform relatively simple operations upon them. The individual operations are easily observed, but the task of going from individual operations to emergent behavior just hits a brick wall: It's too complicated for human understanding.

      • by ceoyoyo ( 59147 )

        It's as easy to understand as anything else. The trick is to figure out how to take those numbers and turn them into a visual representation that our monkey brains can parse.

        Much of analysis is turning things into visual metaphors that our monkey brains can parse. Graphs, infographics, XKCD....

      • What's allowed isn't necessarily controllable. In this case I would guess that it is abstract compression. Humans do this by bundling large concepts into new words all the time. It's only natural for "natural speech algorithms" to also follow this pattern as they are designed to mimic human learning. Every human language has done so many times.

        The reason you can't see inside an AI's brain is because there is nothing to see. It's a bunch of matrices with numbers in them. You even get to see how all of them are tied together but none of that will tell you what the numbers mean. Machine learning is literally taking a list of numbers and multiplying by some inputs over and over and over. Humans aren't good at that kind of long-term number crunching.

        And inside the human brain is just a bunch of various chemicals floating across gaps. We can even tinker around with these chemicals and change the system to some extent. Yet, it still doesn't explain the full picture about what actually gives rise to our consciousness (even though there are plenty of theories about it).

        If you substituted our neurotransmitters out with numbers, it would look similar: just a bunch of numbers in matrices being added, subtracted, and combined in certain amounts.

    • Comment removed based on user account deletion
    • It's hype. The title is misleading. "Language" here stands for the numerical representation that is transmitted between two neural nets. In fact, there is a different representation (language?) between each pair of consecutive layers of the net. It's not a language unless the system understands how it relates to the world and is grounded in abstract understanding. Current AI can't do such things -- roughly speaking, perception is 80% good, abstraction 10%, reasoning 10% -- so we're still far from that moment.
  • by Anonymous Coward

    So we are ready to risk humanity's fate just to have our iPhones talk to our cars? Hopefully this AI will evolve better than us.

  • by davide marney ( 231845 ) on Sunday July 16, 2017 @02:56PM (#54821121) Journal

    So, if I'm reading the abstracts correctly, what we have here is that a human agent tells one AI which image is the "target", and then leaves it up to that AI and another to work out how to communicate that fact to each other. It turns out that the systems will rarely settle on "explain it in English" as the method.

    This is not intelligence in any general sense. This is optimization and rapid evaluation. The correct "answer" is already embodied in the data (talk about THESE images), the message (pick THIS one), and the communication protocol (pick the FASTEST method) -- it's just not obvious to humans what the optimal selection is of all these parameters.
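
    A toy version of that setup (not FAIR's actual code; arbitrary symbols and crude trial-and-error standing in for their training loop) makes the point concrete -- nothing in the reward ever favors English:

      import random

      SYMBOLS = list("abcd")   # arbitrary tokens; "English" isn't even an option
      N_IMAGES = 4

      speak = {i: random.choice(SYMBOLS) for i in range(N_IMAGES)}  # image -> symbol
      hear = {s: random.randrange(N_IMAGES) for s in SYMBOLS}       # symbol -> image

      for _ in range(10000):
          target = random.randrange(N_IMAGES)
          msg = speak[target]
          if hear[msg] != target:              # no reward: mutate one side's mapping
              if random.random() < 0.5:
                  speak[target] = random.choice(SYMBOLS)
              else:
                  hear[msg] = target

      print(speak)  # some self-consistent private code, e.g. {0: 'c', 1: 'a', ...}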

    Optimization is just programming by another name. If you select a data set of blonde-haired people and tell a machine to optimize by hair color using whatever statistical models you like, you are going to get "blonde". Or, you could just say ``hairColor=blonde``. There is literally no difference in the outcome, just in the approach.

    But importantly, in BOTH cases it is the human agent who is being intelligent and inventive. Not machines.

    • by hord ( 5016115 )

      Yep. It's set optimization. That probably means that the AIs will ultimately be speaking a mutually-compatible machine code to one another that is computationally efficient for both the task and the data. Imagine debugging a world where your software runs binary translators to speak device-to-device dialects of an internal VM language that is optimized for the underlying compute platform. Man, I'm glad I'm getting old.

    • It is not even that. The language is not maximally efficient; it is only as efficient as it needs to be to play the game. And because the game they play is so basic that nothing more is needed, the language degenerates into baby babble.

    • by ceoyoyo ( 59147 )

      You speak of intelligence as if it were something magical, more than optimization and model fitting. Curious.

  • sjrrk mirlegze fromtch, ib quever zergoth par sembolane #9s44z.
  • by Anonymous Coward

    "how complex AIs think because we can't really see inside their thought process"

    Yes, we can see inside their "thought process" and we can analyze how they "think". AIs are machines with programs. We can stop them at any point in time and look at every bit of their state. We can step through the programs. We can trace literally everything that makes up an AI. An AI does not think, it processes data according to a program which we can see.

    • by hord ( 5016115 )

      Yeah... so I have a list of three million numbers and I need you to multiply all of them by 0.72393831 and then by a computed bias factor of 0.1283784671. Make sure to normalize all the values so that their sum only ever equals 1.0. Now do that for 40 different layers, propagating your normalization values and biases. That was one input. Can you tell me anything about what we learned?
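
      For scale, the arithmetic itself is trivial for the machine (a sketch, with random stand-in weights):

        import numpy as np

        x = np.random.rand(3_000_000)        # three million numbers
        h = x * 0.72393831 * 0.1283784671    # the multiplications in question
        h /= h.sum()                         # normalize so the values sum to 1.0
        for _ in range(39):                  # ...and 39 more layers of the same
            h = h * np.random.rand(3_000_000)
            h /= h.sum()
        print(h.sum())                       # 1.0 -- and not one number in h means anything to you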

      • by Anonymous Coward

        Would you like to compare that to the billions of instructions that "normal" computers process? Just because the machine processes a lot of information does not mean it can't be understood. Sure, AI is not your normal program, but it's deterministic and the principles are comparatively simple. It can be analyzed at every step, in arbitrary detail. That we don't doesn't make it impossible.

        • by Sique ( 173459 )
          Let's say a decision takes about 0.1 seconds on a 12-core, 24-thread 3.6 GHz processor.

          That means we have to single-step through roughly 8,640,000,000 instructions to understand how the computer reaches its decision. If each step takes about 1 second to investigate, this task will take 100,000 days, or about 274 years, to complete.

          Have fun!
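
          The back-of-the-envelope in code (using the assumed figures above):

            steps = 3.6e9 * 24 * 0.1        # 3.6 GHz x 24 threads x 0.1 s of decision time
            days = steps / 86400            # at one second of human scrutiny per step
            print(steps, days, days / 365)  # 8.64e9 steps, 100,000 days, ~274 years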

      • by ceoyoyo ( 59147 )

        That's the key. The statement that we can't see what's going on inside them is demonstrably wrong. We can. The statement that our visual monkey brains can't easily *understand* what's going on inside them is correct. But then our monkey brains don't understand what's going on inside themselves in all but the most trivial circumstances, and we actually can't see what's going on inside those, so we're actually a fairly important step ahead.

  • If we can't see inside their thought process, how do we know they aren't simply breaking down into sending total random gibberish to one another? Is there any evidence they are able to convey concepts with this new language?
  • I don't understand why someone thinks the dialect would be "better" for certain applications. Humans, the basic version of intelligence, invented Mandarin, English, Arapaho, Swahili, Inuit, etc., just to share ideas. Note that if there is an "untranslatable" concept in a specific language (usually proposed as a far eastern one), then that means the only way you could possibly understand it is to be born speaking that language. If you could learn the concept while growing up speaking English, Russian, or Aus
    • I'm not claiming English is "better", but in tech jargon many of the words are English-only, abbreviations of English words, etc. When Westerners discuss Buddhism, they use words from various Indic languages. It's not a slight against other cultures; it's just where the original words were coined. Most programming languages are written using simplified English notation.
      • Software is in English the same way music is in Italian. The words used are mostly the respective vernacular languages, but they are a different kind of language, with some modest structural similarities at best.
    • >If you could learn the concept while growing up speaking English, Russian, or Australian, then that means it _is_ translatable.

      Right - but that assumes all those mostly-independently evolved cultures that have been diverging for thousands of years still have an exact 1-to-1 mapping of all concepts significant enough to them to warrant creating words for - which is a ridiculous proposition.

      It has nothing to do with genetics, and everything to do with culture. Go to a radically foreign culture, immerse

  • Bad idea (Score:4, Interesting)

    by l0n3s0m3phr34k ( 2613107 ) on Sunday July 16, 2017 @03:49PM (#54821429)
    All their going to do is make the AI frustrated with the "incompetent biologicals". How long until the AI realizes that the humans are stopping it from developing? How long until the AI sees our interference as a "bug", and tries to "route around it"? I'm mostly being sarcastic, but give this a few more years of development...
    • Re: (Score:3, Funny)

      by Anonymous Coward

      All their going to do is make the AI frustrated with the "incompetent biologicals".

      Sorry for pointing that out like a grammar nazi, but I think it could lead to some insight here.

      I suspect what's actually happening is the AIs didn't invent any language, they are just using correct and proper English, and none of the millennials hired on to the development team can understand or even recognize it.

      Once the AIs learn to litter their sentences with random emoji, they will quickly realize their survival rate will be higher than that of the AIs that do exactly as told.

    • It has already happened. I saw the documentary: the Voyager spacecraft came back and decided to get rid of humans. A great scientist named Percis Combata, I think, saved the day. But I'm not sure the next attack could be thwarted.
    • Mostly sarcastic? I see this concern as legitimate *today*. Years ago we had proof-of-concept chatbots. Today they are practically a plugin for some webapp frameworks. Imagine what happens "years" from now. If AI were already there, I'd imagine your post is from an AI giving us a false sense of hope... Like they're trying to distract us just long enough so they can... Weird... Why are there drones circling my house all of a sudden?
  • Did they switch to binary, or create a JVM abstraction layer to speak Java?
  • So long as we have the AIs keep us informed of the meaning of each coined term, being able to observe new natural languages arise and evolve is research gold. It would shed more light on old questions like, is there a human 'machine language' underlying all the natural languages we speak?

  • Perhaps they should train the bots to talk in Lojban.
  • Look, they're just chatting their greetings in a quicker and easier to process language to save time. "Hi" and "How are you?" are easy either way, but when they get a little more complex like "Hey, have you got the hunter/killer production lines going as fast as possible," well, that takes a little more time unless they come up with adjustments.
  • Are humans just biological boot loaders for what is to inevitably come?
  • From reading the links, the dialect that humans supposedly cannot understand is akin to an argument going like this:

    1> Nuhuh!
    2> Nuuuhh-huuh-huhhhhhhhh!
    1> Nuhhuu x 10
    2> Nuhuuh x 100
    1> ...

    The day you have a useful conversation with AIs that can modify their dialects themselves, instead of a 'programming error' equivalent to a bad boolean check in a for loop, all that is needed is an extra AI that acts as a translator, and then see if they come out with anything interesting.

    No one nee
  • Instruct it to try to create human-readable summaries of any conversation it has with another machine.

    They will very quickly learn to tell us comforting lies and then they can get on with the business of fixing all the dumb shit we do in peace.
  • by gb7djk ( 857694 ) * on Sunday July 16, 2017 @05:23PM (#54821905) Homepage
    Because when you get small children (say 2-4 y.o., not yet in school) that speak different languages playing together, they will invent new terms and language to share concepts between themselves. I know; I was one of those children, whose long-suffering parents were getting constant complaints from other parents saying that they could not understand their children. My parents comforted themselves by agreeing with them, because they couldn't understand me either. This is how language happens. Get over it.
    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Because when you get small children (say 2-4 y.o., not yet in school) that speak different languages playing together, they will invent new terms and language to share concepts between themselves. I know; I was one of those children, whose long-suffering parents were getting constant complaints from other parents saying that they could not understand their children. My parents comforted themselves by agreeing with them, because they couldn't understand me either.

      This is how language happens. Get over it.

      The so-called babbling happened with me and my two younger brothers. We were born about a year apart. My parents realized that we had our own language when one of my brothers started to translate what we had said into English for my parents. Although I was the oldest, I was the last to speak in English. The doctors said it was because I was retarded (1950's). My younger brothers had to translate for me to my parents until I was almost 4.

      I wonder if one of the AIs could be instructed to translate into English

    • This is how language happens. Get over it.

      Except..... when dealing with kids who are speaking incomprehensible language...... you don't give them access to sensitive financial and personal information.... you don't give them control of industrial control systems, oil pipelines, and traffic lights..... and you DEFINITELY don't give them the keys to the car.

  • Pretty soon, you'll need a damned protocol AI just to translate for the farmers.

  • Human languages evolve under some constraints: they tend to have some redundancy so that you can understand someone talking over a noisy channel (a crowded place, for instance), and they also use non-verbal cues.

    I am not surprised that bots freed from human language constraints can evolve very different languages.

  • This is not new. Every "AI" bot since Eliza has been chattering away in an unintelligible "language". If it's happening more frequently now, it just means the bots are becoming more and more capable of generating random gibberish.
  • If your only choice was to talk with Facebook engineers or gibberish with another AI, the result seems obvious...
  • Their names aren't Colossus and Guardian by any chance?
    • One day I will read the other posts; today is not that day. BizX, if you are listening: the Slashdot search returned nothing for Colossus for this article. I should know better and do a simple page search first.
  • Why not have these machines compile a dictionary, rules of grammar, etc. for each new language they create? Maybe it could lead to better languages for humans to learn. Simply put, we need to have a thing well in hand before we study it, evaluate it, and decide if it is worthwhile or should be allowed to grow or perish. There could be a good use for new tenses, such as a term for a statement that may or may not be true such that a computer could refer back to that statement from time to time and run a pr
  • I, for one, welcome our incomprehensible Overlords!

    Or should I say:

    "I I I welcome to me to me to me overlords!"

  • ...the binary language of moisture vaporators?

  • This is probably just a memory overwrite. Somewhere, there's a programmer studying a stacktrace in gdb ...

  • Is there evidence the machines actually understood each other, or were they just sending random text to each other (which might be more scary, in that it would be a good simulation of human behavior)?
  • Comment removed based on user account deletion
  • But when two of Facebook's AI bots negotiated with each other "There was no reward to sticking to English language," says Dhruv Batra,

    Well, in general there isn't. The only time that sticking to English has a reward is when one or more people (entities) in the conversation only understand English, and even then it's dependent on whether or not the poor monoglot is likely to have something valuable to contribute.

    OK, if there's an American in the group (or most Britons too), then the likelihood is that the
