Facebook Open Sources AI Hardware Design (facebook.com)

UnknowingFool writes: Facebook has released specifications for its newest Open Rack-compatible server, dubbed Big Sur, which is designed for AI computing at large scale. With eight 300W GPU slots, the server is touted to offer more efficient neural network training on GPUs. The announcement reads in part: "We plan to open-source Big Sur and will submit the design materials to the Open Compute Project (OCP). Facebook has a culture of support for open source software and hardware, and FAIR has continued that commitment by open-sourcing our code and publishing our discoveries as academic papers freely available from open-access sites. We're very excited to add hardware designed for AI research and production to our list of contributions to the community. We want to make it a lot easier for AI researchers to share techniques and technologies. As with all hardware systems that are released into the open, it's our hope that others will be able to work with us to improve it. We believe that this open collaboration helps foster innovation for future designs, putting us all one step closer to building complex AI systems that bring this kind of innovation to our users and, ultimately, help us build a more open and connected world."
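
The announcement describes an eight-GPU box aimed at training neural networks. As a rough illustration of the kind of data-parallel training workload such a server targets, here is a minimal sketch; the framework (PyTorch), the toy model, and the synthetic data are assumptions chosen for illustration and are not part of Facebook's announcement.

    # Hypothetical sketch of multi-GPU, data-parallel training of the sort an
    # eight-GPU server like Big Sur is built for. PyTorch, the toy model, and
    # the random data are illustrative assumptions, not Facebook's stack.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))
    if torch.cuda.is_available():
        # Replicate the model across all visible GPUs (up to eight on such a
        # box) and split each input batch among them.
        model = nn.DataParallel(model).cuda()

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(256, 1024)          # synthetic input batch
        y = torch.randint(0, 10, (256,))    # synthetic class labels
        if torch.cuda.is_available():
            x, y = x.cuda(), y.cuda()
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)         # forward pass spread over the GPUs
        loss.backward()                     # replica gradients reduced onto GPU 0
        optimizer.step()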
  • Using eight 300W GPU slots

    Dubbed "Big Sur," the hardware includes eight high-performance GPU boards of up to 33 watts

    Quite a bit of difference...

  • Neural networks and machine learning are not 'artificial intelligence'. Anyone who confuses them with AI is just parroting hype. Refer to them by their proper names, and leave the vaguely defined 'artificial intelligence (AI)' out of any serious article until actual intelligence has been artificially created.

    • by gweihir ( 88907 )

      So far CS research is even struggling to create Artificial Stupidity. No AI that deserves the name anywhere in sight, not even as a credible theoretical model.

    • You don't seem to understand what AI stands for: Artificial Intelligence, not Actual Intelligence.
    • Why does "AI" have to be dominated by "classical AI"? Classical AI didn't even attempt to model intelligence -- it was pure behaviourism at heart -- so I don't really know if it can justify being called AI at all.
    • by ranton ( 36917 )

      Neural networks and machine learning are not 'artificial intelligence'. Anyone who confuses them with AI is just parroting hype. Refer to them by their proper names, and leave the vaguely defined 'artificial intelligence (AI)' out of any serious article until actual intelligence has been artificially created.

      Stop confusing AI with strong AI. Any system that artificially creates results that humans would perceive as intelligent qualifies as Artificial Intelligence. Your definition of AI is not the industry standard, so why would any serious article adhere to it?

  • by gweihir ( 88907 ) on Thursday December 10, 2015 @09:31PM (#51098389)

    Calling neural nets "AI" is about as far removed from what that term implies as possible without leaving the area of classifiers. The only thing neural nets do is this: if you show them enough examples of a specific thing, they eventually have a good chance of recognizing other instances of that thing. No intelligence involved, just pattern matching where you can train the patterns instead of having to configure them.
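
    A minimal sketch of the "show it enough examples, then it recognizes new instances" loop described above, assuming a tiny perceptron, NumPy, and synthetic two-cluster data (all chosen purely for illustration):

      # Toy example of trainable pattern matching: fit a one-layer perceptron on
      # labeled examples, then have it classify a new instance. NumPy and the
      # synthetic data are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      # 200 labeled examples: class 1 clusters around (2, 2), class 0 around (-2, -2).
      X = np.vstack([rng.normal(2.0, 1.0, size=(100, 2)),
                     rng.normal(-2.0, 1.0, size=(100, 2))])
      y = np.array([1] * 100 + [0] * 100)

      w = np.zeros(2)
      b = 0.0
      for _ in range(20):                     # a few passes over the training set
          for xi, yi in zip(X, y):
              pred = 1 if xi @ w + b > 0 else 0
              w += (yi - pred) * xi           # classic perceptron update
              b += (yi - pred)

      new_point = np.array([1.5, 2.5])        # an instance it has never seen
      print("class:", 1 if new_point @ w + b > 0 else 0)   # -> 1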

    • by joaommp ( 685612 ) on Thursday December 10, 2015 @09:42PM (#51098435) Homepage Journal

      Isn't that exactly how our brains work? Hmmm... I wonder why they call them "neural nets"... Isn't it reasonable, then, to regard it as a (even if very primitive) form of artificial intelligence, or at least, a component of it?

      • by Anonymous Coward

        The number of variables that our brain processes to make decisions is quite large. Our brains take shortcuts when recognizing patterns, but even with the shortcuts, there is nothing that comes close to this level of processing. So, for instance, consider how all five senses can be used to recognize situations. e.g. I smell burning wood, I see smoke, I can hear a fire engine, there must be a fire beyond that hill. Our knowledge from different areas and life experiences merge together to help us recognize and

        • Re: (Score:3, Informative)

          by Anonymous Coward

          I wonder if anyone has calculated how many parallel processes and how much instantly accessible RAM would be needed to simulate the human brain. I'm guessing the compute power needed would be jaw dropping.

          http://www.extremetech.com/extreme/163051-simulating-1-second-of-human-brain-activity-takes-82944-processors

          "It took 40 minutes with the combined muscle of 82,944 processors in K computer to get just 1 second of biological brain processing time. While running, the simulation ate up about 1PB of system memory"

          • http://www.extremetech.com/extreme/163051-simulating-1-second-of-human-brain-activity-takes-82944-processors

            "It took 40 minutes with the combined muscle of 82,944 processors in K computer to get just 1 second of biological brain processing time. While running, the simulation ate up about 1PB of system memory"

            82,944 × 40 × 60 ≈ 200,000,000, which is about 2^27.5.

            In other words, about 27.5 Moore's Law doublings to run the simulation in real time.

            That's 41 years...

            2056. I expect to still be alive then, in my early 80s, barring the wars for oil reaching Norway (it could happen...).
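
            A quick back-of-the-envelope check of that estimate (the 18-month doubling period is the usual Moore's Law assumption, not a figure from the article):

              # Check of the parent's arithmetic. The 18-month doubling period is a
              # common Moore's Law assumption, not something stated in the article.
              import math

              slowdown = 82_944 * 40 * 60      # processors x minutes x seconds/minute
              doublings = math.log2(slowdown)  # how many 2x steps close the gap
              years = doublings * 1.5          # one doubling every 18 months

              print(f"slowdown factor: {slowdown:,}")             # ~199,000,000
              print(f"doublings needed: {doublings:.1f}")         # ~27.6
              print(f"years at 18 months/doubling: {years:.0f}")  # ~41, i.e. ~2056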

      • by KGIII ( 973947 )

        We actually have a few AI researchers here on this site and, if they're to be believed, the answer is no and that it's not even likely to be a step in the right direction. I don't know enough to opine but that's what they have shared on more than one occasion. They seem to be, for the most part, in agreement on this. I believe they use a fairly strict definition of the phrase, however.

        • Comment removed based on user account deletion
          • by ranton ( 36917 )

            I think many have abandoned the term AI. There's too much history and it's misleading. Machine learning is more often used, but the phrase I think is most appropriate is statistical learning.

            AI is still the most appropriate term when you are referring to the entire field as a whole. There are subsets of artificial intelligence that do not use machine learning or statistical models. At its core, artificial intelligence is artificially creating systems which exhibit intelligent behavior. Identifying faces in images or transcribing text from speech certainly qualify. Even simple Bayesian networks in video games that allow computer controlled characters to interact with human players qualify. Some

          • by gweihir ( 88907 )

            Fully agree with that. The thing is that pattern recognition with neural nets is an entirely mechanical process; it does not resemble anything humans do. It has no understanding of what it is doing.

            I do agree that the term "AI" has now been so warped by the media and the public that it is worthless. Even "strong AI" or "true AI" are getting compromised. The thing to remember is that "AI" does not mean anything resembling human intelligence, but is something entirely different.

      • by Tablizer ( 95088 )

        No, the brain also runs on love and beer.

      • Isn't that exactly how our brains work?

        We don't know "exactly how our brains work". Even if two things outwardly appear to do the same thing, they may be doing very different things under the hood. In the specific case you're referring to, there's likely an even bigger difference: I reckon brains are able to generalise much better than machines, from far fewer samples. They are also far better at adapting previous knowledge to novel situations.

        • by gweihir ( 88907 )

          Human beings are fundamentally better at this type of classification task. Neural nets have no chance of ever matching them, and it is not a question of computing power. There is a fundamental difference in the quality of the problem, which humans usually describe as "recognizing the nature of the thing" or something like it, and there is no computer equivalent for that. It seems to be a process involving consciousness in humans.

          That is not to say that neural nets are useless. They just do not do anything

          • We don't know that neural nets can't match human performance because we don't know how humans do it. How we describe our classification ability verbally ("recognizing the nature of the thing" or whatever) isn't relevant: it's the wrong level of description to be comparing to a neural network. The correct level of description is a detailed theoretical understanding of how a defined circuit of brain cells does things like generalization, pattern completion, pattern recognition, etc. That is something you can
            • by gweihir ( 88907 )

              it's the wrong level of description to be comparing to a neural network. The correct level of description is a detailed theoretical understanding of how a defined circuit of brain cells does things like generalization, pattern completion, pattern recognition, etc. That is something you can meaningfully compare to what an in silico network is or may be capable of. When you do that you may realise that the circuit of brain cells is also not doing anything clever.

              You are deep into the area of belief here, as you assume, without proof or even good reason, that "the circuit of brain cells is also not doing anything clever", despite ample indication to the contrary. In fact, we do not even know that it is merely a "circuit of brain cells" that produces these results. Again, there is a lot of indication that it may not be, and only a physicalist (unfounded, religious) assumption that it is.

              This makes your argument circular: Assume the mental abilities of a human are created

              • In fact, we do not even know that it is merely a "circuit of brain cells" that produces these results.

                We see no evidence that anything else is involved. I'm not advocating or proposing any fundamental truths, I'm just suggesting that we keep doing experiments and see what we learn. So far what we learn supports what I say. There's no religious conviction in that, just data. If the data suggest something else then I'll change my mind.

                By saying "nothing special" I don't mean that the process isn't amazing, because it is. I just mean that as answers come in we may find what is going to be slightly more pros

                • by gweihir ( 88907 )

                  A bit like now, since the revolutions in molecular biology, genetics, and developmental biology, most people no longer consider "what is life" to be such a deep question. Nowadays anyone with an education in these fields really knows quite well what "life" is. There's no big shocking mystery there, just an enormous quantity of really complicated stuff.

                  Actually, these are incomplete models that people mistakenly believe to be complete. Sure, it looks like cells are just chemical machines, but until one has been built from scratch, that is just an idea, not a hard fact. And with intelligence and consciousness? For the latter there is no physical mechanism, in the sense that physics simply does not apply. For the former, it increasingly looks like physics alone cannot do it either.

                  • Actually, these are incomplete models that people mistakenly believe to be complete. Sure, it looks like cells are just chemical machines, but until one has been built from scratch, that is just an idea, not a hard fact.

                    We are indeed. I think you're greatly underestimating what we've learned since the 1950s if you think it amounts to "just an idea". [theguardian.com]

                    And with intelligence and consciousness? For the latter there is no physical mechanism in the sense that physics simply does not apply. For the former, it increasingly looks like physics alone cannot do it either.

                    If not ultimately physics then what?

                    • by gweihir ( 88907 )

                      Actually, these are incomplete models that people mistakenly believe to be complete. Sure, it looks like cells are just chemical machines, but until one has been built from scratch, that is just an idea, not a hard fact.

                      We are indeed. I think you're greatly underestimating what we've learned since the 1950s if you think it amounts to "just an idea". [theguardian.com]

                      I am abreast of the pertinent scientific advances. They are impressive, but they do not mean that we understand the mechanisms involved fully or even mostly. You are misinterpreting them. This is not "creating life from scratch". It is roughly comparable to doing a BIOS update. That does not mean you know what you are doing and why it works, or what the rest of the computer actually does. It does not even make you sure (if you are smart) that what you worked on is a computer.

                      And with intelligence and consciousness? For the latter there is no physical mechanism in the sense that physics simply does not apply. For the former, it increasingly looks like physics alone cannot do it either.

                      If not ultimately physics then what?

                      Unknown at this time. That you d

                    • This is not "creating life from scratch".

                      Yes, obviously, but it's a big start.

                      Anyway... I think if so far we've been able to explain a thing in physical terms then there's no reason to bring anything else into it simply because we don't understand everything. You seem to think the opposite, because in continuing to do that one is falling back on what one already understands: so something unknown is needed and we don't know what it is. So I think we agree to disagree. Cheerio...

                    • by gweihir ( 88907 )

                      You mistake my stance: I have been following AI research for 30 years, and they consistently and repeatedly hit a concrete wall. That made me think. As it turns out, we cannot explain Intelligence, Consciousness or even life itself in physical terms, unless gross simplifications are applied. To me, these simplifications seem to be far too gross and also seem to hide immense complexity. Now, with regard to physics, there does not seem to be a way to hide this complexity in the outer shape apparent, not enoug

      • by gweihir ( 88907 )

        Nobody knows how humans do what they can do. And the neural nets on these machines are massively and fundamentally different from what is found in the brain. Some basic understanding is required, though.

    • by Anonymous Coward

      Maybe FB realizes they will run out of humans to assimilate to maintain growth rate, so they are having to create new ones through AI.

    • by LWATCDR ( 28044 )

      Ah, so you follow the classic definition of AI:
      AI is whatever we cannot make a computer do well yet.

  • Does it bug anyone else that none of the articles about Big Sur currently out there mention the system specs of this server? I want to know what kind of GPUs they are using, and I bet I'm not the only one.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Tesla M40s
      http://www.nextplatform.com/2015/12/10/facebook-to-open-up-custom-machine-learning-iron/

  • by gonz ( 13914 ) on Friday December 11, 2015 @03:24AM (#51099123)

    A friend of mine works at a company where the lawyers reviewed Facebook's "open source" licensing terms (surreptitiously buried in a text file entitled "Additional Grant of Patent Rights") and concluded that it isn't safe. They issued a company-wide order that all projects must immediately remove any Facebook open source with these license terms. The terms basically allow Facebook to unilaterally terminate the open source license if you take "any action" against their patent claims. The exact wording is:

    "The license granted hereunder will terminate, automatically and without notice, if you (or any of your subsidiaries, corporate affiliates or agents) initiate directly or indirectly, or take a direct financial interest in, any Patent Assertion: (i) against Facebook or any of its subsidiaries or corporate affiliates, (ii) against any party if such Patent Assertion arises in whole or in part from any software, technology, product or service of Facebook or any of its subsidiaries or corporate affiliates, or (iii) against any party relating to the Software."

    ...


    A "Patent Assertion" is any lawsuit or other action alleging direct, indirect, or contributory infringement or inducement to infringe any patent, including a cross-claim or counterclaim.

    In this thread, a Google employee says that their lawyers came to the same conclusion:

    https://news.ycombinator.com/i... [ycombinator.com]

    If so, why would Facebook do this? Why isn't it more widely discussed?

    • A friend of mine works at a company where the lawyers reviewed Facebook's "open source" licensing terms ... and concluded that it isn't safe.

      if you think that's bad, just wait 'til you read the terms under which Facebook's AI releases the next Facebook AI. ;)

    • by JesseMcDonald ( 536341 ) on Friday December 11, 2015 @12:19PM (#51100911) Homepage

      Those terms seem perfectly reasonable to me. If you want to cling to your patents, don't use their code. We could do with a few less patents in the world.

      Suing someone over patent infringement while taking advantage of code they've freely contributed for everyone's use, on the other hand—now that would be evil.

      • Those terms seem perfectly reasonable to me. If you want to cling to your patents, don't use their code. We could do with a few less patents in the world.

        Agreed, but if I understand correctly, this is not the actual legal effect of those terms. When they say "any [...] other action alleging [...] indirect [...] infringement to any patent [...] against any party relating to the software", the trigger is ridiculously broad. Even saying something bad about Facebook because *they* sued *you* could qualify.

        B

        • OK, I can see how the "or other action" language can make that overly broad. You wouldn't want something said in casual conversation to result in termination of all your licenses. On the other hand, I can see why they wouldn't want to limit it to only fully-formed lawsuits, since that would open the door to patent holders trying to extort payments through settlements without actually risking license termination by carrying through with a lawsuit.

          Still, that seems like a mere case of poor wording rather than

  • Is it worth it to spend all these resources and hardware on making farmville more challenging?
