
Cray, Intel To Partner On Hybrid Supercomputer

An anonymous reader writes "Intel convinced Cray to collaborate on what many believe will be the next generation of supercomputers — CPUs complemented by floating-point acceleration units. NVIDIA successfully placed its Tesla cards in an upcoming Bull supercomputer, and today we learn that Cray will be using Intel's x86 Larrabee accelerators in a supercomputer that is expected to be unveiled by 2011. It's a new chapter in the Intel-NVIDIA battle and a glimpse at the future of supercomputers operating in the petaflop range. The deal has also got to be a blow to AMD, which has been Cray's main chip supplier."


Comments:
  • Re:AMD worried? (Score:3, Informative)

    by pipatron ( 966506 ) <pipatron@gmail.com> on Tuesday April 29, 2008 @06:37AM (#23236096) Homepage

    every Nintendo product since the gamecube has used ATI hardware

    I'll list them for you:

    1. Gamecube*
    2. Wii

    *The company that made the Gamecube hardware was later bought by ATI, so ATI didn't have much to do with that.

  • Most likely? (Score:5, Informative)

    by Whiney Mac Fanboy ( 963289 ) * <whineymacfanboy@gmail.com> on Tuesday April 29, 2008 @06:56AM (#23236158) Homepage Journal
    it will most likely just be used for more nuclear weapons simulations [emph mine]

    The majority (but not all) of the supercomputers on the top 500 supercomputer list [top500.org] are devoted not to nuclear weapons research but to meteorological/oceanographic and other scientific uses.
  • by chuckymonkey ( 1059244 ) <charles@d@burton.gmail@com> on Tuesday April 29, 2008 @06:58AM (#23236172) Journal
    It's not always about just how much data they can process. It's more about being able to do it quickly and in parallel. Say, for instance, you want to simulate a black hole: there is so much raw math that needs to be handled all at the same time that there's no way you can do this with current internet technology. Another example is a weather simulation, where you have to take so many things into account at once. That's why the compute nodes in supercomputers are connected by extremely high-speed interconnects: they want all the CPUs in these things to have the latency of a local bus. Now, if all they need to do is crunch raw data with no emphasis on parallel processing, then yes, things like Folding@home are grand for that purpose.
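
    To make the latency point concrete, here's a minimal sketch of the kind of tightly coupled timestep loop described above, written against MPI. The grid size, step count, and 1-D diffusion update are illustrative assumptions, not anything from a real weather or black-hole code; the thing to notice is that every single step blocks on neighbour messages before it can proceed.

      /* Minimal sketch of a tightly coupled simulation loop (1-D diffusion).
       * Assumes an MPI library and mpicc; sizes are illustrative only. */
      #include <mpi.h>
      #include <stdio.h>

      #define N     1024      /* cells owned by each rank           */
      #define STEPS 10000     /* timesteps; each one needs messages */

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          double u[N + 2];                 /* local cells plus two ghost cells */
          for (int i = 0; i < N + 2; i++)
              u[i] = (rank == 0 && i == 1) ? 1.0 : 0.0;

          int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
          int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

          for (int step = 0; step < STEPS; step++) {
              /* Exchange ghost cells with both neighbours.  The update below
               * cannot start until these messages arrive, so every timestep
               * pays the interconnect latency -- thousands of times a second
               * on a real machine, which is why "latency of a local bus"
               * matters and a home internet link does not cut it. */
              MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left,  0,
                           &u[N + 1], 1, MPI_DOUBLE, right, 0,
                           MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
                           &u[0], 1, MPI_DOUBLE, left,  1,
                           MPI_COMM_WORLD, MPI_STATUS_IGNORE);

              /* Local update: a standard explicit diffusion step. */
              double next[N + 2];
              for (int i = 1; i <= N; i++)
                  next[i] = u[i] + 0.25 * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
              for (int i = 1; i <= N; i++)
                  u[i] = next[i];
          }

          if (rank == 0)
              printf("done after %d coupled timesteps\n", STEPS);
          MPI_Finalize();
          return 0;
      }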
  • by chuckymonkey ( 1059244 ) <charles@d@burton.gmail@com> on Tuesday April 29, 2008 @07:35AM (#23236330) Journal
    Your smugness is showing. I work with one on a daily basis for the government, in the missile defense arena. Hell, in two months I'm going to be building one of those new IBM machines; we just signed the purchase with IBM. Yes, I said that I'm going to be building one, because IBM is not allowed in our building. I don't even have to rent nodes of it; we have it all to ourselves.

    It's not the applications or the hardware that is the problem, it's the latency. I don't care how fast your internet connection is, you cannot match the interconnect fabric of these machines. Parceling out little bits of data to a vast number of computers using the spare cycles of home machines is great, and I'm not trying to downplay that. You just cannot run them in parallel and do real-time simulations on them. That is why we have these huge monolithic computers.

    Let me give you two examples. Protein folding: not parallel, and also not time sensitive. More of a "when you finish, I'll give you a new problem to chew on." Tracking millions of orbits of shit in space: very parallel, and it requires correctly timed, low-latency transactions between CPU nodes. It also needs results as events occur; there's no room for "when you're done I'll give you a new one." Working out the problems with star travel, as the original parent said, is a grand idea for a distributed system; running the simulations in real time to actually have an idea of whether or not those solutions will work is where computers such as the ones I work with come in.
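
    Here is a rough sketch of that orbit-tracking pattern, again against MPI; the object count and the placeholder update rule are assumptions for illustration. The point is the blocking collective inside the loop: every rank needs everyone else's freshly updated positions before it can take the next step, which is exactly the exchange a hand-out-independent-work-units job like protein folding never needs.

      /* Sketch of a tightly coupled "track everything in orbit" step loop.
       * Assumes MPI; counts and the update rule are illustrative only. */
      #include <mpi.h>
      #include <stdlib.h>

      #define LOCAL_OBJECTS 64     /* objects tracked by each rank */

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          double  mine[LOCAL_OBJECTS * 3];                 /* x,y,z per object   */
          double *all = malloc(sizeof(double) * LOCAL_OBJECTS * 3 * size);
          for (int i = 0; i < LOCAL_OBJECTS * 3; i++)
              mine[i] = rank + 0.001 * i;                  /* fake initial state */

          for (int step = 0; step < 1000; step++) {
              /* Global position exchange: a blocking collective every step.
               * The step rate is bounded by the slowest link, which is why a
               * microsecond fabric works and a 50 ms internet hop does not. */
              MPI_Allgather(mine, LOCAL_OBJECTS * 3, MPI_DOUBLE,
                            all,  LOCAL_OBJECTS * 3, MPI_DOUBLE, MPI_COMM_WORLD);

              /* Placeholder update: nudge each local object using another
               * rank's data (a real code would compute forces/conjunctions). */
              for (int i = 0; i < LOCAL_OBJECTS * 3; i++)
                  mine[i] += 1e-6 * all[(i + LOCAL_OBJECTS * 3) %
                                        (LOCAL_OBJECTS * 3 * size)];
          }

          free(all);
          MPI_Finalize();
          return 0;
      }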
  • by chuckymonkey ( 1059244 ) <charles@d@burton.gmail@com> on Tuesday April 29, 2008 @08:17AM (#23236542) Journal
    An MMORPG is real time as far as the human mind is concerned. If you look at all of them, they have a latency counter too, and they sometimes suffer badly from that problem. Hell, the new supercomputer systems are not even truly real time; they have problems with latency as well. That's usually the limiting factor on the number of compute nodes: the farther you space nodes out, or the more hops they take over the fabric, the more latency you accumulate. For instance, one of our old SGI machines is limited to 2048 processors (SGI claims 512) because the NUMA link interface is too spread out beyond that. Of course that's running over copper with electrical signalling; newer systems use fiber, which is very fast over the line, but the bottleneck is in the connections. So yet again we run into latency being the limiting factor. They even have specialized routers in them that are designed to be transparent to the overall machine, but beyond a certain number of hops you still have latency. I wish I could post diagrams and say a little more, but I'm already treading into "trade secrets" ground.

    The difference between a real-time simulation and an MMORPG is a stickier problem, though. Think of it like this: the MMORPG connects to a main server, that server has the world running on it, and it keeps track of all the other players in the game. The client computer merely syncs with that server; it doesn't do anything other than present the world to the end user, take the data from the server, and display it on the screen. There really isn't a strong emphasis on real time compared to a weather simulation. When you're running these huge simulations you have multiple independent processes and threads all going through the machine at the same time, all to achieve one single end result. I'm sorry if I'm not doing too well at making sense; I have a little trouble explaining it because I'm more of a visual person. The best I can really say is that the comparative complexity of the two problems is vast. Someone out there who's a little better with words, feel free to step in and help me out here.

    Now, when we all have fiber running to every computer connected to the internet, maybe then distributed systems become more of a reality. Another problem that I see with distributed systems, though, is the variation in hardware. When programs get written for supercomputing platforms there is an expectation of sameness in the hardware: all the processors, all the memory, all the fabric links, all the buses, all the ASICs, everything is the same from one point to another. Intelligently identifying hardware differences and exploiting them for real-time simulation would be a real trick if someone could pull it off. Hmmm, my Firefox spell check seems to think I'm British.
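
    To put rough numbers on the hop-latency argument, here's a back-of-envelope calculation. The microsecond and millisecond figures are order-of-magnitude assumptions, not measurements of any particular machine, but they show why the same code that hums along on a machine-room fabric crawls over the public internet.

      /* Back-of-envelope: coupled timesteps per second when every step must
       * wait on one round of messages.  All figures are rough assumptions. */
      #include <stdio.h>

      int main(void)
      {
          double compute_per_step = 50e-6;   /* 50 us of local math per step       */
          double fabric_hop       = 1e-6;    /* ~1 us per switch hop on the fabric */
          double internet_rtt     = 50e-3;   /* ~50 ms round trip across the net   */
          int    hops             = 3;       /* hops across the machine's fabric   */

          double fabric_step   = compute_per_step + hops * fabric_hop;
          double internet_step = compute_per_step + internet_rtt;

          printf("fabric:   %8.0f steps/s\n", 1.0 / fabric_step);   /* ~19,000 */
          printf("internet: %8.0f steps/s\n", 1.0 / internet_step); /* ~20     */
          /* Same CPUs, same code: roughly three orders of magnitude apart,
           * purely from communication latency. */
          return 0;
      }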
  • by encoderer ( 1060616 ) on Tuesday April 29, 2008 @09:24AM (#23237076)
    The MMORPG argument is a bit like comparing a VNC session to a cluster.

    In both cases you're harnessing the power of at least 2 CPU cores over the internet to accomplish a computing task.

    But the capacity of the two is separated by multiple orders of magnitude.

    And, really, a 10-second delay is hardly even an annoyance for a human as we swap between our IM, email, iTunes, and the game we're playing. But that same 10 seconds in a parallel computing environment, where X nodes are idled waiting for a result from Y? (A rough tally of that waste follows below.)

    Also, you seem a bit like a douche bag. No offense. But emoticons? Seriously?
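
    For what it's worth, here is the rough tally mentioned above; the node count is a made-up example, but it shows how a delay a human shrugs off multiplies across a machine where everything waits on everything else.

      /* What a 10-second stall costs when the whole machine waits on one
       * straggler.  The node count is a hypothetical figure for illustration. */
      #include <stdio.h>

      int main(void)
      {
          int    nodes = 1000;   /* hypothetical node count      */
          double stall = 10.0;   /* seconds every node sits idle */
          printf("%.0f node-seconds wasted per stall\n", nodes * stall);
          return 0;
      }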
  • Department of Energy (Score:3, Informative)

    by flaming-opus ( 8186 ) on Tuesday April 29, 2008 @10:49AM (#23238026)
    DOE, which does the US nuclear weapons simulations, is probably the largest single buyer of capability-class supercomputers, but still a small fraction of the total. Even within DOE, only a large minority of systems are dedicated to nuke simulation. Sandia, Livermore, and Los Alamos each have 2-3 large nuclear simulation machines (or at least will admit to that many publicly). Large systems at Pacific Northwest, Oak Ridge, Lawrence Berkeley, and Argonne are used for open science research.

    High-end supercomputers are used, in significant ways, for climate research, short-term weather forecasts, seismic modeling, cosmology, fusion research, protein folding, predicting the size of petroleum deposits, automotive and aircraft design, and a host of other engineering codes. Even with all of that listed, the piece of the pie chart labelled "other" is 35% of the total.

    On the other hand, nuclear weapons simulation is a difficult enough problem, and requires a powerful enough machine, that it subsidizes the design of super-scalable machines that are then sold to other customers for other tasks.

Say "twenty-three-skiddoo" to logout.

Working...