
Cray, Intel To Partner On Hybrid Supercomputer

An anonymous reader writes "Intel convinced Cray to collaborate on what many believe will be the next generation of supercomputers — CPUs complemented by floating-point acceleration units. NVIDIA successfully placed its Tesla cards in an upcoming Bull supercomputer, and today we learn that Cray will be using Intel's x86 Larrabee accelerators in a supercomputer that is expected to be unveiled by 2011. It's a new chapter in the Intel-NVIDIA battle and a glimpse at the future of supercomputers operating in the petaflop range. The deal has also got to be a blow to AMD, which has been Cray's main chip supplier."
This discussion has been archived. No new comments can be posted.

  • I'm sure the volume of chips they sell in Crays is a drop in the ocean compared to other channels. It's not like supercomputers are big sellers...
    • After their acquisition of VIA and then later ATI, they have established themselves in a larger market than simply performance graphics chips for end users. Heck, every Nintendo product since the gamecube has used ATI hardware.

      The last line of that summary is clearly flamebait.
      • Re: (Score:3, Informative)

        by pipatron ( 966506 )

        every Nintendo product since the gamecube has used ATI hardware

        I'll list them for you:

        1. Gamecube*
        2. Wii

        *The company that made the Gamecube hardware was later bought by ATI, so ATI didn't have much to do with that.

        • Are you telling me that all these chip company mergers are there to get on Nintendo's good side and start making chips for Nintendo instead of their competitors?

          I mean, Company Y makes GC chip, but gets bought by ATI, ATI gets branded on GC.
          Nintendo likes the chip, gets ATI to make it for Wii. ATI gets branded on Wii.
          AMD buys ATI-- ATI stays as a brand name.

          SHIT!! This is off topic... ah well.. who cares.
      • VIA Technologies is an independent company and I don't recall any significant talk of a merger with AMD. Since AMD acquired ATI they have little to gain from buying VIA anyway.
    • Re: (Score:3, Insightful)

      by dreamchaser ( 49529 )
      It's more about bragging rights and PR/marketing than about volume of chips sold. I doubt AMD is terribly worried as they have much bigger concerns right now.
    • Re: (Score:3, Insightful)

      by lakeland ( 218447 )
      AMD might be worried. Cray and similar deals are all about bragging rights, not about sales.

      Like that Fujitsu supercomputer... it makes you think 'hey, maybe there is more to Fujitsu than photocopiers...'

      I don't know what influences a normal customer's perception of a company like AMD. I don't even know who AMD's main customers are - white-box manufacturers? Enthusiasts? So while industry analysts put a lot of weight on these high-profile shifts... well, it might sway public opinion.
      • >it makes you think 'hey, maybe there is more to Fujitsu than photocopiers...'
        Interesting to see how different territories have different takes on this. I've never seen or heard of Fujitsu making photocopiers. When I think of them I think of laptops/desktops & hard drives.
        • by monsted ( 6709 )
          And when I think of them I think of pain and suffering, mostly for people who were unfortunate enough to have bought their laptops/desktops.
            I have an old P133 Fujitsu and it is a tank. Dropped it 3 ft onto concrete and all that broke was the status LCD. I also have a pair of Stylistic pen tablets (486 DX4-100) and they rock for what they are.
            -nB
    • Re:AMD worried? (Score:4, Insightful)

      by Kjella ( 173770 ) on Tuesday April 29, 2008 @06:10AM (#23236228) Homepage
      Please make sure to make a "Supercomputers are an irrelevant little niche" comment in a thread about Linux in supercomputers. Let me know how the charred remains of your karma are doing afterwards. It's all about bragging rights, in particular the "world's most powerful supercomputer" title. Most of these are trying to run some O(ugly) problem, and improving the model or algorithms probably means a lot more than just adding 10x more power.
    • Isn't a petaflop Cray the minimum hardware required to run Duke Nukem Forever?
  • When I see this stuff I wonder if everybody who wanted to have this kind of computing power for themselves could agree on a simple project like F@H or Electric Sheep and connect everybody on the planet to compute as a single unit for a certain portion of the day. The connectivity and power would be awesome and I don't think it is -that- weird an idea. I would rather schedule a super task that could perhaps consider the methods to perform a hyperspace transfer from planet to planet. I would contribute my 8 t
    • Re: (Score:3, Informative)

      It's not always about just how much data they can process. It's more about being able to do it quickly and in parallel. Say, for instance, you want to simulate a black hole. You have so much raw math that needs to be handled all at the same time, there's no way you can do this with current internet technology. Another example is a weather simulation: you have to take so many things into account all at once. That's why the compute nodes in supercomputers are connected by extremely high-speed interconnects.
      • I am not quite sure that you understand systems computing well (no offense intended). It is the program itself, and the structure of it as it works with the hardware, that allows results with the equipment. I have worked on the design of many -real- supercomputers, so I guess I have a little practical experience in that area, and I say that this concept would kick that hybrid computer's ass by a country light year :) I have access to a 'Blue Gene' at school in genetics and so I am not saying this jus
        • by chuckymonkey ( 1059244 ) <(moc.liamg) (ta) (notrub.d.selrahc)> on Tuesday April 29, 2008 @06:35AM (#23236330) Journal
          Your smug is showing. I work with one on a daily basis for the government in the missile defense arena. Hell, in two months I'm going to be building one of those new IBM machines; we just signed the purchase with IBM. Yes, I said that I'm going to be building one: IBM is not allowed in our building. I don't even have to rent nodes of it; we have it all to ourselves.

          It's not the applications or the hardware that is the problem, it's the latency. I don't care how fast your internet connection is, you cannot match the interconnect fabric of these machines. Parsing out little bits of data to a vast number of computers using the spare cycles of home machines is great, and I'm not trying to downplay that. You just cannot run them in parallel and do real-time simulations on them. That is why we have these huge monolithic computers.

          Let me give you two examples. Protein folding: not parallel, and also not time sensitive. More of a "when you finish, I'll give you a new problem to chew on." Tracking millions of orbits from shit in space: very parallel, and it requires correctly timed, low-latency transactions between CPU nodes. It also needs results as events occur; there's no room for "when you're done I'll give you a new one." Working out the problems with star travel, as the original parent said, is a grand idea for a distributed system; running the simulations in real time to actually see whether those solutions will work is where computers such as the ones I work with come in.
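          To put rough numbers on the latency point, here is a minimal sketch in Python. Every figure is an assumption for illustration, not a measurement: ~2 us per exchange on a fast cluster fabric, ~50 ms round trips over the internet, and a nominal 1 ms of compute with four neighbour exchanges per timestep.

            # Illustrative (assumed) numbers: why latency, not bandwidth, gates
            # tightly coupled simulation. None of these figures are measurements.
            INTERCONNECT_LATENCY = 2e-6   # ~2 us per exchange on a fast cluster fabric
            INTERNET_LATENCY     = 50e-3  # ~50 ms round trip between home machines
            COMPUTE_PER_STEP     = 1e-3   # assume 1 ms of real work per timestep
            EXCHANGES_PER_STEP   = 4      # assume 4 neighbour exchanges per timestep

            def step_time(latency):
                # Wall-clock time of one synchronous timestep on one node.
                return COMPUTE_PER_STEP + EXCHANGES_PER_STEP * latency

            print(f"fabric:   {step_time(INTERCONNECT_LATENCY) * 1e3:7.3f} ms/step")  # 1.008
            print(f"internet: {step_time(INTERNET_LATENCY) * 1e3:7.3f} ms/step")      # 201.000

          Under those assumptions the fabric adds well under 1% overhead per timestep, while internet round trips make each step roughly 200x slower than the compute itself.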
            Cool. I love intelligent conversation. I worked on the 'neither confirm nor deny' myself many years ago at DARPA; however, that alone does not qualify me to be the final word on what is possible. I probably should not make any specific quotes from people or situations, but I can say that I was not impressed with the dimensional reasoning of the systems themselves. I do wonder about new methods, and I can say from 'my experience' that this is a valid concept. Then you say that MMORPGs cannot exist,
            • by chuckymonkey ( 1059244 ) <(moc.liamg) (ta) (notrub.d.selrahc)> on Tuesday April 29, 2008 @07:17AM (#23236542) Journal
              MMORPG is real time as far as the human mind is concerned. If you look at all of them, they have a latency counter too; they sometimes suffer badly from that problem. Hell, even the new supercomputer systems are not truly real time; they have problems with latency as well. That's usually the limiting factor on the number of computing nodes: the farther you space nodes out, or the more hops they take over the fabric, the more latency you add. For instance, one of our old SGI machines is limited to 2048 processors (SGI claims 512) because the NUMA link interface is too spread out beyond that. Of course, that's running over copper with electrical signalling; newer systems use fiber, which is very fast over the line, but the bottleneck is in the connections. So yet again we run into the problem of latency being the limiting factor. They even have specialized routers in them that are designed to be transparent to the overall machine, but beyond a certain number of hops you still have latency. I wish I could post diagrams and say a little more, but I'm already treading into "trade secrets" ground.

              The difference between real-time simulation and an MMORPG, though, is a stickier problem. Think of it like this: the MMORPG client connects to a main server; that server has the world running on it and keeps track of all the other players in the game. The client computer merely syncs with that server; it doesn't do anything other than present the world to the end user, taking the data from the server and displaying it on the screen. There really isn't a strong emphasis on real time compared to a weather simulation. When you're running these huge simulations, you have multiple independent processes and threads all going through the machine at the same time, all to achieve one single end result. I'm sorry if I'm not doing too well at making sense; I have a little trouble explaining it because I'm more of a visual person. The best I can really say is that the comparative complexity of the problems between the two is vast. Someone out there that's a little better with words, feel free to step in and help me out here.

              Now, when we all have fiber running to every computer connected to the internet, maybe then distributed systems become more of a reality. Another problem that I see with distributed systems, though, is the variation in hardware. When programs get written for supercomputing platforms, there is an expectation of sameness in the hardware: all the processors, all the memory, all the fabric links, all the buses, all the ASICs, everything is the same from one point to another. Intelligently identifying hardware differences and exploiting them for real-time simulation would be a real trick if someone could pull it off. Hmmm, my firefox spell check seems to think I'm British.
                • I know it's bad form to reply to my own post, but as to the MMORPG problem I had another epiphany. The main difference is that in those games you aren't trying to send all the data from a maxed-out processor over the internet; it's just sending a lot of little bits of data. You're just sending your position in the world and your actions within it. If you were going to do it on the supercomputer level you would be sending not only that, but all the weather that you generated around you, the windspeed of your
                • Re: (Score:3, Funny)

                  by Zebra_X ( 13249 )
                  "I know it's bad form to reply to my own post, but as to the MMORPG problem I had another epiphany."

                  Indeed. I am not sure we really need you to spend time writing any of this down.

                  Nothing to see here. Move along.
                • Great stuff, and I know how you feel about trying to say something and then realizing you are not allowed to say it because of non-disclosure or other issues. I do some MMORPG software and apparently you have too. I grasp that part of what you are saying; however, I see any computing problem as NANDs, because that is how we always designed: a sea of gates. If I had unlimited money to throw at a solution I could make a CAM 'content addressable memory' array that responds in a single cycle time to any set of
              • What you're talking about is high interdependency in complex systems vs. lower-order or non-dependency. The difference you're thinking of is extremely interdependent and time-critical applications vs. those of lower order/complexity/dependency.

                The interdependency doesn't even have to be lots of calculations plus outputting lots of data that n other processes/nodes need for the next iteration, say like having the results of critical elements that need to be routed to x,y,z (say in a specified order
            • Re: (Score:3, Informative)

              by encoderer ( 1060616 )
              The MMORPG argument is a bit like comparing a VNC session to a cluster.

              In both cases you're harnessing the power of at least 2 CPU cores over the internet to accomplish a computing task.

              But the capacity of the two is separated by multiple orders of magnitude.

              And, really, a 10 second delay is hardly even an annoyance for a human as we swap between our IM, Email, iTunes and the game we're playing. But that same 10 seconds in a parallel computing environment where X nodes are idled waiting for a result from Y?
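              A rough sense of scale, with all numbers assumed purely for illustration (a modest 1,024-node machine sustaining ~10 GFLOPS per node):

                # All assumed numbers: how much capacity one 10 s stall burns
                # when every node sits at a barrier waiting on one result.
                nodes, per_node_gflops, stall_seconds = 1024, 10.0, 10.0
                wasted_gflop = nodes * per_node_gflops * stall_seconds
                print(f"{wasted_gflop:,.0f} GFLOP forgone while everyone waits")  # 102,400

              That one human-imperceptible pause throws away about 100 teraflop-seconds of capacity, and a synchronous job pays that price on every stall.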
              • There still remains the fact that your brain can comprehend what I am saying, and it only runs in the millisecond range. I'm guessing, just as a simple example: if each computer effectively simulated a single neuron, and they were connected by an IP address, they would in fact simulate a brain of x neurons, where x is how many computers participated. And I do not think I have any usefulness in feminine hygiene.
            • You're absolutely right that latency matters. However, for problems that don't parallelise well, single-processor computation rates (FLOPS) are still important too. Many important problems require a lot of calculation and a lot of communication between subdomains. This means that they can be parallelised, but with diminishing returns for increasing numbers of CPUs. Climate models are a good example of this. Running on fewer, faster CPUs may be better than simply throwing more slower CPUs at the problem. T
          • Will you be using it to promote war, or will you be using it to promote peace?
            • I've done the war thing with the Army. I much prefer the defensive nature of the systems that I use now. You really cannot use them to attack other countries, you can however use them to defend your country.
    • by tgd ( 2822 )
      Yeah, that's all we need to break the laws of physics, a billion PCs all working together!

      Computers can't consider anything. They can't contemplate, they can't theorize.

      They pretty much do math.

      Of course as I read your post, I realize you're probably joking. Oh well, my statement stands.
      • I wasn't joking. I work in AI, I have designed supercomputers, I worked with some of the early designs at CRAY and IBM, and I do work with RISC. This is usually the site where I find the most intelligent and funny people in the world. I always like to be proved wrong in my assumptions, since it allows me to plug a leak in my mental process; however, I see no real reason that this could not work. As far as intelligence and the simulation of such, connectivity is the greatest key
        • by tgd ( 2822 )
          I don't know if you have worked on the things you claim and just are confused now, or if you worked for companies that did those things and are just overstating your involvement, or what... but I do find your reply funny in a way. If you think your understanding of modern supercomputing architectures and cognitive science is up to the task, I'm sure you can find someone to back the prototyping of such a system.

          However, this sort of reminds me of the guy who inspired this video: http://www.youtube.com/watch? [youtube.com]
          • LOL, that was great. Yes, I have worked on those things. I didn't design Crays, but I did design supercomputers. It is my real name, and the people I have worked with will probably eventually see this post. This is the first subject on Slashdot that I have had an active interest in the development of. I have seen so much vaporware and announceware that I really appreciate the youtube. If I had any mod points left, I would mod you up for funny. I hate that guy too, there are sooo many people looking for mon
  • Always makes me wonder why they need all this power; after all, anybody can build a very impressive home cluster these days that would have been classed as a supercomputer a few years ago. I guess computing requirements rise to meet available systems, thus fueling demand.

    I support AMD right now, and if they got bigger than Intel then I would support Intel.

    My belief is that any firm needs adequate competition to keep it innovative, competitive and customer focused. When one of them has a monopoly then we sho
    • About the only thing I can see this being used for is Pixar's ever-increasing demand for more computer power (with the least possible power consumption). I guess that depends, though. Could supercomputers be used that way? I'm sure Pixar would have considered all viable alternatives. I wonder what they would get if they combined a supercomputer and a mainframe (a waste of space?). Sadly, the more informed will have to answer that :(
      • by mikael ( 484 )
        The problem with high-end animation is that you need to load in many different textures and geometry models before being able to render the final image and write out a single frame. Most of the supercomputer work seems to keep everything in CPU node memory at the same time, and just run one iteration instantly (a 2048^3 3D grid of CFD cells for simulating a supernova).

        Previous research in parallel processing tried allocating processing nodes to different locations in the scene or different geometric models, o
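        For a sense of why such a grid has to live in aggregate node memory, a quick sketch; the five-values-per-cell figure is an assumption for illustration (say density, pressure, and three velocity components):

          # Rough memory footprint of a 2048^3 CFD grid, assuming
          # 5 double-precision values per cell (illustrative only).
          cells = 2048 ** 3
          bytes_per_cell = 5 * 8
          print(f"{cells:,} cells -> {cells * bytes_per_cell / 2**30:,.0f} GiB")
          # 8,589,934,592 cells -> 320 GiB

        About 320 GiB for the raw state alone, far more than any single node of the era holds, hence spreading it across node memory.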
    • I run full-wave electromagnetic simulations to investigate fields generated inside the human body. My runtime is reasonable if I pick some parameters, but running an automated optimizer could easily take weeks using a 30 node Opteron cluster. If you give me more cycles, I can think of stuff to keep them busy. But if you want to see a really power-hungry project, talk to the protein folders - the guys that model chemical interactions starting at quantum mechanics, and try to find out how the shapes of prote
      • "And are you not," said Fook leaning anxiously forward, "a greater analyst than the Googleplex Star Thinker in the Seventh Galaxy of Light and Ingenuity which can calculate the trajectory of every single dust particle throughout a five-week Dangrabad Beta sand blizzard?"

        "A five-week sand blizzard?" said Deep Thought haughtily. "You ask this of me who have contemplated the very vectors of the atoms in the Big Bang itself? Molest me not with this pocket calculator stuff."

    • by jimmypw ( 895344 )
      "I support AMD right now, and if they got bigger then Intel then I would support Intel."

      Although i dont disagree where your coming from, why would i buy something when i can get something better for the same price from another company.
    • by dave420 ( 699308 )
      You support the little guy solely because he's the little guy? That's pretty silly, surely. Size doesn't mean they're doing the right thing. What if AMD started to throw babies off mountains tomorrow - would you still support them? Your post seems to suggest you will.
    • by geekoid ( 135745 )
      "
      I support AMD right now, and if they got bigger then Intel then I would support Intel."

      that's just stupid.
      Why not support the better chip? that's the market, not supporting inferior products because the company is smaller. A lot of RnD came out of Intel before AMD arrived, and will after AMD leaves. In this specific case, I believe the competition held up RnD and advancement. Intel was moving towards dual chips and cores years ago. Most of that focus and money was diverted to makes faster clocks.

      While comp
    • My cell phone has more memory and is faster than the original Cray supercomputer.
      The fastest computer is 1/2 petaflop; a supercomputer, then, is anything above 50 teraflops.
  • Sorry, but I have to correct that: petaflops range. Floating-Point Operations Per Second. It is a "unit" without singular or plural forms. Picky me.
    • One meter, two meters...
      One petaflop, two petaflops
        Two petaflops up to an exaflop would be the petaflops range.
      • by rastan ( 43536 ) *
        Nope, sorry. "flops" (or "flop/s") is the unit, meaning Floating-Point Operations Per Second. See http://en.wikipedia.org/wiki/Flops [wikipedia.org]

        Ergo "petaflops range".
      • One meter, two meters...
        One petaflop, two petaflops
        One mph, two mph
        One flops, two flops (not two flopss)
        One petaflops, two petaflops

        The single trailing 's' cannot be dropped since that is the unit of time over which the work is performed.

        I'm not learning much about a computer that is capable of performing a quadrillion floating point operations. My laptop can do that in 90 minutes. Doing that in a second? Now that's something!
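        The same point as arithmetic, taking the 90-minute laptop figure above at face value (the poster's claim, and a generous one for 2008):

          # "A quadrillion floating-point operations" is an amount of work;
          # flops is a rate. The 90-minute figure is the claim made above.
          ops = 1e15                 # one quadrillion operations
          laptop_seconds = 90 * 60   # 90 minutes
          print(f"laptop: {ops / laptop_seconds / 1e9:,.1f} GFLOPS sustained")  # ~185.2
          print("petaflops machine: the same 1e15 operations every second")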
  • At least no animals will be harmed in the process.
  • by noidentity ( 188756 ) on Tuesday April 29, 2008 @06:07AM (#23236214)

    Intel convinced Cray to collaborate on what many believe will be the next generation of supercomputers -- CPUs complemented by floating-point acceleration units.

    Let me guess, it's going to be called the 8087 [wikipedia.org].

  • by Jesus_666 ( 702802 ) on Tuesday April 29, 2008 @06:10AM (#23236230)
    A few years from now Intel will unveil their shocking new technology - they will build the floating point accelerator right into the CPU! For massive performance gains! And then a few years later they will move it out of the CPU for better performance. And so on, and so forth, etc. etc. etc.
  • Since no one else seems to have mentioned it yet: blah blah blah, it's the birth of Skynet (this time with an improved graphical interface).
  • The data at Top500 shows a linear increase (on a semi-log plot) for the entire time from 1993 to today. Every seven years, the performance increases by a factor of 100, but Moore's Law predicts an increase of 2^(7/1.5) = 25, meaning that the supercomputer market is besting Moore's Law by a factor of 4.
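    A quick check of that arithmetic in Python, assuming the usual ~1.5-year doubling period for Moore's Law:

      # Top500 performance grows ~100x per 7 years; Moore's Law doubles
      # transistor counts every ~1.5 years (the common assumption).
      moore_factor = 2 ** (7 / 1.5)       # transistor growth over 7 years
      top500_factor = 100.0               # observed Top500 growth over 7 years
      print(f"Moore over 7 years: {moore_factor:.1f}x")                 # ~25.4x
      print(f"Top500 / Moore:     {top500_factor / moore_factor:.1f}x") # ~3.9x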
    • Moore's Law is NOT about performance. It's simply a statement about the number of transistors we can cram onto a chip.

      That said, you make a very interesting observation, since performance in the desktop PC market has scaled pretty well with transistor density (and therefore Moore's Law). Given what you're saying, is the ratio of performance in supercomputers to regular PCs increasing?
  • A Hybrid? All of this has happened before, and it will all happen again.
  • Surprised no one has asked if this thing can play Crysis in all its bloomed-particle splendor...
  • Isn't that a step backwards for computing??1 I don't think running a gas/electricity powered system is a good idea outside of generators for power outages..
  • I thought Itanium was the ingredient for High Performance Computing.
