Intel Supercomputing Upgrades

Intel Announces Xeon E5 and Knights Corner HPC Chip

MojoKid writes "At the supercomputing conference SC2011 yesterday, Intel announced its new Xeon E5 processors and demoed its new Knights Corner Many Integrated Core (MIC) solution. The new Xeons won't be broadly available until the first half of 2012, but Intel has been shipping the new chips to a small number of cloud and HPC customers since September. The new E5 family is based on the same core as the Core i7-3960X Intel launched Monday. The E5, while important to Intel's overall server lineup, isn't as interesting as the public debut of Knights Corner. Recall that Intel's canceled GPU (codenamed Larrabee) found new life as the prototype device for future HPC accelerators and complementary products. According to Intel, Knights Corner packs 50 x86 processor cores into a single die built on 22nm technology. The chip is capable of delivering up to 1TFlop of sustained performance in double-precision floating-point code and operates at 1-1.2GHz. NVIDIA's current high-end M2090 Tesla GPU, in contrast, is capable of just 665 DP GFlops."
  • I mostly understand the figures this post states, but it sounds like engineering dialogue from 'Star Trek: Voyager'. All it really means to me is that last year's chips just got cheaper, now that they've been out-classed.
    • by Anonymous Coward

      Summary: Faster chips out. You can't get them. Also a 50 core chip was released.

      • More importantly, an x86 chip. Not a GPU. Which means anyone who knows even the fundamentals of programming can use one with minimal additional training. No screwing around with the inability of GPUs to do recursion or deep nesting, no trying to deal with your data as if it were a texture. Just code and go.
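
        For a rough illustration of the "just code and go" point, here's a minimal OpenMP sketch (a hypothetical example, not Intel's actual MIC toolchain) of the kind of ordinary C loop a compiler can spread across many x86 cores with a single pragma:

          #include <stdio.h>

          #define N 1000000

          int main(void) {
              static double a[N], b[N];  /* static: too big for the stack */
              double sum = 0.0;

              for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

              /* One pragma splits the reduction across every available core. */
              #pragma omp parallel for reduction(+:sum)
              for (int i = 0; i < N; i++)
                  sum += a[i] * b[i];

              printf("dot product = %g\n", sum);  /* build with: cc -fopenmp */
              return 0;
          }

        Without OpenMP support the pragma is simply ignored and the same code runs serially, which is exactly the portability argument being made.
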
        • You mean the fundamentals of parallel processor programming. It's not exactly a widely held skill yet.

        • by makomk ( 752139 )

          Not exactly. You haven't needed to treat your data as though it was a texture on GPUs for a couple of generations now, and getting decent performance out of Knights Corner means writing incredibly-wide SIMD code very similar to what GPUs use - except that GPUs have some decent tools to make the porting process easier, and I'm not sure if Knights Corner does. (For example, if you have a loop over a bunch of elements that does the same operations on each but does a different number of passes for different e

      • Just to clarify, the 50-core beast hasn't been released as yet.

        They gave a demo of what is most likely a prototype chip.

    • by Surt ( 22457 )

      If you could make your question clearer, you'd probably get a more effective answer.

      • If you could make your question clearer, you'd probably get a more effective answer.

        An interjection followed by two statements does not a question make. ;-)

  • When they said nobody needed multicore processors, I heard echoes of "640K should be enough for anyone" and "There is no reason for any individual to have a computer in his home." Now they're trying to see how many they can jam on one die. 50 is a pretty odd number, though. You usually see things in powers of 2 (2, 4, 8, 16). Perhaps they needed space on the die for Mickey or an etched portrait of Jobs.

    • by Anonymous Coward

      When they said nobody needed multicore processors

      [citation needed]

    • More than likely, there are 64 cores but only 50 are activated because they can't get a decent yield of perfect chips. That also means you might be able to get samples of 25-core chips that didn't even make the 50-core cutoff. (One core might also be dedicated to bookkeeping purposes.)
      • What natural phenomenon would require that the number of course on a chip be a power of 2? I can't think of any.
        • by gstoddart ( 321705 ) on Wednesday November 16, 2011 @03:38PM (#38076872) Homepage

          What natural phenomenon would require that the number of course on a chip be a power of 2? I can't think of any.

          Because computers count in binary, which is powers of two. And, I'll assume you meant cores.

          Historically such things have been powers of two to make the addressing simpler without having extra magic or control lines left over. So 1, 2, 4, 8, 16, 32 and 64 all make sense in terms of being expressible in a fixed number of bits ... 50, to some of us, seems like a fairly arbitrary choice. Since 50 needs an unusual combination of wiring, it might as well be 37 or 51; it's not a number that 'naturally' lends itself to computers. The device is likely wired in such a way that it could count to 64 ... or they're doing things in a slightly odd way.

          Anyway, that's why some of us find it to be a little odd. And it's also why the hard-drive makers deciding "1 GIG" is "1,000,000,000 bytes" is irksome ... with all of those extra powers of two, it should be "1,073,741,824 bytes", which means you lose about 74MB per GIG ... so my 2TB drive isn't.
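
          (Checking the arithmetic: 2^30 = 1,073,741,824 bytes versus the marketed 10^9, a difference of 73,741,824 bytes, or about 74MB per marketed GB. A "2TB" drive at 2 x 10^12 bytes comes to roughly 1.82 TiB.)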

          • Indeed, cores. And I still don't see any reason it has to be a power of two; AMD has 3-core processors. I can have 3G of memory. I can have 9G of memory. Binary numbers are not pervasive by mandate in all areas of computing.

            Though I do agree base-10 usage for hard drives is ridiculous.

            • Indeed, cores. And I still don't see any reason, and AMD has 3 core processors. I can have 3G of memory. I can have 9G of memory.

              Well, in fairness, on the memory side, you do that with some combination of memory modules which are addressable by powers of two. (eg. 2GB + 1GB, or 4GB + 4GB + 1GB), each of which is discrete from the others. I don't believe you can buy a 3GB or 9GB memory module.

              Binary numbers are not pervasive by mandate in all areas of computing.

              Nope, absolutely not. Not saying that ... ju

              • Well, in fairness, on the memory side, you do that with some combination of memory modules which are addressable by powers of two. (eg. 2GB + 1GB, or 4GB + 4GB + 1GB), each of which is discrete from the others. I don't believe you can buy a 3GB or 9GB memory module.

                Certain models of Xeon processor have three memory controllers, which, when configuring for maximum memory bandwidth, leads to memory being measured in terms of three times powers of two (3 x 2^30).

              • Well, in fairness, on the memory side, you do that with some combination of memory modules which are addressable by powers of two. (eg. 2GB + 1GB, or 4GB + 4GB + 1GB), each of which is discrete from the others. I don't believe you can buy a 3GB or 9GB memory module.

                However, certain Intel processors do use interleaved triple-channel memory, so there must be a division by 3 going on in the memory addressing system somewhere.

          • by c ( 8461 )

            The device is likely wired in such a way that it could count to 64 ... or they're doing things in a slightly odd way.

            Or it's 64 cores with an average usable yield of 50 "good" ones.

          • by CODiNE ( 27417 )

            With cores it's a little different than RAM, in that you're physically limited by how many you can squeeze into a given die size.

            So the addressing may allow for 16, 32, 64, whatever cores, but physically you may not quite be able to fit 16 in that space, so they max out at, say, 14, and then get a few dead ones here and there, so they end up selling 8s, 10s and 12s, with the rare perfect 14s being used for some special customers.

            Now with the 50 cores, you might have 53 or so actually WORKING cores

          • by Shinobi ( 19308 )

            The SI prefixes are specifically base-10 units, and have been since the 1800s, with the metric system, later adopted into the SI system. The fact that computer scientists and programmers misused the units and disregarded an established standard of communication and data encapsulation, and the fact that people STILL do it, is what's vexing, not the fact that the storage manufacturers have taken to using the proper approach.

        • Addressing.

          Let's say you've set aside 6 bits in every data structure that deals with core administration. You can grow to 2^6, or 64 cores without re-architecting your data structures.

          As long as we are using binary in computers, making everything 2^N will make the most efficient use of space.

          Of course, space isn't always the limiting factor, so sometimes for cost or speed reasons, we see objects that number 2^N-M.
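
          As a toy sketch of the addressing point (a hypothetical layout, not any real chip's register format), a 6-bit field in C covers exactly 2^6 = 64 core IDs and not one more:

            #include <stdio.h>

            /* Hypothetical control word: 6 bits of core ID address up to 64 cores. */
            struct core_msg {
                unsigned core_id : 6;   /* 0..63 */
                unsigned payload : 26;
            };

            int main(void) {
                struct core_msg m = { .core_id = 49 };  /* core #50 of a possible 64 */
                printf("core %u of %u addressable\n", (unsigned)m.core_id, 1u << 6);
                return 0;
            }
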
          • Nah. That extra bit you lose isn't going to cause anyone any heartburn. And nobody's using 6 bits for anything. It's a specious argument.
            • It's not storing 6 bits in a data structure. It's running traces (if that's even what they're called in IC design) throughout the die connecting these things together. At that level, adding two extra traces to carry those two bits is an expense you might want to forgo. However, once you've got six wires/bits out there, the only reasons I can think of not to use 64 whatevers are the previously mentioned heat management and die yield issues.

      • by mlts ( 1038732 ) * on Wednesday November 16, 2011 @03:08PM (#38076492)

        I wonder if Intel is taking a page from IBM's playbook.

        Upper-end POWER7 CPUs have the ability to have half their cores turned off. The cores that remain on can then use the disabled neighbors' caches and run at a higher clock speed. This switch actually speeds up some tasks that can't be evenly broken into balanced threads.

        I can see Intel doing this where some cores are disabled due to manufacturing defects (which happen on all dies), and having the operable cores use nearby caches which would otherwise go to waste.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Odds are they have it lined up such that they're in a 5x10 grid, or a 5x5 grid front/back.

      Just because it's a computer doesn't mean it's bound by powers of two. Boards are rectangular. Chips aren't necessarily laid out in a binary distribution.

    • Your average consumer doesn't need 50 cores. For HPC, which this was designed for, many cores are essential. As for the number of cores, I would guess die size was a factor. There also might be redundancy. It's a RAID 50 CPU. ;)
      • by David Greene ( 463 ) on Wednesday November 16, 2011 @02:53PM (#38076288)

        Your average consumer doesn't need 50 cores.

        Sure they do. What do you think a GPU is? History has shown over and over that we can never have enough computing power. Now that we're at the physical limits of clock speeds, parallelism is going mainstream.

        • by Desler ( 1608317 )

          Now that we're at the physical limits of clock speeds,

          Since when? You can easily overclock most modern chips to 4GHz, and with enough cooling to 5 or 6+ GHz. The Sandy Bridge i7 chips, for example, have been overclocked past 6GHz. So exactly what supposed "physical limit" do you mean?

          • Re: (Score:3, Interesting)

            by zpiro ( 525660 )
            At 6GHz, you are very close to the speed of light in copper, so unless you can break the speed of light... it's a "physics limit".
            Below that point you have the problem of energy efficiency, i.e. what's the point of spending more energy on cooling than on actually powering the thing?
            Intel's 3D transistors are HUGE because of this; they can push higher clock speeds more easily.
            • I agree generally, like AMD's Bulldozer hitting 8GHz on a single core before running into the limits of physics (even with extreme cooling). I'm assuming nobody will ever be able to get more than 1 or 2 cores active (out of 8) while getting to 8GHz on that architecture.

              But these days, the chips run in multiple clock domains. I believe the Intel chips are separated into a base clock, L3 clock, core clocks, RAM clocks, and bus clocks. The architectures are moving ever toward asynchronous operation in order to pa

            • Overclockers are up to 8.4 GHz now, with AMD chips.

              • by Bengie ( 1121981 )

                Amazing what a liquid nitrogen jacket with a liquid helium center can do when overclocking.

          • Since when?

            Since we reached the limit of our ability to handle the power and heat dissipation requirements economically. Engineering is about tradeoffs. Until we get better materials, multicore is more cost-effective than pushing the clock beyond the reasonable cost envelope.
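
            (The usual first-order reasoning: dynamic power scales roughly as P ~ C x V^2 x f, and higher clocks need higher voltage, so doubling f with a ~30% voltage bump costs about 2 x 1.3^2 ~ 3.4x the power, while two cores at the original clock cost about 2x.)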

      • by Bengie ( 1121981 )

        "Your average consumer doesn't need 50 cores [yet]"

        Games are getting pretty good at using my 1536-core GPU, which is just a co-processor.

      • by Sloppy ( 14984 )

        Your average consumer doesn't need the 80386. There's hardly any software compiled to take advantage of its features anyway. I can see maybe someone using them for servers, but that's a pretty small niche.

        • Your average consumer doesn't need the 80386. There's hardly any software compiled to take advantage of its features anyway. I can see maybe someone using them for servers, but that's a pretty small niche.

          I remember almost exactly that quote in PC Magazine back in the day. I think at the time it was the 80486, but same thing. They probably said the same thing about the '386 too.

          Of course, I have a quad-core machine sitting on my desk at home with 8GB of RAM, and running at a clock speed two orders of magn

          • by Bengie ( 1121981 )

            "I still remember the first time I saw a PC with a 1GB hard-drive ... a bunch of us stood around it thinking "WTF will we ever do with that much disk space?"."

            Now we're like "Damn 2GB texture pack."

        • by yuhong ( 1378501 )

          Ah, the disaster that is the move from real to protected mode.
          Summary: The first fiasco was that in 1982 MS ignored the announcement of the 286 and proceeded to develop a real-mode multitasking version of DOS; only around 1985, when IBM refused to license it, was it realized that this was a mistake. And while the resulting OS/2 1.x sucked and lost its chance to Windows 3.x (which was incompatible, though both were designed for 16-bit protected mode), the second fiasco was when MS broke the JDA with

    • by mikael ( 484 )

      I'm guessing there would have to be glue logic to get all these processors to share the memory space as well as read/write access. From the promotional pictures of other multi-core chip dies, each core is usually surrounded by a band of interface logic as well as a hefty block of cache memory. That seems to be the biggest change in the evolution of CPUs. It seems easier to just create larger caches or more cores than anything low-level.

      Maybe they accept one or more non-functional cores in exchange fo

    • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday November 16, 2011 @02:38PM (#38076088) Journal
      Intel's period of dismissive attitude toward advanced features (multiple cores, 64-bit support on x86, something that sucked less than FSB) was never really serious. Back when they still thought that they had a chance of making IA64 the 'serious' platform and gradually letting x86 (and AMD) sink into the bargain bin, they did some tactical rubbishing of what "normal users" needed in order to justify restricting those features to the high-end SKUs; but they worked on them.

      Once it became clear that that particular plan wasn't a happening thing, and that AMD was delivering serious server parts at knockdown prices, and Nvidia was doing interesting things with GPUs, and ARM licensees were pumping out increasingly zippy low-end chips, they stopped fucking around. These days they'll still charge as hard as they can for the features provided; but their hopes of sandbagging x86s in order to sell IA64s are dead.
    • by Surt ( 22457 )

      Who said nobody needed multicore processors? That seems like a pretty unlikely claim, particularly from Intel, who were very much into selling multi-CPU systems to the high end long before multicore became the norm. I had a dual-socket Pentium II consumer-grade system ages ago. That we were headed to multicore was obvious even then.

      • Reminds me that I have a dual Deschutes 350 in the attic somewhere. Served me faithfully from 1998 to 2004. If it weren't for the 128MB of memory and the price of electricity, I might still have it do..... uhm, something. Trouble is, it's still hard to do multithreading, and our programming languages are still inherently single-threaded, maybe with some thread primitives glued on.

  • How can that be? (Score:4, Insightful)

    by gr8_phk ( 621180 ) on Wednesday November 16, 2011 @03:16PM (#38076580)
    A 50-core chip at 1GHz is going to need to perform 20 double-precision floating-point ops per cycle per core to achieve 1TFlop performance. OK, so 1.2GHz cuts that down to 16 flops/clock. Since when can anything Intel Architecture achieve that many flops per cycle? Two 4-element dot products is only 14 flops. I suppose if they did two vector-scalar multiply-adds that would get 16 flops per cycle. So I just answered my own question. But can they really keep the FP unit running continuously at that rate? On all 50 cores?
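
    (Spelling the arithmetic out: 10^12 flops / (50 cores x 1.2 x 10^9 cycles/s) ~ 16.7 flops per core per cycle, so every core has to retire roughly 16-17 double-precision operations each and every clock to hit the headline number.)
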
    • Maybe, but probably not. The key to high-performance computing with parallel workloads like this is not just raw processing power, but memory bandwidth. The Nvidia Tesla M2090 mentioned in TFS has a peak memory bandwidth of 177GB/s, with memory and controllers specially designed for raw throughput. Conventional CPUs with the fastest DDR3 available can reach only a small fraction of that. A teraflop of sustained DP performance is going to be completely useless without the memory bandwidth to back it up.
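
      (Rough numbers to back that up: 177GB/s is about 22 x 10^9 doubles per second, so sustaining 10^12 flops/s on memory-resident data needs roughly 45 flops of work per double fetched. A plain dot product does about 1 flop per double loaded, so it would be bandwidth-bound at around 2% of that peak; only cache-friendly, high-arithmetic-intensity kernels get anywhere near the 1TFlop figure.)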

      • Depends on how much cache is on the chip, and how big the problem being solved is. GPUs have a lot of FP units, but they have such a tiny amount of cache that they basically have to transfer ~everything they operate on over the memory bus. On a CPU, your dataset can be several MB and still fit on-chip, but of course you have fewer FP units. The algorithm I designed for my Ph.D. operates on the same few megabytes of data many times, and it ended up being about equally fast on both architectures, so I'm h

        • On a CPU, your dataset can be several MB and still fit on-chip...

          Clearly you've never dealt with any HPC programming before. In the vast majority of massively parallel computation problems, the kind which are solved by these kinds of chips, the data sets are also necessarily large; hundreds of megabytes or gigabytes of data. The algorithms that allow massively parallel computation will compute a single step of an algorithm on a large number of elements.

          Consider the scenario the GP was referring to, massively parallel dot product, for matrix operations or other algorithms used

          • Well, I suppose either my application (face recognition for hundreds of users) is under the threshold for your definition of HPC, or it's a notable exception. Our algorithm consists primarily of repeated BLAS level 1 and 2 operations on chunks of data that fit in CPU cache, but not GPU cache. Essentially, it's low arithmetic intensity operations performed repeatedly (hundreds of times) on gallery image sets that take up a couple megs at a time (and there are a couple hundred of those that can be computed

            • by Shinobi ( 19308 )

              Your application is one in which GPUs normally excel, so I have to say that yours must be very badly written.

              By treating it as streams of texture sets, rather than just working chunk by chunk, you improve performance. That way, you can just set up different streaming processors in a chain to perform the various steps. When programmed that way, my dual quad-core Xeon is outperformed by an old GTS 250.

              When you program a GPU, even with CUDA or OpenCL, a DSP programming mindset is more appropriate than a ge

              • I would not be so quick to accuse people of writing poor code when you know very little about the problem they're working on. And remember, ~most code runs faster on CPUs. If you read some of Vasily Volkov's papers (he's the guy who wrote the early versions of CUBLAS), it is very clear that you might as well not bother with the GPU if you're mostly doing BLAS level 1-2 stuff, since the arithmetic intensity isn't high enough. For our application we had some specific operations we could combine and tricks

                • by Shinobi ( 19308 )

                  Oh, it was an academic project. No wonder then.

                  Basically, everything you've said so far is that you treated the GPU like a slot-in general-purpose processor, which it's not. Take a look at what's been done in the INDUSTRY, not academia, to see how effective GPUs are at image recognition and processing.

                  Games push boundaries that academia has yet to reach.

                  In special effects, multi-object motion path detection, tracking and compensation is done on GPU's nowadays, because a cheap GPU can do it more efficientl

    • A 50-core chip at 1GHz is going to need to perform 20 double-precision floating-point ops per cycle per core to achieve 1TFlop performance. OK, so 1.2GHz cuts that down to 16 flops/clock.

      By your math it means that each core has a 1024-bit wide vector unit. And that means 64-bit FP, not 80-bit. Not impossible, but perhaps unlikely to ever run at theoretical max across all cores in anything but the most carefully crafted case.

    • You seem to be forgetting about SIMD and vectorization. If you pack multiple data elements into one wide instruction, a core can do much more per cycle than a typical 32- or 64-bit scalar core would suggest. That is often how early benchmarks are tuned to give the highest possible throughput figures.

      • by gr8_phk ( 621180 )

        You seem to be forgetting about SIMD and vectorization.

        Dot product or vector multiply-add IS a SIMD instruction. I chose it because it does the most FLOPs of any instruction I'm aware of. If it can retire 2 of those per cycle, then the FPU will have the claimed performance. Then I questioned the memory performance. And after recalling my own efforts to optimize the cache behavior of matrix operations, I'm convinced they can do it with not too much cache per core.
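
        To make the flop counting concrete, here is the scalar equivalent of one 4-wide dot-product step (illustrative C, not actual MIC intrinsics):

          /* One 4-element dot product: 4 multiplies + 3 adds = 7 flops.
             Two per cycle gives the 14 flops mentioned above; two 4-wide
             fused multiply-adds (8 flops each) would give 16. */
          double dot4(const double a[4], const double b[4]) {
              return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
          }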

    • Intel claims each core can perform 16 FLOPs per cycle, at least in single precision. Each core has a 512-bit wide vector unit. I'm not sure where their DP claims are coming from, though.
      • by six ( 1673 )

        The vector unit must be FMA-capable just like Larrabee's, hence the doubling of FLOPs/cycle.

    • There are lots of useful computations that are more flops-intensive (relative to memory footprint) than dot products: matmul, FFT, almost anything Monte Carlo, etc.

      • by gr8_phk ( 621180 )

        There are lots of useful computations that are more flops-intensive (relative to memory footprint) than dot products: matmul, FFT, almost anything Monte Carlo, etc.

        matmul IS dot products. FFT is dot products too. Most anything DSP is dot products. I chose dot product because it is the instruction that does the most floating point operations.

    • OK, so 1.2GHz cuts that down to 16 flops/clock. Since when can anything Intel Architecture achieve that many flops per cycle?

      Since LRBni and its 512-bit vectors. A double-precision FMA gets you 16 ops in a clock.

      But can they really keep the FP unit running continuously at that rate? On all 50 cores?

      Easily. HPC codes regularly keep thousands of cores busy.
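
      (The arithmetic, for anyone checking: a 512-bit register holds 512/64 = 8 doubles, and an FMA counts as 2 flops per element, so 8 x 2 = 16 DP flops per instruction; one per clock on 50 cores at 1.2GHz is 50 x 1.2 x 10^9 x 16 ~ 0.96 TFlops, right in the claimed neighborhood.)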

    • by Bengie ( 1121981 )

      It has 512-bit AVX-like registers. You can do a lot of FP per clock with SIMD like that. But like you said (vector-scalar multiply-adds), they probably have multi-operand instructions to allow fused math.

  • Today I can go to the store and buy the Nvidia board that they mention. When can I buy a system with a Knights Corner chip? What about a PCI-E board? The answer is never. It will only be sold to Intel's partners in labs and research environments for special projects. It means very little to most of us.
    • Intel claims it will be released as a commercial product in the near future.
      • Wonder if they'll produce a consumer version.

        I use an ATI card as my main video card, and wouldn't mind sticking a physics card in the other PCI-E slot. The thing is, if I put in an Nvidia card it won't work as a physics card, since Nvidia has written the drivers in such a way that if you have a non-Nvidia video card as your primary video card, Nvidia will not allow you to use their cards just for physics.

        So my hope is that if Intel puts out a consumer version then either I'll be able to buy an Intel board ju

    • "It means very little to most of us."

      Just like your comment.

  • We may yet see high-end Intel discrete graphics cards in the future.

    Knights Corner sounds like it is basically a high-end GPU without the actual graphics output. This lets Intel position it as a professional product for HPC and supercomputing, and squeeze out as much profit as possible from the early models. Then, once the R&D cost has been amortized and the fab technology is advanced further, they can add an HDMI output, dedicated RAM, and glue logic, and write appropriate drivers to make it a full-fled

    • It originally was a video card (Larrabee project), but things didn't look good for consumer performance and they repositioned it.
    • Knights Corner sounds like it is basically a high-end GPU without the actual graphics output.

      To me it sounds like much more. The "cores" on a GPU are not equivalent to CPU cores [langly.org], whereas on Knights Corner you get 50 actual x86 cores. It is sure to be much more general-purpose. From the article: "Unlike other co-processors, the MIC is fully accessible and programmable as though it were a fully functional HPC node." It sounds like a cluster on a chip. I am curious about the memory model.

      • I'm curious about the memory model too. I'm pretty certain that bit about "cluster on a chip" is just marketing hyperbole, and it's actually still a shared-memory system running one instance of the Linux kernel. They're not going to make you run 50 Linux kernel instances and communicate between them using network sockets.

      • by makomk ( 752139 )

        They're essentially using x86 cores for a vaguely GPU-style wide SIMD unit, from what I can tell. AMD's next generation of GPUs appear to be heading towards a similar destination from the opposite direction - they're adding a non-vector core to each 16-wide block of vector cores for control code that can't easily be vectorized.

  • by Nite_Hawk ( 1304 ) on Wednesday November 16, 2011 @05:22PM (#38078446) Homepage

    I'm at SC11 right now and just attended NIC's MIC presentation. The scaling looks fantastic according to the various codes they compiled to run on it, but what was notably absent was performance relative to traditional x86 chips. The final presenter even said that now that the technology has been demonstrated to work (with minimal porting effort required), the next step will be to optimize and improve performance. The takeaway is that, relative to Intel's other chips, MIC performance wasn't impressive enough to include in the presentation. That's fine in my book because it's an ambitious project, but it sounds like there is still some work to do.

  • Just shows you the progress in CPU power: ASCI Red was the first supercomputer to go over 1TFlop, and it was massive; now we have this with just one chip!

  • A Beowulf cluster of these! But seriously, even one wouldn't be used efficiently enough to be worth it yet, even on top-of-the-line OSes. We need a whole new paradigm of algorithms, and maybe even a new language, to do this right.
  • Everyone seems to be defining High Performance Computing as CISC/RISC chips with multi-core processors utilizing Instruction Level Parallelism and Thread-Level Parallelism with extremely fast multi-level Caching. HPC High Availability computing is a synergy of CISC/RISC chips combined with Application, Integrated Instruction, Facility, Graphic and Cryptography assisted processor technologies supplemented with Integrated Coupling facilities. These processors must share access to large amounts of fast Dynamic
