Intel Supercomputing

Intel To Ship Xeon Phi For "Exascale" Computing This Year

MojoKid writes "At the International Supercomputing Conference today, Intel announced that Knights Corner, the company's first commercial Many Integrated Core product, will ship commercially in 2012. The descendant of the processor formerly known as Larrabee also gets a new brand name — Xeon Phi. The idea behind Intel's new push is that the highly efficient Xeon E5 architecture (eight-core, 32nm Sandy Bridge) fuels the basic x86 cluster, while the Many Integrated Core CPUs that grew out of the failed Larrabee GPU offer unparalleled performance scaling and break new ground. The challenges Intel is trying to surmount are considerable. We've successfully pushed from teraflops to petaflops, but exaflops (or exascale computing) currently demands more processors and power than it's feasible to provide in the next 5-7 years. Intel's MIC is targeted at hammering away at that barrier and creating new opportunities for supercomputing deployments."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    It got me first post.

  • I'm not sure what the "news" is here - it's Larrabee with a die shrink. Doesn't Intel do this with their tick-tock strategy on ALL their architectures?
    • Re:Larrabee (redux) (Score:4, Interesting)

      by raftpeople ( 844215 ) on Monday June 18, 2012 @07:00PM (#40365141)
      Yes, but this is the first time they've applied the "tick-tock" strategy without actually shipping the product.
      • by Auroch ( 1403671 )

        Yes, but this is the first time they've applied the "tick-tock" strategy without actually shipping the product.

        I don't know. I can't cite a single example, but I doubt this is the case. It's more likely that this is the first time they've applied the "tick-tock" strategy without shipping the product after making a big deal about the un-shipped product.

        • Am I missing something?

          Why in the world does Intel want to resurrect a many-core graphics chip that performed far worse than what Nvidia or ATi are offering?

          • by 0123456 ( 636235 )

            AFAIR, Larrabee was a ton of cut-down x86 cores on a graphics chip. So it could be far more useful for massively parallel computing.

          • Re: (Score:2, Interesting)

            by Anonymous Coward

            Larrabee never really was about graphics; that's only one use. It works like a normal CPU, runs x86, and memory access is not GPU-like retarded. There are 512-bit SIMD instructions to help parallel processing, which is good if you need to render pixels, but Sandy/Ivy Bridge is closing the gap on it with 256-bit AVX. When Haswell comes around (AVX2) with possibly even more cores, I don't really see why anyone would buy it, unless it bumps the core count to like 32/64 and adds a lot of cache.
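
            For a concrete sense of the width gap being discussed here, below is a minimal SAXPY sketch using the 256-bit AVX intrinsics Sandy Bridge already has; a 512-bit vector unit like Knights Corner's covers 16 floats per instruction instead of 8, but its intrinsics only shipped with Intel's own toolchain at the time, so this sticks to plain AVX. The function and variable names are illustrative, not taken from any Intel example.

                /* Minimal 256-bit AVX SAXPY: y[i] += a * x[i], 8 floats per step.
                 * Build with e.g. gcc -O2 -mavx. A 512-bit unit would handle 16
                 * floats per instruction instead of 8. Illustrative sketch only. */
                #include <immintrin.h>
                #include <stddef.h>

                void saxpy_avx(float a, const float *x, float *y, size_t n)
                {
                    __m256 va = _mm256_set1_ps(a);           /* broadcast a into all 8 lanes */
                    size_t i = 0;
                    for (; i + 8 <= n; i += 8) {
                        __m256 vx = _mm256_loadu_ps(x + i);  /* load 8 floats from x */
                        __m256 vy = _mm256_loadu_ps(y + i);  /* load 8 floats from y */
                        vy = _mm256_add_ps(vy, _mm256_mul_ps(va, vx));
                        _mm256_storeu_ps(y + i, vy);         /* store 8 results */
                    }
                    for (; i < n; i++)                       /* scalar tail */
                        y[i] += a * x[i];
                }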

            • It works like a normal cpu, runs x86 and memory access is not gpu-like retarded.

              Define 'normal'? Memory access was GPU-like retarded. It reminded me of the Cell...

              I don't really see why anyone would buy it, unless it bumps the core count to like 32/64 and adds a lot of cache.

              Knights Corner was originally quoted as having ~48 cores, each core could do 4-way hyperthreading, and each thread could process 16 floats instead of your usual 8 (with AVX). That was all backed up by 2GB of GDDR5 memory. The hardware prototypes they've shown certainly seem to be in that ballpark, so I don't imagine the specs have got any worse over the last year or so. I'd buy one, for no other reason than it looks like it w
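
              Taking those quoted figures at face value, a rough peak estimate (assuming roughly a 1 GHz clock and fused multiply-add, both assumptions rather than announced numbers) works out to 48 cores × 16 single-precision lanes × 2 flops per lane per cycle × ~1 GHz ≈ 1.5 single-precision TFLOPS per card, which is the ballpark that makes the part interesting for HPC.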

              • by Ruie ( 30480 )

                Define 'normal'? Memory access was GPU-like retarded. It reminded me of the Cell...

                What do you mean? The instruction set manual gives no hints of this. It even has a scatter/gather instruction.

            • by makomk ( 752139 )

              Larrabee doesn't support any of the existing x86 SIMD instruction sets, so any code's going to need to be rewritten in order to get decent performance on it anyway.

          • by gman003 ( 1693318 ) on Monday June 18, 2012 @08:19PM (#40365647)

            It was, sort of.

            Modern GPUs are essentially thousands of very simple, low-speed cores. Think of a thousand 486s. They use driver software to make them do the graphics calculations, because that means they can be more flexible. There are no fixed-function pipelines anymore - it's all software, either in the drivers or in the customizable shaders.

            Intel's plan was to make a GPU that has a few dozen (32 or so) more complex cores that were x86-compatible. They added some specialized extra-wide SIMD stuff and some fast-blitting texture units, but it was still x86-compatible. And they had some very impressive drivers to make it function as a "graphics card" - they even demonstrated real-time raytracing in 2009, something nVidia only demonstrated their cards doing this year (and Intel did it in an actual game, not a tech demo).

            However, that flexibility made it a bit underwhelming at the things most games actually do, so it really couldn't compete in that marketplace, at least not at the prices they expected to need to be profitable. But that highly-flexible but also highly-parallel architecture seems perfectly suited to supercomputing.

            • Looking over the comments so far... I want one! Not because I play a lot of games, but because I do some video editing, and even though nVidia's VDPAU is good, it's not as good as having quite a few more cores chewing away at some HD footage. Where do I sign up?
            • by s.petry ( 762400 )

              It's thousands of Math processors, like the old DX systems had. Not like the x86 chips. Yes, there is a big difference.

    • by gman003 ( 1693318 ) on Monday June 18, 2012 @07:15PM (#40365235)

      It's more of "they're actually RELEASING it".

      Larrabee, essentially, was never released. It was demonstrated, and a few were even given/lent to researchers for testing, but it was never used in an actual product.

      That's why it's *still* known only by its codename. You don't need a real name until you're actually planning to put it on shelves.

      That's why it's "news". It's transitioning from a *fascinating* research project to a real, commercial product.

    • Re:Larrabee (redux) (Score:4, Informative)

      by Nemyst ( 1383049 ) on Monday June 18, 2012 @07:42PM (#40365393) Homepage

      There's a difference in terms of target. Larrabee was initially supposed to be Intel's first shot at being competitive in the GPU market. This reuses a lot of the tech, but it's more like having a bunch of Xeon processors in a PCI-E slot. It's general purpose, massively parallel computing power, which could make it a sweet spot for things like video transcoding or CGI (as GPU solutions tend to be fairly lossy).

      The interesting thing about this is that it could basically transform any desktop computer with a modern motherboard into a mini-supercomputer. With two PCI-E slots you can get over 100 CPU cores, which is rather nice as it's all off-the-shelf hardware (well, aside from the probably ridiculously pricey Xeon Phi boards themselves).

  • by CajunArson ( 465943 ) on Monday June 18, 2012 @07:19PM (#40365267) Journal

    For all of the anti-Intel FUD that is about to be unleashed because this thing doesn't play games: The MIC system is *LIGHT YEARS* ahead of any other device in this space when it comes to supporting Linux and open source. This thing is not just compatible with Linux... it actually runs a Linux kernel natively. Intel has already released an open source software stack, and while it will take some work there will eventually be full GCC support for the 512-bit SIMD instructions that give this thing its processing oomph.

    Driver? THERE IS NO DRIVER... OK, there is a rather simple, 100% open-source driver that's already been submitted to the Linux kernel for transferring data back and forth from the card, but it is about a trillion times simpler than the monstrosities that pass for GPU drivers. This is a *co-processor*, not some black-box video card where you pass off code written in a quasi-standard format to be compiled (!) by a video-card driver.

    This thing is already more open-source and Linux friendly than anything made by Nvidia or AMD (and no, dumping partially completed documentation 6 months after the card is released with no real support is *not* the same thing as *freaking running Linux on the board 100% open-source before the product even launches*).

    If people on this site were rational they'd be setting off fireworks to celebrate this architecture, but because it doesn't have the name "ARM" or "AMD" on it they idiotically reject it as "closed source" for some reason....

    • You're projecting. The thing is awesome. I can't wait to get one.

      And I wouldn't be so sure it won't be adapted to play games.

      • I've written a nifty inpainter that can do an excellent job of removing TV-channel logos from animation... at about one frame per day. With this thing, I might actually be able to run it!
    • ... Intel does not charge an arm and a leg for their new "Xeon Phi"

      If Intel can make the thing affordable (but I really doubt Intel will ever sell anything that is "affordable"), then yes, the Linux community, particularly those who do not have access to any supercomputer, can start running simulation programs that require massive crunching power.

      • But I really doubt Intel will ever sell anything that is "affordable".

        Do you live in the third world? Since when has a $150 computer been considered "unaffordable"?

    • by s.petry ( 762400 )

      What they don't talk about for supercomputing in this article is the interconnect. Because the x86 instruction sets are massive, it generally does not work very well. Piping that many large instructions scales poorly; hell, it's hard to get 8 chips on a board, let alone have an interconnect bundling 128 or so. This is the advantage that RISC has always had over the x86 architecture, and why every supercomputer to date worth speaking of has used RISC over Intel x86.

      I'm going to assume it's just a pipe dream

        What they don't talk about for supercomputing in this article is the interconnect. Because the x86 instruction sets are massive, it generally does not work very well. Piping that many large instructions scales poorly; hell, it's hard to get 8 chips on a board, let alone have an interconnect bundling 128 or so. This is the advantage that RISC has always had over the x86 architecture, and why every supercomputer to date worth speaking of has used RISC over Intel x86.

        I disagree. Half of the top 10 current supercomputers at top500.org are x86 (some with GPGPU), 4 are IBM POWER, and 1 is SPARC64. Most of them are using Infiniband. Honestly, #4 is x86. Just add more computers and it'll be #1. That's basically how the current #1 got there. It's just a matter of how much money institutions are willing to spend to get there.

        • by s.petry ( 762400 )

          Sorry, but you can't count GPU-based systems as x86. You are using a RISC-based chip as the workhorse in the cluster, with exceptional math co-processing of course. The x86 chips act as a bridge, which is what they do best, in my opinion.

      • by Kjella ( 173770 ) on Monday June 18, 2012 @10:59PM (#40366421) Homepage

        This is the advantage that RISC has always had over the x86 architecture, and why every supercomputer to date worth speaking of has used RISC over Intel x86.

        Well, if you feel 87% of the current top500, or half the top 10, aren't worth speaking of, then I guess you only see what you want to see. True, the top three are not x86, but x86 machines are the bulk of the world's supercomputers.

    • by Kjella ( 173770 ) on Monday June 18, 2012 @10:41PM (#40366335) Homepage

      With all due respect, this seems more similar to barebones shader programming than to the usual issues running OpenGL, which is why it's "about a trillion times simpler".

      Assembler ~= shader programming ~= Xeon Phi programming
      OpenGL ~= Java, C#, C++/Qt, C++/Gtk ~= nothing the stack you talk about does

      From what I gather, this chip essentially gives you SSE on steroids; it's an alternative to CUDA and OpenCL, but nothing else a graphics card does. And while currently the Linux capability is just to get the chip up and running, it doesn't actually use the new instructions unless you write them in assembler:

      The changes do not include support for Knights Corner vector instructions and related optimization improvements. GCC for Knights Corner is really only for building the kernel and related tools; it is not for building applications. Using GCC to build an application for Knights Corner will most often result in low performance code due to its current inability to vectorize for the new Knights Corner vector instructions. Future changes to give full usage of Knights Corner vector instructions would require work on the GCC vectorizer to utilize those instructions' masking capabilities. This is something that requires a broader discussion in the GCC community than simply changing the code generator.
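
      To put that caveat in concrete terms, the kind of loop below builds and runs with the Knights Corner GCC port described above, but it stays scalar: roughly one multiply-add per thread per cycle instead of 16 per 512-bit instruction. Closing that gap at the time meant Intel's compiler, hand-written intrinsics, or assembler. The function is an illustrative sketch, not code from the Knights Corner software stack.

          /* Illustrative only: without the vectorizer work described above, this
           * compiles and runs on the card but executes one multiply-add at a time
           * rather than 16 per 512-bit vector instruction. */
          #include <stddef.h>

          float dot(const float *x, const float *y, size_t n)
          {
              float sum = 0.0f;
              for (size_t i = 0; i < n; i++)
                  sum += x[i] * y[i];
              return sum;
          }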

      • Uh... so you just admitted that it's possible to write assembler for the new SIMD instructions (which are already 100% publicly documented). And if you can write assembler... what else could you do? Maybe... incorporate the new instructions into GCC? It worked for the C programming language! Sure it would take work, but when ARM came out with NEON instructions they were in the exact same boat, and I haven't seen anyone insulting ARM about that. That is MIC's huge advantage in addition to having a very co

    • Re: (Score:3, Insightful)

      by FithisUX ( 855293 )
      Open source is the real diamond here. It gives a new opportunity for *BSDs and Haiku and Solaris. I can't wait to burn some Knights Corners. Even scientific compilers like Pathscale and Open64 can benefit. Go Intel go.
  • Had to do a double-take... I thought for a brief moment that Raspberry Pi made a deal with Intel... for a credit card sized über-idkw.
    • You are lucky

      My first thought was that it must be a joke

      But then, April is some two moons ago

      Yes, the first time I saw it I thought I saw "Xeon pffffft"!

  • Windows? (Score:4, Interesting)

    by im_thatoneguy ( 819432 ) on Monday June 18, 2012 @09:06PM (#40365867)

    Can I run regular x86 Windows apps on it or do I need to write it specifically for this chip? I'm thinking rendering applications specifically.

    • by godrik ( 1287354 )

      I do not know about Windows compatibility. But I have programmed for that chip; it supports programming almost directly using OpenMP, Intel Cilk, or Intel TBB.

      I benchmarked a prototype version of the card on unstructured memory access kernels (graph algorithms): http://bmi.osu.edu/hpc/papers/Saule12-MTAAP.pdf [osu.edu]
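
      For readers wondering what programming "almost directly" with OpenMP looks like, the sketch below shows the general style involved: an ordinary parallel loop that a toolchain targeting the card would spread across its many cores and hardware threads. This is a generic illustration (build with, e.g., -fopenmp), not code from the linked paper, and Intel's card-specific offload pragmas are not shown.

          /* Generic OpenMP sketch of the programming style mentioned above.
           * Each iteration is independent, so the runtime can split the range
           * across all of the card's cores and hardware threads. */
          #include <math.h>
          #include <stddef.h>

          void rescale(double *v, size_t n, double factor)
          {
              #pragma omp parallel for
              for (size_t i = 0; i < n; i++)
                  v[i] = sqrt(v[i]) * factor;
          }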

  • It was nice knowing you.

    Any power consumption data available yet?

  • Exa means 10^18

    So will this thing have 10^18 cores or something?

    Or will it just cost 10^18 dollars?

    Even if they only sold one it would be enough to end the world's economic problems.
    Of course it would take another solar system to buy it.

    • by Ruie ( 30480 )
      It will make it possible to execute 10^18 instructions per second.
      • Specifically 10^18 floating point operations per second. Floating point math doesn't get much use in games or most things a home user would do, but in HPC work - overwhelmingly scientific and engineering simulations - floating point performance is everything.
    • by godrik ( 1287354 )

      The exascale here refers to 10^18 flop/s. One chip will of course not achieve that, but a bunch of them might.
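
      For a rough sense of scale: assuming a card sustains on the order of 1 teraflop/s (10^12 flop/s) of double precision, an optimistic figure for this generation, one exaflop/s would still take roughly 10^18 / 10^12 = 1,000,000 cards, before counting host CPUs, interconnect, and the power budget. That gap is what the summary means by exascale being out of reach for the next 5-7 years.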

  • you don't suppose... (Score:3, Interesting)

    by alices ice ( 699932 ) on Tuesday June 19, 2012 @01:13AM (#40366941)
    that this is the mystery meat going into those "later next year" promises of something wonderful for the Mac Pro users?
  • Should MongoDB people be worried?
  • Since my high school dance card.
    Sigh.
