
 




Microsoft Dropping Itanium Support For Clusters 265

upsidedown_duck writes "According to an article at TheStreet.com, Microsoft is opting not to support Itanium on its coming release of Windows Server 2003 Compute Cluster Edition. Instead, Microsoft will focus on AMD's offerings and Xeon."
This discussion has been archived. No new comments can be posted.


  • by Thaidog ( 235587 ) <[moc.hsuh.myn] [ta] [357todhsals]> on Friday November 12, 2004 @06:36AM (#10796551)
    The only place I see the Itaniums making it anywhere is SGI. They're using them for all their supercomputers running Linux. Let's hope they keep the MIPS line... just in case ;)
  • by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Friday November 12, 2004 @06:38AM (#10796558) Journal
    Does Microsoft's dropping of the Itanium from its supported-platform list herald the end of Itanium? No. In fact, Microsoft wasn't even the first to drop it; HP was the first to stop using it in its high-end servers. The whole thing boils down to the cost/benefit ratio, which is insanely high for Itanium-based machines.

    So Intel now gets a boost to its Xeon line of chips, which is leading the high-performance server market percentage-wise. With this, Intel can put more effort into ramping Xeon production and subsequently driving prices down there, and likewise continue producing the superfast Itaniums for servers running Linux or a proprietary supercomputer operating system.

    The demand for supercomputers is low. It will always be low. As technology progresses, the normal users like us get to reap the rewards of this high technology and eventually those supercomputers will be available to us on a single board. The supercomputers of that future will be supersupercomputers and the demand will still be small.

    So let the Itanium fit its niche in the super-highend market. Let the Xeons fill in the normal server market. And let Microsoft stay out of the supercomputer market where it simply doesn't fit.
  • Makes economic sense (Score:4, Interesting)

    by gilesjuk ( 604902 ) <giles.jones@ze[ ]o.uk ['n.c' in gap]> on Friday November 12, 2004 @06:41AM (#10796566)
    Itanium is too small a market for Microsoft to devote developer time to. They're better off getting Longhorn ready than supporting an already dead platform. Itanium will go the way of the Pentium Pro, another hyped-up CPU that never really delivered.

    Seems like the Wintel alliance isn't so strong these days. Microsoft opting for IBM's PPC processor for Xbox 2 is another example of how they're looking at what hardware is best for the job, instead of what their traditional partners can offer.
  • by luvirini ( 753157 ) on Friday November 12, 2004 @06:48AM (#10796583)
    You would be surprised at the number of people who are currently trying to run low-end "supercomputer"-like things on Windows machines, or groups of them.

    I do not currently see any special reason for anyone to do that at the high end, as those installations are so specialised anyway that you can get the right staff.

    But the fact is, many of the applications that low-end supercomputing could be used for are quite common in many environments. Couple that with the fact that a great many companies have very entrenched Microsoft-only IT cultures, and I think there will be quite a few "supercomputers" running Windows.

    Please note the use of "supercomputer" in quotes, as most of these systems are not really going to be supercomputers; they are more like "mini-supers".

  • by smu johnson ( 309071 ) on Friday November 12, 2004 @06:53AM (#10796602)
    The problem here is Intel spent a fortune developing a whole new architecture trying to get people away from x86. They can't be content to just let the market flood with Xeons. A lost Itanium sale doesn't automatically mean a Xeon sale. While more Xeon sales mean money for Intel, they really need to try to make back their investment or it would be like throwing money in the garbage.

    Despite all of this I agree with you... MS doesn't belong in the supercomputer market. But I doubt Intel spent billions developing the Itanium so it could be used in a few supercomputers worldwide. They tried for mass-market servers and failed. CPUs are a very low-margin business, and the failure of such an investment really just shaves their margins even thinner.
  • Re:Future (Score:5, Interesting)

    by turgid ( 580780 ) on Friday November 12, 2004 @07:07AM (#10796630) Journal
    The Itanium was built for a niche market.

    No it wasn't. Intel developed the Itanic as a "post-RISC" design to crush all the 64-bit RISC processors and to take over the workstation and server market. It was designed to be _the_ volume 64-bit processor, with spectacular performance and low prices due to economies of scale.

    Those of us with a passing interest in microprocessors knew it was a turkey.

    The only thing the Itanic has going for it is high SPEC FP scores. On everything else it is either poor or mediocre. It is hot, power-hungry and expensive, and has virtually no software support, no developer community, etc.

    If you look closely at the "benchmark" comparisons that HP and Intel put out for public consumption, you will see they usually only compare against very old models from competitors. Also notice the kinds of workloads they compare and the configuration of the machines.

    If the rumours are true, SGI recently gave NASA a free Itanic supercomputer, accounting for a whole 10% of this year's Itanic shipments. That sounds like a processor in trouble.

    Itanic was a solution looking for a problem. It was based on outdated ideas of processor design; it was late, over-engineered and basically a damp squib for all but the handful of people who can afford it for number-crunching. This is a far cry from the de facto 64-bit, mass-market, low-cost, world-dominating processor that Intel intended it to be.

  • Re:Future (Score:5, Interesting)

    by EyeSavant ( 725627 ) on Friday November 12, 2004 @07:23AM (#10796674)
    The only thing itanic has going for it is high SPEC FP scores. On everything else it is either poor or mediocre.

    I have to second that. My feeling is that when they sat down with a blank piece of paper to design this chip, they only invited hardware people. All the tough stuff has been moved into software.

    I think the lack of out-of-order execution really hurts them. If you don't do an amazing job with the compiler, then the processor moves like a slug. In the supercomputer centre I used to use, they "upgraded" their 512-processor MIPS machine by adding a 400-processor (or so) Itanic box. For a lot of things, without extra optimization of the source code (i.e. just compiling the thing, assuming you could get it to compile, but that is another story), the Itaniums were SLOWER than the three-year-old MIPS processors. It takes a lot of tweaking to get anything like peak performance.

    There are 3 FPU pipelines that you have to fill at compile time to get maximum performance out of the thing. Identifying THREE parallel instructions at compile time, ALL THE TIME, is damn hard, and normally the compilers fail. Hence: slow.

    It is just too hard to get anything like the theoretical peak performance out of the thing for stuff other than benchmarks.
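    The bundle-packing problem described above can be sketched with a toy greedy list scheduler (a hypothetical Python illustration, not Intel's actual compiler): it packs up to three mutually independent ops per "bundle", and a dependent chain immediately drops throughput to one op per bundle.

```python
# Toy model of EPIC-style compile-time scheduling: pack ready ops
# (all dependencies already retired) into bundles of at most 3 slots.
def schedule(deps):
    """deps maps each op to the set of ops it depends on."""
    retired, bundles = set(), []
    while len(retired) < len(deps):
        ready = [op for op in deps if op not in retired and deps[op] <= retired]
        bundle = ready[:3]              # at most 3 issue slots per bundle
        bundles.append(bundle)
        retired.update(bundle)
    return bundles

# Six independent ops fill two full 3-wide bundles: peak throughput.
parallel = {f"op{i}": set() for i in range(6)}
print(len(schedule(parallel)))          # 2

# A six-op dependent chain needs six 1-wide bundles: a third of peak.
chain = {f"op{i}": ({f"op{i-1}"} if i else set()) for i in range(6)}
print(len(schedule(chain)))             # 6
```

    The same structural fact — serial dependence chains defeating a fixed-width issue template — is what the "3 years old MIPS was faster" anecdote above is really about.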
  • by Flaming Foobar ( 597181 ) on Friday November 12, 2004 @07:29AM (#10796682)

    Linus was right [theinquirer.net], then, I guess...

  • by Anonymous Coward on Friday November 12, 2004 @07:35AM (#10796694)
    SGI has bet the company no more on Itanium than HP has. Sure, they'd rather stick to their current line of IA64-based products for a little while, but if Itanium dies, SGI can still move to another chip. No doubt the costs would be significant, but I wouldn't expect it to be so bad that SGI would go belly up over it.

    Why?

    Because their core technology seems to be relatively independent of the CPU. The Altix line really just builds on the Origin line. It's the connections between machines (NUMAflex), and their understanding of high-performance computing in general, that will keep them afloat.

    What's more interesting is: what would they move to if IA64 were discontinued (which is still very unlikely, but let's assume it happens)? AMD64 is an option; Cray are showing it works well with their Red Storm machines. Or perhaps SGI can find an ally in IBM with their POWER chips. The latter is IMO more likely, because SGI is a firm believer in RISC, and if IA64 dies, POWER is the last in the line of RISC chips with competitive performance. Or perhaps they can revive their MIPS-based lines.

    What's actually more interesting is: what is HP going to do when more vendors move away from IA64 and they risk ending up being the only ones selling them?

  • by DrYak ( 748999 ) on Friday November 12, 2004 @07:36AM (#10796696) Homepage
    In a way, this shows us the limits of closed-source development:
    Companies have to concentrate their (limited) efforts on a few software/platform combinations. They cannot develop a version for every CPU existing on this planet.

    Microsoft already has a lot of work to do (Longhorn, 64-bit XP, XP Reloaded, still supporting the deprecated Win98, developing specials like WinCE, WinMedia, etc.), so they just cannot afford to support more than 2 CPU types.

    In open source, it's the opposite. Because the source is open, even if the main developer can only target 1 CPU type, everyone is free to try to recompile/port the code to another architecture.
    Just have a look at the impressive number of architectures supported by Linux (including weird platforms like cellphones and gaming consoles [Dreamcast/Xbox/GameCube]).

    Maybe this trend will change if Microsoft finds a way to use a "write once, run everywhere" VM like .NET for its OSes. But until then, they are tied to Intel x86, with a few exceptions here and there...
  • AMD stock (Score:4, Interesting)

    by Sai Babu ( 827212 ) on Friday November 12, 2004 @07:38AM (#10796700) Homepage
    Wonder how this will affect the market.
    AMD 2 year chart [google.com].
    I bought a little back when the Athlon 64 was announced. Trading volume has been up since. The Opteron announcement didn't seem to make much of an impression on the market.
    Post-election, the market's been up overall.
    Do you think we'll see a run-up to $30 over the next couple of days?
    Now I'm feeling like I should have bought a bit more AMD, but historically I've been bitten on almost every investment decision based on the technical merits of the product.
    What's the feeling out there in /. land? Do the big M$ gorilla's 'endorsement', Sun's decision to use Opteron in their low-end servers, AMD's technical superiority, Intel's seeming mis-steps, the overall market upswing, the fact that the A64 is a NICE piece of hardware, and the fact that AMD is NOT Intel make AMD a very attractive investment?
    What about AMD taking on $600,000,000 of debt the other day and adding a guy from Radio Shack (see latest SEC filing)?
    My favorite way of looking at stocks (useless for decisions, as I still don't grok it) is the correlation between analyst recommendations and price/volume.
    What sort of analysis do these guys do? Ouija board?

    BUT wait. What I really want to know is how you /.'ers who invest are planning to react to this Intel news.

  • by Fallen Andy ( 795676 ) on Friday November 12, 2004 @07:45AM (#10796719)
    http://www.theinquirer.net/?article=14310
    (the link to the video is at the end).

    I think we all know EPIC is dead. So is Moore's law. Get used to learning how to parallelize your program.

    Itanic, I knew it not at all. Lots of 64-bit CPUs out there means we can (finally) write nice emulators for the 36-bit ones (grins)

  • by Craig Ringer ( 302899 ) on Friday November 12, 2004 @07:50AM (#10796730) Homepage Journal
    *looks over at PPro firewall*

    The PPro may have been over-hyped, but it _was_ a seriously good chip. In fact, it heralded the best line of CPUs Intel ever produced, the PII/PIII/PM line. They're currently in the process of ditching the Pentium 4 to go _back_ to the PM, which is at heart a PPro. The PPro also spawned the Xeon line, until Intel moved it across to the Pentium 4 a while ago. The PIII Xeon was a _mighty_ fine chip.

    Overall ... I'd argue that the PPro really did deliver.
  • by gilesjuk ( 604902 ) <giles.jones@ze[ ]o.uk ['n.c' in gap]> on Friday November 12, 2004 @08:08AM (#10796781)
    While the core was an improvement on the previous chips, the original Pentium Pro was rather expensive and in my eyes didn't really offer the sort of performance gains to justify it. Likewise with the initial P4 processors.

    Subsequent processors based on the core have been better. But going from a 750MHz PIII to a 900MHz Athlon was an incredible leap in performance, so I'd argue that AMD have forced Intel to buck up their ideas.
  • by Anonymous Coward on Friday November 12, 2004 @08:18AM (#10796813)
    I'd like to add that its bad 16-bit performance was actually Microsoft's fault, because they had told Intel that Windows 95 would be 32-bit. Intel designed the chip with exclusively 32-bit performance in mind. The second Pentium Pro (the "Pentium II") had the 16-bit performance issues fixed.
  • by Lonewolf666 ( 259450 ) on Friday November 12, 2004 @08:22AM (#10796825)
    The initial P6 had bad 16-bit performance, which made it a bad choice for consumers at that time, but it was very competitive in normal 32-bit mode, ideal for NT, Linux and other PC Unixen. The 2nd iteration of the P6 architecture fixed the 16-bit issue and was enormously successful.
    Sorry, wrong on the 16-bit issue. The 2nd iteration of the P6 architecture, aka the Pentium III, still sucked with 16-bit software. It was saved by the introduction of 32-bit software and a (mostly) 32-bit OS.

    I remember a software project I was working on in 1998, where we still used Delphi 1 (16-bit) because the customer still had a Win3.11 environment.

    When we ran that program side by side on a 200MHz Pentium MMX and a 450MHz Pentium III, the old Pentium MMX was roughly twice as fast.
  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Friday November 12, 2004 @09:48AM (#10797155)
    Comment removed based on user account deletion
  • by 59Bassman ( 749855 ) on Friday November 12, 2004 @10:31AM (#10797444) Journal
    The 2004 SuperComputing conference in Pittsburgh is just wrapping up today. I've spent the last few days soaking it in, and chuckling with other folks about the Windows Cluster concept.

    However, Microsoft isn't targeting techies. They're not going after Linux users, for sure. They know that their solutions are a total flop where scaling is concerned, and it appears that they're conceding the mid- and high-end markets to the *nix vendors. MS is going after the small ones. Don't know anything about Linux but think you need a bit more power than a desktop? No problem! Run Windows Cluster Edition on your 24-node cluster!

    Hell of a marketing strategy. You take a company that everybody knows, and leverage it into the small-cluster market. I don't think MS honestly thinks they can compete with, say, a 256-node SGI Altix, and certainly not one of the big Crays, but they can compete with Penguin, Linux Networx, Verari, etc. in the small-scale market (even though those companies would rather sell you a 128+ node system).

    Cray, SGI, and the other big-system experts can only sell so many large-scale parallel systems per year. Microsoft would rather have a few thousand small systems than a couple of Red Storm-size machines, from the look of it.

    And on the Itanic: Intel kept screaming through the conference that "IT'S THE COMPILER!!!! YOU NEED AN OPTIMIZED COMPILER!" Apparently, you will likely need to re-engineer the code as well. The best fun of the week was hearing one smaller cluster vendor start bashing the Itanic in front of a mixed crowd. After a couple of minutes an Intel guy announced his affiliation, and the cluster rep turned about fifteen shades of pale. Was amazingly good entertainment.

  • Re:Wrong... (Score:2, Interesting)

    by MrMr ( 219533 ) on Friday November 12, 2004 @11:00AM (#10797666)
    I guess it shows that Microsoft cannot make the 'quantum leap' in scalability that the Linux kernel made (with a lot of help from SGI, who had been there on their MIPS platforms).

    Perhaps they should have hired some SGI engineers (instead of the CEO...)
  • Re:Future (Score:3, Interesting)

    by po8 ( 187055 ) on Friday November 12, 2004 @11:13AM (#10797755)

    Identifying THREE parallel instructions at compile time, ALL THE TIME, is damn hard, and normally the compilers fail. Hence slow.

    Actually, one of my MS students and I did some work, later extended in an MS thesis [passagen.se] by Svante Arvedahl, that showed it is pretty straightforward to produce decently scheduled code for the IA-64 on a JIT basis using combinatorial search techniques and related heuristics. The cool part is that you can then use HotSpot(TM)-type techniques to get your instruction-level parallelism way up.

    If the IA-64 hadn't tanked so badly in the marketplace, I'd still be working on this stuff...
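    The combinatorial-search idea in the parent can be caricatured in a few lines (a toy Python sketch under my own assumptions, not the thesis code): exhaustively try orderings of a tiny basic block and keep the dependence-legal one that packs into the fewest 3-slot bundles.

```python
from itertools import permutations

def pack(order, deps):
    """Greedily pack an op order into 3-slot bundles; an op may not issue
    in the same bundle as a producer it depends on. Returns None if the
    order violates a dependency."""
    retired, bundles, cur = set(), [], []
    for op in order:
        if len(cur) == 3 or not deps[op] <= retired:
            retired.update(cur)             # close the current bundle
            bundles.append(cur)
            cur = []
        if not deps[op] <= retired:
            return None                     # producer not yet retired: illegal
        cur.append(op)
    bundles.append(cur)
    return bundles

def best_schedule(deps):
    """Exhaustive search over all orderings -- fine for a toy block, which
    is why real compilers fall back to heuristics on anything bigger."""
    legal = (pack(p, deps) for p in permutations(deps))
    return min((b for b in legal if b is not None), key=len)

# A diamond: b and c depend on a; d depends on both. Best is 3 bundles.
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
print(len(best_schedule(deps)))             # 3
```

    Even this toy shows why the search blows up: the candidate space is factorial in block size, so a JIT has to prune aggressively, which is where the heuristics come in.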

  • Re:Wrong... (Score:5, Interesting)

    by Paul Jakma ( 2677 ) on Friday November 12, 2004 @11:14AM (#10797763) Homepage Journal
    Itanium CPU limit: 512 CPUs.

    SGI is being very successful with its 512-Itanium machines running Linux.

    Note that SGI are doing this with very, very special hardware. IIRC each CPU brick in an Origin has 4 Itanics. All these bricks are then interconnected with very, very special CPU-interconnect routers.

    That these machines go to 512 CPUs has *nothing* to do with the CPUs being Itanic; it's all down to the ccNUMA interconnect technology (which SGI initially acquired from Cray). If you need further convincing of this, note that SGI's Origin 3k machines have essentially the same architecture, but use MIPS CPUs. This architecture could be applied to Opteron too, and probably with less effort, as Opteron natively supports ccNUMA and comes with CPU networking built in.

  • Re:Wrong... (Score:3, Interesting)

    by cmaxx ( 7796 ) on Friday November 12, 2004 @01:04PM (#10799088)
    Ironically, Opteron would be a worse fit than, say, Xeon for the engineering that SGI have done for MIPS and Itanium, exactly because it has native support for ccNUMA and has integral memory controllers.

    Someone else pointed out the scaling numbers. Opteron scales to better than 8 CPUs, but 8 CPUs is what you can do without glue chipsets, which is pretty darned great.

    Newisys have a chipset that extends the cache coherence and addressing of Opterons so that you can put up to 32 in a system.

    When dual-core Opterons are available, that'll be 64 cores in a single system, which is where Altix was about a year ago.

    This is all a reimplementation of stuff that SGI are already doing with CPUs that do all their memory access over the same buses as everything else.

    So we're very unlikely to get SGI goodness and Opteron goodness in the same box any time soon. Which is a little sad, but no biggy really.

    Xeons kick butt too; the top Xeon, Opteron and Itanium performance numbers and prices (for server use, remember) are actually surprisingly close, given that they're all clean-sheet approaches wrt each other.

  • Re:Wrong... (Score:2, Interesting)

    by ameline ( 771895 ) <{ian.ameline} {at} {gmail.com}> on Friday November 12, 2004 @01:12PM (#10799236) Homepage Journal
    Actually, that's not correct -- SGI had the MIPS Origin machines working in-house before the acquisition of Cray. They called the interconnect "CrayLink", but it had nothing to do with Cray and everything to do with some pointy-haired MBA bean-counter type trying to "brand" something. That said, most of the post above is correct, but one problem with scaling the Opteron up to such large systems is the physical address space in current implementations: it's 40 bits. For large ccNUMA machines, you really want a minimum of 48 bits of physical address space. The other thing the hub chip does on SGI machines is handle the page-directory stuff, counting remote vs. local accesses and automatically migrating pages to nodes that are making heavy use of them. Most of this cool stuff is patented, so I doubt the Opteron is doing it.
  • Re:Itanic Itanium (Score:3, Interesting)

    by Jeremiah Cornelius ( 137 ) on Friday November 12, 2004 @03:01PM (#10800702) Homepage Journal
    Intel sank the Alpha to back this loser!

    At least the better features of the Alpha design were cribbed into the PIII and PIV designs...

  • Re:Wrong... (Score:1, Interesting)

    by Anonymous Coward on Friday November 12, 2004 @04:43PM (#10801806)
    It seems that Transmeta's Efficeon processors would be great for clustering... cheap and very cool (they don't need a fan)... not as fast, maybe, but cheap should count for something.

