Microsoft Dropping Itanium Support For Clusters
upsidedown_duck writes "According to an article at TheStreet.com, Microsoft is opting not to support Itanium in its upcoming release of Windows Server 2003 Compute Cluster Edition. Instead, Microsoft will focus on AMD's offerings and Intel's Xeon."
Itanium is Linux bound (Score:5, Interesting)
The correct response: So what? (Score:5, Interesting)
So Intel now gets a boost to its Xeon line of chips, which leads the high-performance server market by share. With this, Intel can put more effort into ramping Xeon production and driving prices down there, while continuing to produce the super-fast Itaniums for servers running Linux or some proprietary supercomputer operating system.
The demand for supercomputers is low, and it will always be low. As technology progresses, normal users like us reap the rewards of this high technology, and eventually those supercomputers will be available to us on a single board. The supercomputers of that future will be super-supercomputers, and the demand for them will still be small.
So let the Itanium fit its niche in the super-highend market. Let the Xeons fill in the normal server market. And let Microsoft stay out of the supercomputer market where it simply doesn't fit.
Makes economic sense (Score:4, Interesting)
Seems like the Wintel alliance isn't so strong these days. Microsoft opting for IBM's PPC processor for the Xbox 2 is another example of how they're looking at what hardware is best for the job, instead of at what their traditional partners can offer.
Re:Windows Supercomputer? (Score:4, Interesting)
I don't currently see any special reason for anyone to run Windows at the high end; those systems are so specialised anyway that you can hire the right staff.
But the fact is, many of the applications that low-end supercomputing could be used for are quite common in many environments. Couple that with the fact that a great many companies have deeply entrenched Microsoft-only IT cultures, and I think there will be quite a few "supercomputers" running Windows.
Please note the use of "supercomputer" in quotes: most of these systems are not really going to be supercomputers, more something like "mini-supers".
Re:The correct response: So what? (Score:2, Interesting)
Despite all of this I agree with you... MS doesn't belong in the supercomputer market. But I doubt Intel spent billions developing the Itanium just so it could be used in a few supercomputers worldwide. They tried for mass-market servers and failed. CPUs are a very low-margin business, and the failure of such an investment just shaves their margins even thinner.
Re:Future (Score:5, Interesting)
No it wasn't. Intel developed the Itanic as a "post-RISC" design to crush all the 64-bit RISC processors and to take over the workstation and server market. It was designed to be _the_ volume 64-bit processor, with spectacular performance and a low price due to economies of scale.
Those of us with a passing interest in microprocessors knew it was a turkey.
The only thing the Itanic has going for it is high SPEC FP scores. On everything else it is either poor or mediocre. It is hot, power-hungry, and expensive, and it has virtually no software support, no developer community, etc.
If you look closely at the "benchmark" comparisons that HP and Intel put out for public consumption, you will see they usually compare only against very old models from competitors. Also notice the kinds of workloads they compare and the configurations of the machines.
If the rumours are true, SGI recently gave NASA a free Itanic supercomputer, accounting for a whole 10% of this year's Itanic shipments. That sounds like a processor in trouble.
The Itanic was a solution looking for a problem. It was based on outdated ideas of processor design; it was late, over-engineered, and basically a damp squib for all but the handful of people who can afford it for number-crunching. That is a far cry from the de facto, mass-market, low-cost 64-bit processor destined for world domination that Intel intended it to be.
Re:Future (Score:5, Interesting)
I have to second that. My feeling is that when they sat down with a blank piece of paper to design this chip, they only invited hardware people. All the tough stuff was moved into software.
I think the lack of out-of-order execution really hurts them. If you don't do an amazing job with the compiler, the processor moves like a slug. At the supercomputer centre I used to use, they "upgraded" their 512-processor MIPS machine by adding a 400-processor (or so) Itanic box. For a lot of things, without extra optimization of the source code (i.e. just compiling the thing, assuming you could get it to compile, but that is another story), the Itaniums were SLOWER than the three-year-old MIPS processors. It takes a lot of tweaking to get anything like peak performance.
There are 3 FPU pipelines that you have to fill at compile time to get maximum performance out of the thing. Identifying THREE parallel instructions at compile time, ALL THE TIME, is damn hard, and normally the compilers fail. Hence slow.
It is just too hard to get anything like the theoretical peak performance out of the thing for anything other than benchmarks.
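To make the parent's point concrete, here is a minimal C sketch (my own illustration, nothing from any Itanium compiler). A reduction written as one dependency chain gives a static scheduler nothing to pack into parallel issue slots; splitting it into three independent accumulators does, which is exactly the kind of hand-tweaking being complained about above. (The unrolled version may round differently, since FP addition isn't associative.)

#include <stdio.h>

#define N 1000000

/* One dependency chain: every add needs the previous sum, so a
 * compile-time scheduler cannot issue these FP ops in parallel. */
double sum_serial(const double *a) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        s += a[i];
    return s;
}

/* Three independent accumulators: the adds in each iteration are
 * independent, so a static scheduler can in principle fill three
 * FP slots at once. */
double sum_unrolled(const double *a) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0;
    int i;
    for (i = 0; i + 2 < N; i += 3) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
    }
    for (; i < N; i++)  /* leftover elements */
        s0 += a[i];
    return s0 + s1 + s2;
}

int main(void) {
    static double a[N];
    for (int i = 0; i < N; i++)
        a[i] = 1.0;
    printf("%f %f\n", sum_serial(a), sum_unrolled(a));
    return 0;
}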
Linus was onto something... (Score:5, Interesting)
Linus was right [theinquirer.net], then, I guess...
Re:Itanium is Linux bound (Score:4, Interesting)
Why?
Because their core technology seems to be relatively independent of the CPU. The Altix line really just builds on the Origin line. It's the connections between machines (NUMAflex), and their understanding of high-performance computing in general, that will keep them afloat.
What's more interesting is: what would they move to if IA-64 were discontinued (which is still very unlikely, but let's assume it happens)? AMD64 is an option; Cray are showing it works well with their Red Storm machine. Or perhaps SGI can find an ally in IBM with their POWER chips. The latter is IMO more likely, because SGI is a firm believer in RISC, and if IA-64 dies, POWER is the last in the line of RISC chips with competitive performance. Or perhaps they could revive their MIPS-based lines.
What's actually more interesting is: what is HP going to do when more vendors move away from IA-64 and they risk ending up being the only ones selling it?
The limits of closed-source development (Score:5, Interesting)
Companies have to concentrate their (limited) efforts on a few software/platform combinations. They cannot develop a version for every CPU that exists on this planet.
Microsoft already has a lot of work to do (Longhorn, 64-bit XP, XP Reloaded, still supporting the deprecated Win98, developing specials like WinCE, WinMedia, etc.), so they just cannot afford to support more than two CPU types.
In open source, it's the opposite. Because the source is open, even if the main developer can only target one CPU type, everyone is free to try to recompile or port the code to another architecture.
Just look at the impressive number of architectures supported by Linux (including weird platforms like cellphones and gaming consoles [Dreamcast/Xbox/GameCube]).
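To see how far "just recompile it" goes, here is a hedged C sketch using GCC's predefined target macros (the macros are real; the program is a toy). The same source file builds unchanged on every one of these architectures.

#include <stdio.h>

/* GCC predefines one of these macros per target, so a single
 * source tree can report (or adapt to) whatever CPU it was
 * compiled for. */
const char *target_arch(void) {
#if defined(__ia64__)
    return "IA-64 (Itanium)";
#elif defined(__x86_64__)
    return "AMD64 / x86-64";
#elif defined(__i386__)
    return "IA-32";
#elif defined(__powerpc__)
    return "PowerPC";
#elif defined(__mips__)
    return "MIPS";
#else
    return "something else entirely";
#endif
}

int main(void) {
    printf("Built for: %s\n", target_arch());
    return 0;
}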
Maybe this trend will change if Microsoft finds a way to use a "write once, run everywhere" VM like
AMD stock (Score:4, Interesting)
AMD 2 year chart [google.com].
I bought a little back when the Athlon 64 was announced. Trading volume has been up since. The Opteron announcement didn't seem to make much of an impression on the market.
Post-election, the market's been up overall.
Do you think we'll see a run-up to $30 over the next couple of days?
Now I'm feeling like I should have bought a bit more AMD, but historically I've been bitten on almost every investment decision based on the technical merits of the product.
What's the feeling out there in
What about AMD taking on $600,000,000 of debt the other day and adding a guy from Radio Shack (see the latest SEC filing)?
My favorite way of looking at stocks (useless for decisions, as I still don't grok it) is the correlation between analyst recommendations and price/volume.
What sort of analysis do these guys do? Ouija board?
BUT wait. What I really want to know is how you
See also this ref (a bit old, 1hr+ long, but fun) (Score:3, Interesting)
(the link to the video is at the end).
I think we all know EPIC is dead. So is Moore's law.
Get used to learning how to parallelize (??) your program.
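For anyone wondering what that looks like in practice, here is a minimal POSIX threads sketch in C (a toy reduction of my own, not anything from the talk): split the data across threads, sum each slice, combine. Build with -lpthread.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 4000000   /* divisible by NTHREADS */

static double a[N];
static double partial[NTHREADS];

/* Each thread sums its own slice of the array. */
static void *worker(void *arg) {
    long t = (long)arg;
    long lo = t * (N / NTHREADS);
    long hi = lo + N / NTHREADS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += a[i];
    partial[t] = s;
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < N; i++)
        a[i] = 1.0;

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];   /* combine after each thread finishes */
    }
    printf("sum = %f\n", total);
    return 0;
}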
The Itanic, I knew it not at all. Lots of 64-bit CPUs out there means we can (finally) write nice emulators for the 36-bit ones (grins).
Re:Makes economic sense (Score:3, Interesting)
The PPro may have been over-hyped, but it _was_ a seriously good chip. In fact, it heralded the best line of CPUs Intel ever produced: the PII/PIII/PM line. They're currently in the process of ditching the Pentium 4 to go _back_ to the PM, which is at heart a PPro. The PPro also spawned the Xeon line, until Intel moved it across to the Pentium 4 a while ago. The PIII Xeon was a _mighty_ fine chip.
Overall
Re:Makes economic sense (Score:3, Interesting)
Subsequent processors based on the core have been better. But going from a 750 MHz PIII to a 900 MHz Athlon was an incredible leap in performance, so I'd argue that AMD forced Intel to buck up their ideas.
Re:Makes economic sense (Score:1, Interesting)
Re:Makes economic sense (Score:5, Interesting)
Sorry, wrong on the 16-bit issue. The second iteration of the P6 architecture, aka the Pentium III, still sucked with 16-bit software. It was saved by the arrival of 32-bit software and a (mostly) 32-bit OS.
I remember a software project I was working on in 1998 where we still used Delphi 1 (16-bit) because the customer still had a Win3.11 environment.
When we ran that program side by side on a 200 MHz Pentium MMX and a 450 MHz Pentium III, the old Pentium MMX was roughly twice as fast.
Comment removed (Score:4, Interesting)
Re:Windows Supercomputer? (Score:3, Interesting)
However, Microsoft isn't targeting techies. They're not going after Linux users, for sure. They know that their solutions are a total flop where scaling is concerned, and it appears that they're conceding the mid- and high-end markets to the *nix vendors. MS is going after the small ones. Don't know anything about Linux but think you need a bit more power than a desktop? No problem! Run Windows Cluster Edition on your 24-node cluster!
Hell of a marketing strategy. You take a company that everybody knows and leverage it into the small-cluster market. I don't think MS honestly believes it can compete with, say, a 256-node SGI Altix, and certainly not with one of the big Crays, but it can compete with Penguin, Linux Networx, Verari, etc. in the small-scale market (even though those companies would rather sell you a 128+ node system).
Cray, SGI, and the other big-system experts can only sell so many large-scale parallel systems per year. From the look of it, Microsoft would rather have a few thousand small systems than a couple of Red Storm-sized machines.
And on the Itanic: Intel kept screaming through the conference that "IT'S THE COMPILER!!!! YOU NEED AN OPTIMIZED COMPILER!" Apparently, you will likely need to re-engineer the code as well. The best fun of the week was hearing one smaller cluster vendor start bashing the Itanic in front of a mixed crowd. After a couple of minutes an Intel guy announced his affiliation, and the cluster rep turned about fifteen shades of pale. It was amazingly good entertainment.
Re:Wrong... (Score:2, Interesting)
Perhaps they should have hired some SGI engineers (instead of the CEO...)
Re:Future (Score:3, Interesting)
Identifying THREE parallel instructions at compile time, ALL THE TIME, is damn hard, and normally the compilers fail. Hence slow.
Actually, one of my MS students and I did some work, later extended in an MS thesis [passagen.se] by Svante Arvedahl, showing that it is pretty straightforward to produce decently scheduled code for the IA-64 on a JIT basis using combinatorial search techniques and related heuristics. The cool part is that you can then use HotSpot(TM)-type techniques to get your instruction-level parallelism way up.
If the IA-64 hadn't tanked so badly in the marketplace, I'd still be working on this stuff...
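For flavour, here is a toy C sketch of the greedy list-scheduling idea (emphatically not the thesis's algorithm; the instruction format and register count are made up). Instructions are packed into 3-slot bundles, IA-64 style, until a register dependence or a full bundle forces a new one.

#include <stdio.h>
#include <string.h>

#define NREG  8   /* toy register file */
#define SLOTS 3   /* ops per bundle, like IA-64's 3-op bundles */

typedef struct { int dst, src1, src2; } Insn;  /* dst = src1 op src2 */

/* Walk the instructions in order; an instruction joins the current
 * bundle unless it touches a register already written in this
 * bundle. Real schedulers also model latencies and reorder across
 * larger windows; this is only the core packing idea. */
void schedule(const Insn *prog, int n) {
    int written[NREG] = {0};
    int used = 0, bundle = 0;

    for (int i = 0; i < n; i++) {
        const Insn *in = &prog[i];
        int conflict = written[in->src1] || written[in->src2]
                    || written[in->dst];
        if (conflict || used == SLOTS) {  /* start a fresh bundle */
            memset(written, 0, sizeof written);
            used = 0;
            bundle++;
        }
        printf("bundle %d, slot %d: r%d = r%d op r%d\n",
               bundle, used, in->dst, in->src1, in->src2);
        written[in->dst] = 1;
        used++;
    }
}

int main(void) {
    Insn prog[] = {
        {3, 0, 1}, {5, 1, 2}, {6, 0, 2},  /* three independent ops */
        {4, 3, 2},                        /* reads r3: next bundle */
    };
    schedule(prog, 4);
    return 0;
}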
Re:Wrong... (Score:5, Interesting)
SGI is being very successful with its 512-Itanium machines running Linux.
Note that SGI are doing this with very, very special hardware. IIRC, each CPU brick in an Altix has 4 Itanics. All these bricks are then interconnected with very special CPU interconnect routers.
That these machines scale to 512 CPUs has *nothing* to do with the CPUs being Itanics; it's all down to the ccNUMA interconnect technology (which SGI originally acquired from Cray). If you need further convincing, note that SGI's Origin 3000 machines have essentially the same architecture but use MIPS CPUs. The architecture could be applied to Opteron too, probably with less effort, as Opteron natively supports ccNUMA and comes with CPU networking built in.
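To illustrate what ccNUMA awareness means for software, here is a minimal Linux sketch using libnuma (the calls are real; the node choice and buffer size are arbitrary). Memory placed on the node that touches it avoids extra hops across the interconnect. Build with -lnuma.

#include <numa.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this machine\n");
        return 1;
    }

    printf("NUMA nodes: %d\n", numa_max_node() + 1);

    size_t sz = 64UL * 1024 * 1024;
    /* Pin a 64 MB buffer to node 0; CPUs on other nodes would pay
     * interconnect latency to reach it. */
    double *buf = numa_alloc_onnode(sz, 0);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }
    for (size_t i = 0; i < sz / sizeof(double); i++)
        buf[i] = 0.0;   /* first touch commits the pages */

    numa_free(buf, sz);
    return 0;
}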
Re:Wrong... (Score:3, Interesting)
Someone else pointed out the scaling numbers. Opteron scales beyond 8 CPUs; 8 CPUs is what you can do without glue chipsets, which is pretty darned great.
Newisys have a chipset that extends the cache coherence and addressing of Opterons so that you can put up to 32 in a system.
When dual-core Opterons are available, that'll be 64 cores in a single system, which is where Altix was about a year ago.
This is all reimplementation of stuff that SGI are already doing, with CPUs that do all their memory access over the same buses as they do everything else.
So we're very unlikely to get SGI goodness and Opteron goodness in the same box any time soon. Which is a little sad, but no biggy really.
Xeons kick butt too: the top Xeon, Opteron, and Itanium performance numbers and prices (for server use, remember) are actually surprisingly close, given that they're all clean-sheet approaches with respect to each other.
Re:Wrong... (Score:2, Interesting)
Re:Itanic Itanium (Score:3, Interesting)
At least the better features of the Alpha design were cribbed into the PIII and PIV designs...
Re:Wrong... (Score:1, Interesting)