Intel and HP Commit $10 billion to Boost Itanium
YesSir writes "Support for the high-end processor that has had difficulty catching on is coming in from its co-developers, Intel and HP. 'The $10 billion investment is a statement that we want to accelerate as a unified body,' said Tom Kilroy, general manager of Intel’s digital enterprise group."
Last Gasp for Big Iron? (Score:5, Insightful)
So as I'm reading this, there's a big plug for the AMD Opteron just below the article. This would appear to me to be the threat to the Itanium: the same thing that has effectively killed big iron -- inexpensive commodity hardware. Sink a few thousand into Opteron systems and run what you already have, or sink far larger amounts into some gobbledygook system which won't run what your multiprocessor system does, except under software emulation. Sorry, HP/Intel and everyone else dumping money down this rabbit hole, I think you've lost the plot. Today's supercomputers are parallel computing done with 64-bit x86 processors, like the AMD Opteron. The glue is in the software, not in big fat chunks of expensive silicon.
If you're still not convinced, I might have a few megs of core to sell you.
Re:Last Gasp for Big Iron? (Score:4, Insightful)
Re:Last Gasp for Big Iron? (Score:2, Interesting)
It smacks of a prior business arrangement HP, et al., agreed to back in the days of yore, when Itanium was supposed to be "the next big thing" and Intel was telling everyone they wouldn't need the 64-bit CPUs AMD was gearing up to peddle. Intel's calling in all those promissory notes after making compilers and stuff available for so long. Given their druthers, I think everyone else would rather not.
Re:Last Gasp for Big Iron? (Score:4, Funny)
This ain't the time for Intel to walk away.
This is the time for Intel to RUN RUN RUN!!!
Re:Last Gasp for Big Iron? (Score:4, Interesting)
I've been an HP-UX sysadmin for 9 years (I help manage 16+ large servers). We started buying a number of Itanium boxes and have moved some apps to them. The IA64 chip is much faster than PA-RISC (40-60% improvements for us) and overall cheaper to buy.
However, the Opteron machines we have running Linux blow them away at less than half the price fully loaded. I honestly don't see IA64 lasting another 2-3 years, and I'm already making plans to migrate what we can from PA-RISC to Linux-based machines instead of IA64.
I really like HP-UX. It's not the most robust OS, but it's been rock solid for us over the years. Very, very expensive, like all closed Unix vendors, but for a large business it was money well spent at the time compared to Windows NT.
This is NOT its last gasp. (Score:4, Funny)
I'm sure that SOMEONE out there is willing to pour money down the toilet for this platform. And they'll make HP/Intel very, very happy.
Then again, there are people who're into snorting drain cleaner too...
Re:This is NOT its last gasp. (Score:4, Funny)
Please.
Fortune 500 companies mainline.
Re:Last Gasp for Big Iron? (Score:2, Insightful)
What Intel and HP need to do is admit that the chip is a waste and just let it die. Like dumpy the waste man from Drawn Together, you can hear the Itanium saying "KILL ME".
Re:Last Gasp for Big Iron? (Score:4, Insightful)
Re:Last Gasp for Big Iron? (Score:2)
It seems Intel and HP are like the macho guy who is lost and is obviously going the wrong way, but rather than admitting he made a mistake he keeps going further.
Re:Last Gasp for Big Iron? (Score:3, Funny)
Re:Last Gasp for Big Iron? (Score:2, Insightful)
Clusters are only really good for embarrassingly parallel problems. The interconnects just can't be as fast as a local bus.
How's this for an alternative: "Commodity Big Iron"?
Why can't a supercomputer be based largely on off-the-shelf CPUs, RAM, Drives, etc.? I know important pieces are missing, but it should be possible to open this stuff up, and have it become a commodity
Re:Last Gasp for Big Iron? (Score:5, Insightful)
You are sort of right about that, overall. It's just that science isn't standing still, and most new algorithms are specifically invented to be parallelized.
Most problems in physics are solved with matrices, and matrices are of course easy to parallelize. And physics is about the only field that's buying most of the big iron anyway.
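As a rough sketch of that point (a made-up example, not any particular physics code), here's how a plain matrix multiply spreads across processors with a single OpenMP pragma, since every output element can be computed independently:

    /* Minimal sketch: dense matrix multiply parallelized with OpenMP.
       Sizes and array names are invented for the example.
       Compile with something like: gcc -O2 -fopenmp matmul.c */
    #include <stdio.h>

    #define N 512

    static double a[N][N], b[N][N], c[N][N];

    int main(void)
    {
        /* c[i][j] depends only on row i of a and column j of b, so the
           outer loop splits across processors (or, with more plumbing,
           across cluster nodes) with no coordination between iterations. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double sum = 0.0;
                for (int k = 0; k < N; k++)
                    sum += a[i][k] * b[k][j];
                c[i][j] = sum;
            }

        printf("c[0][0] = %f\n", c[0][0]);
        return 0;
    }

The same independence is what lets the work be farmed out to cluster nodes instead of extra sockets in one box; only the plumbing changes.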
Nowadays, most weather prediction tasks, astronomical tasks, optical tasks and micromeasurement tasks are also optimized for clusters, not big iron. It's not about top performance; it's about the price/performance ratio. For the same money people can buy a cluster with, say, 10 times more raw performance to run suboptimal (say, 2-3 times slower) algorithms, and the tasks still get done quicker, and cheaper. Yeah, clusters have higher latencies, but they are still dominated by batch jobs, not interactive jobs. Big iron has better interconnects, but those redundant interconnects take the lion's share of such a system's cost.
In fact, the main reason this happened (clusters took over from big iron) is RAM prices. In my university days (early-to-mid 90s), we were all occupied with the shared memory problem: RAM was very expensive. Now people go to a general store, pick up several gigabit NICs and several GBs of RAM, and pay a nickel for all of it.
Ask anyone in computer science now: everyone has started throwing RAM at the latency problems of clusters. It looks bad on paper and in theory. But in practice it just works.
P.S. On topic: IA-64 has great performance. But again, on a price/performance scale it loses immediately to Intel's own Xeons and AMD's Opterons. Intel has constantly refused to shift its Itanic focus from features to more affordable prices. The story has been quite well covered by The Register; check the posted links.
Re:Last Gasp for Big Iron? (Score:2)
Also, you can get some pretty cool (ahem) machines running Itanium (the SGI Altix, for example) with 512 processors and 128TB of RAM in a single ima
Re:Last Gasp for Big Iron? (Score:2)
The software doesn't necessarily work so well for efficiency. Even the Opteron systems need "big fat chunks of expensive silicon" to run more than eight physical processor chips (currently, 16 cores).
Re:Last Gasp for INTEL Big Iron? (Score:2)
Big iron is just fine. The deal is that Intel had delusions as to what it meant and tried to apply PC standards to a world which lives at a much higher level.
The PowerPC architecture is alive and well in true big iron. It will even find its way into IBM mainframe technology. Server farms are not big iron; they are glorified PCs with a little bit more reliability, yet they suffer from all the issues PCs normally do.
The big iron at work here, Tandem, IBM Mainframe, an
Re:Last Gasp for Big Iron? (Score:3, Insightful)
Alpha (Score:5, Funny)
Too bad HP won't spend $$$ to bring back the Alpha.
I miss architecture diversity....
Re:Alpha (Score:5, Interesting)
Could Jesus microwave a burrito so hot that he himself could not eat it?
Re:Alpha (Score:3, Funny)
More seriously, though... The Alpha was kicking ass and taking names back in the day. 64-bit and ran at 200 MHz when the Pentium was barely able to fart out 66 MHz.
The funny thing is that most places just don't run Itanium chips - they use Xenon chips instead.
Re: (Score:2)
Re:Alpha (Score:2)
the reason why people would use videogame consoles [wikipedia.org] for serious server tasks really escapes me...
oh, you meant Xeon [intel.com], right? sorry, move on, nothing to see here...
Re:Alpha (Score:2)
Re:Alpha (Score:5, Interesting)
It seems to me that just about all the huge advantages that alternative architectures (like the Alpha) held over x86 have been washed away in the past few years.
64-bit memory space. Insanely large cache. Very low-latency access to RAM. Incredible memory throughput. PCI-X/PCI Express slots on cheap motherboards. Seriously high-end graphics. DMA. SMP. Built-in 1000Mbps NICs. RAID. etc.
What advantages could something like Alpha have over x86 now? A few years ago, I was anxious to jump ship to another platform, but with the introduction of the Opteron and kin, I'd say I'm quite happy with x86 now.
The only feature I really want now is a new way to handle interrupts... Then simple things like copying CDs or a little network traffic won't bring PCs to a crawl. Perhaps add a socket for an FPGA or another simple processor to handle those tasks specifically, like the math coprocessors of the old days.
Re:Alpha (Score:5, Interesting)
In case you're wondering, no, parallel computing was never a good option. There's a large class of scientific problems that just don't work very well in parallel, because of large-wavelength correlations that make it painful in the extreme to write a parallel algorithm, if you can do it at all.
Re:Alpha (Score:3, Interesting)
See, with that big polymer backbone you can't break the system up into cells or anything, and you can't divide up the problem in the time domain, because of course what happens at t+dt depends very much on what happened at t. So you're stuck; you've just got to do the simulation in a single thread.
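A toy sketch of that time-domain dependence (one made-up bead with a placeholder force, not the poster's actual polymer model): every update needs the values from the previous step, so the time loop itself stays serial no matter how many processors you throw at it.

    /* Toy illustration only: a single degree of freedom stepped forward in
       time. The "force" is a stand-in, not a real polymer backbone model. */
    #include <stdio.h>

    int main(void)
    {
        double x = 1.0, v = 0.0;   /* state at time t */
        const double dt = 1e-3;    /* time step       */

        for (int step = 0; step < 1000000; step++) {
            double force = -x;     /* placeholder restoring force  */
            v += force * dt;       /* v(t+dt) needs x(t) and v(t)  */
            x += v * dt;           /* x(t+dt) needs x(t) and new v */
        }

        printf("x = %f, v = %f\n", x, v);
        return 0;
    }

Each iteration reads what the previous one wrote, so there is nothing to hand to a second processor until the first one is done.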
You can use multiple processors t
See also SGI. (Score:2)
The Alpha still rocks. It just took the rest of the industry better than a decade to catch up.
Unfortunately, there's no modern Alpha to flog the x86 with.... and all SGI has to wag over the competition is gigs of texture memory (for the price of a good sized whitebox render farm).
Re:Alpha (Score:3, Interesting)
The Cell processor is highly optimized for graphical output, while the X86 is a "workhorse" number cruncher an
Re:Alpha (Score:2)
That's nice and all, but I don't want my x86 processor to render videogames... I want my videocard's ASIC to do that. Videocards are getting very impressive, particularly in comparison to consoles.
Who knows, maybe in a few years, we'll have some variation on cell processors built r
Re:Alpha (Score:2)
The compression is the CPU-intensive process. All the rest in your list uses practically no CPU time on a standard processor. Denoising can waste a lot of cycles (a small fraction compared to compression), but it doesn't have to, since
Re:Alpha (Score:3, Interesting)
- your already mentioned interrupt handling
- effectively using registers for argument passing
- no need for a real-mode switch to access the firmware/BIOS
- thread switches in less than 50 cycles
- a memory table lookup in less than 50 cycles, or occasionally even copy-on-write and dirty-page flushing without a table lookup at all
Just compare the virtual memory handling, the interrupt handling, the parameter passing and the task switch sections in the Linux
Re:Alpha (Score:2)
Re:Alpha (Score:2)
Has x86 caught up in SpecFP?
RISCs and the Itanium tended to spank x86 on FP computations due to better memory subsystems (this is no longer an advantage) and better FPUs (here I'm not sure that x86 has caught up).
Re:Alpha (Score:2)
Well, it is pretty hard to wrestle useful information out of the spec.org website... From what I found, it looks like the SpecFP of the fastest Opteron is 75% of the fastest Itanium 2, with the fastest POWER being a bit better still.
Re:Alpha (Score:2)
"huge" doesn't tell me much. Would you like to attach a number to that? I don't follow CPU development closely, but it is commonly said that the instruction translator is now really quite tiny and insignificant in comparison to the rest of the chip. Sources would be welcomed.
See above. If it's only a difference of 0.5%, there's no reason to worry about it.
Re:Alpha (Score:2)
That said, I don't think that this will improve their performance much: I bet the problem is more about the man-months needed to maintain this part than about the transistors used.
hear hear (Score:2)
Funny thing how Digital's hardware dominance seemed to just dry up and blow away, tho'. I seem to recall in the 80s and 90s it was the place to be if you were a hot and ambitious hardware hacker. Wonder what happened?
Re:Alpha (Score:2)
Unfortunately, whether or not the tech was at fault, they aren't coming back. It costs an enormous amount of money to develop a chip, and far more to develop and maintain the software that uses it, to justify so many architectures. I mean, in something similar, I wish that Matrox was "coming back", because they have features I'd actually use, but the expense
Re:Alpha (Score:2)
While that sentence may bring to mind the Itanium, I suspect that "never" might have been an improvement for such an EPIC disaster. Oh well, I remember in 2000 being skeptical of Intel managing to succeed with EPIC where 80s VLIW vendors had cratered. I also expressed serious criticism about how the P4's very long pipeline would get slaughtered on stalls from branch mispredictions, and how it would be hard to supply the peak memory bandwidth needed to keep the beast fed with data
tisk tisk (Score:5, Insightful)
Tsk Tsk Tsk.. (Score:5, Funny)
How do you know they aren't planning this as some method of helping bring an end to wars? If they get the Pentagon buying Itanium-equipped missiles, just think what they could do!
Re:Tsk Tsk Tsk.. (Score:2)
Wetting your pants on that one is such a fine way to start the day...
It's interesting to note that AMD will be close by (Score:3, Interesting)
Re:It's interesting to note that AMD will be close (Score:2)
Ft. Collins has certainly seen a lot of high-tech in the last few years. Itanium was largely developed at HP Fort Collins, and now most of those engineers are Intel engineers. Ther
Itanium vs. Ultrasparc T1 (Score:3, Interesting)
True but can they compete with the UltraSparc T1 [sun.com] (which has 32 threads compared to Intel Itanium's 2 threads)?
Re:Itanium vs. Ultrasparc T1 (Score:3, Funny)
Dual Core = 4 threads
Hyperthreading = 8 threads
Quad Processors = 32 threads
Ta Da!
Re:Itanium vs. Ultrasparc T1 (Score:2)
Re:Itanium vs. Ultrasparc T1 (Score:2)
I'd strongly suggest you to reflect upon the following words: "Intel Pentium Extreme Edition 840 'Smithfield core'"
Re:Itanium vs. Ultrasparc T1 (Score:2, Insightful)
The UltraSPARC T1, if Sun can market it well, is the ultimate webserver, database server, and J2EE CPU. I'm extremely interested to see how many T1 servers Sun sells.
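For what it's worth, a minimal sketch of the shape of that workload (placeholder handler and names, nothing Sun-specific): lots of independent, mostly-waiting requests, each needing only modest single-thread speed, which is exactly what a chip with 32 hardware threads is built to juggle.

    /* Sketch of a throughput-oriented workload: many independent requests,
       each spending most of its time waiting rather than computing. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NREQ 32   /* e.g. one software thread per T1 hardware thread */

    static void *handle_request(void *arg)
    {
        long id = (long)arg;
        usleep(1000);                        /* stand-in for waiting on the network or DB */
        printf("request %ld served\n", id);  /* stand-in for building a response          */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NREQ];

        for (long i = 0; i < NREQ; i++)
            pthread_create(&t[i], NULL, handle_request, (void *)i);
        for (int i = 0; i < NREQ; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

While one of these threads stalls on memory or I/O, the T1 simply runs another one, so the waits overlap instead of adding up.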
Re:Itanium vs. Ultrasparc T1 (Score:3, Insightful)
Re:Itanium vs. Ultrasparc T1 (Score:2, Informative)
"Sales of IBM's Unix systems, called the pSeries, grew 15% in the first quarter and 36% in the second quarter--far outpacing Sun and HP. The trend should continue in the fourth quarter--historically, industrywide Unix sales have spiked 25% during this period--and into 2006, when IBM introduces a new high-end chip called Power5+."
Re:Itanium vs. Ultrasparc T1 (Score:2)
Re:Itanium vs. Ultrasparc T1 (Score:2)
Even Sun doesn't claim that a T1 is comparable to an Itanium/POWER/SPARC for tasks which need a few fast cores, which is why they use examples like Java application servers as the primary benchmark.
The UltraSPARC T1 is not a high-end machine; it's a low-end one designed to compete against cheap x86 machines. I think the main surprise for me is that it's not available in a blade form factor.
On 90% or more of workload
Re:Itanium vs. Ultrasparc T1 (Score:3, Informative)
Well, the server I am logged into right now has 358 processes running, each of which has at least 1 LWP, which equates to at least one thread. How many people have a server running one process with one thread?
Like spe
Re:Itanium vs. Ultrasparc T1 (Score:3, Informative)
Yes, you're right about this. The T1 can only execute a single thread of floating-point ops at a time. This is why it's being marketed to the web/app server market, which doesn't do many flops. Sun is working on a new chip, code-named Rock, which will address these issues. If I remember correctly, Rock will support 8 floating-point threads at a time. It will also have some really awesome I/O lookahead features that allow a special 'thread' to read thousands of instructions ahead and
Here is the problem (Score:5, Insightful)
Re:Here is the problem (Score:3, Insightful)
What they want you to know now is that Itanium is not, repeat not a competitor for Xeon, Opteron, or the x86 architecture. Itanium's market is in high-end "mission critical computing" and as a replacement for RISC chips (meaning Power and Sparc).
Where once they pushed the 64-bitness of the chip, the x64 extensions have muddied the wa
Re:Here is the problem (Score:2)
Microsoft has recently cut back its support for Itanium severely.
As for where you buy the server: sure, you can buy an Itanium box from many vendors, but the CPU will only come from Intel. With SPARC, the processor and/or the entire system can come from Fujitsu or Sun, so you actually have _MORE_ choice if you go with SPARC...
Short Intel now (Score:5, Insightful)
Uhh, it could hardly lose share, could it? If it lost any share the product wouldn't exist. What, did they double their share from 1 to 2 users?
Ten billion is an awful lot to throw away on this loser chip.
I mean, few people actually WANT to run a different chip (and thus a different OS and different versions of apps) in their data centre than on their desktops. They used to do it because it was necessary. Now it isn't necessary, so people don't want to do it. Intel's only hope is to try and get people to use it EVERYWHERE, on their desktops too. But there ain't no hope of that either.
The point is? (Score:3, Insightful)
Intel just removed 32bit hardware support (Score:4, Informative)
Anyone want to tie this into their $10 billion push?
Let me get this straight (Score:5, Insightful)
Re:Let me get this straight (Score:3, Interesting)
That ain't hard at all to understand. Are you familiar with the term "minimizing your losses"?
Intel and HP clearly believe that by spending $10B they will generate more than $10B in revenue. In other words, if they spent no more money at all, they would lose $X; now they expect to lose $(X + 10B - Y), where Y is some number larger than $10B, so the overall loss comes out smaller.
Re:Let me get this straight (Score:5, Funny)
1) Profit!
2) ???
3) Itanium.
Apple (Score:2, Interesting)
"The history of science is cluttered with the relics of conceptual schemes that were once fervently believed and that have since been replaced by incompatible theories." -Thomas S. Kuhn
Re:Apple (Score:3, Informative)
It is rather uninspiring to see all the negativity (Score:3, Informative)
Re:It is rather uninspiring to see all the negativ (Score:2)
They seem unable or unwilling to face up to the fact that the Itanium is a complete albatross, and there's really nothing admirable about that.
Itanium isn't dead yet (Score:3, Insightful)
It seems they finally found the market:
Last week Intel backed off on hardware x86 compatibility, leaving only software emulation [com.com]. Makes sense; the market for Itanium is big iron. It is way too expensive for anything less, and the users had better run 64-bit Itanium-optimized code to get their money's worth.
Microsoft trashed all Itanium plans for the small and mid segment. They will support Itanium only where it makes sense in their product line, just Windows Server,
Intel's motherboards supporting both Xeon and Itanium have now been postponed to 2009. This makes sense too; Itanium customers won't be interested in saving a few thousand bucks on commodity motherboards.
And finally, $10 billion pumped in; good news. I'd think Itanium will be back by 2008. Architecturally, it is nothing to laugh at, at least. It's just that it lacked everything else: platform, compiler and app support.
I hope this works. (Score:4, Informative)
Its other caveats, for example poor compiler support, are issues that need to be considered carefully. I'd like to specifically address the poor compiler support. I am not concerned about this issue, for the following reasons:
1. Compilers can improve easily, with a recompile. If the architecture achieves critical mass, then more people and organizations will justify the time and effort to improve compilers for the architecture. Not only can they improve, but taking advantage of such improvements would not require replacing hardware, which makes it an issue of time.
2. The architecture is much more realistic about the guarantees that it's willing to make as a processor. One of the early complaints was that the initial generation of compilers for IA64 would generate, on average, 40% NOPs. It's important to consider a few details regarding that statement.
A. First, each clock cycle could allow the execution of up to 3 concurrent operations.
B. Second, the architecture does not insert extra NOPs transparently into the pipeline, as almost all modern processors do in the event of a pipeline data hazard. This fact can be viewed in different ways.
i. Most modern processors have to evaluate whether to insert a pipeline stall every single time an instruction is executed. This is, essentially, wasted work, because such a computation could be done by the assembler; however, it does spare the processor the burden of loading useless NOPs into the pipeline and the cache. On the other hand, minimizing the logic that a processor has to complete per cycle generally decreases the minimum amount of time necessary per clock (meaning that it could scale to higher clock speeds).
ii. The immediate question is: does reading all these NOPs out of memory cause a bigger hit to performance than making the processor calculate the data hazards? Personally, I don't know. But let's consider the idea for a moment. On both processors, let's assume that the instruction cache is fast enough to deliver data without wait states, provided the cache has the data. When your processor is prefetching well, the NOPs shouldn't be a big issue. (Except for the fact that the NOPs will now be in the binary, making the binaries larger. I consider this a moot point given how cheap modern storage is.) When your prefetcher can't anticipate correctly, though, I think the IA64 loses. Both IA64 and other modern architectures have branch predictors, so I suspect unanticipated branches that cause a pipeline flush (unavoidable) and unanticipated cache fills (unavoidable) will be mitigated roughly equally. But because the IA64 has longer instructions that aren't quite as dense, the IA64 will stall longer. Btw, I'm ignoring data stalls, to simplify my argument and because I don't think the architectural differences in the IA64 will significantly affect them. I'd enjoy being corrected on this point.
The IA64 includes predicate registers, which store the results of comparison instructions. Instructions in an IA64 'bundle' can be qualified to execute conditionally, based on the state of a certain predicate bit. This allows the IA64 to avoid some branches: the compiler/assembler can pack a bundle which includes the appropriate two instructions, each qualified to execute for a different state of the predicate register. Essentially, the processor is simultaneously issued the commands for both p
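To make the predication idea concrete, here is a hedged C sketch (function names made up; the ternary at the end is only a stand-in for the predicate-qualified instructions the hardware actually uses): both arms get computed, and a predicate decides which result survives, so there is no branch to mispredict.

    /* Illustration of if-conversion. On IA64 a compare sets a pair of
       predicate bits and both instruction streams are issued, each
       qualified by one of the bits; the C below only mimics the idea. */
    #include <stdio.h>

    /* Branchy version: a mispredicted branch costs a pipeline flush. */
    static int pick_branchy(int a, int b)
    {
        if (a > b)
            return a * 2;
        return b + 7;
    }

    /* Predicated-style version: compute both candidates, then select. */
    static int pick_predicated(int a, int b)
    {
        int p = (a > b);              /* the "predicate"      */
        int r_taken = a * 2;          /* work qualified by p  */
        int r_not_taken = b + 7;      /* work qualified by !p */
        return p ? r_taken : r_not_taken;  /* selection, no hard-to-predict jump */
    }

    int main(void)
    {
        printf("%d %d\n", pick_branchy(5, 3), pick_predicated(5, 3));
        printf("%d %d\n", pick_branchy(1, 9), pick_predicated(1, 9));
        return 0;
    }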
Re:I hope this works. (Score:2, Insightful)
ad 1: "Compilers can improve easily, with a recompile." This remark I consider extremely naive, and it really, really hurts your credibility. The fact that a compiler can be recompiled does not mean it also automatically improves its logic. The problem with all the compilers for Itanium is in the logic, not in the execution. Recompiling the compiler without improving the logic might give you a faster compiler, certainly not a better one.
In order to improve compilers for
Re:I hope this works. (Score:3, Informative)
Agreed. The point I was trying to make was that realizing the benefits of compiler improvements requires updating your software, not replacing the processor. Obviously, recompiling the same software with the same compiler isn't going to be an advantage.
B huh? Are you mixing up RISC and VLIW (EPIC) designs?
No, I'm not mixing them up. I was trying to compare their merits.
Essentially,
Re:I hope this works. (Score:2, Informative)
Ah... but you see, this is the problem: improving compiler technology is extremely hard. Of course, the big hope with VLIW and EPIC architectures was that compiler technology would improve by some huge factor. This hasn't really panned out. Most code that we run is highly data depe
$10 billion? I don't think so (Score:3, Insightful)
Assuming that each Itanium chip retails for roughly $1,000, Intel/HP could simply give away 10,000,000 chips for the investment they're making. Do they really think that there will be enough demand for these chips between now and 2010 to make up for that kind of marketing expense?
I have a hard time believing they will actually spend anything near this amount on marketing, even if the campaign is successful.
Re:$10 billion? I don't think so (Score:2)
You misunderstand the nature of the investment. $10 billion is the amount of money currently on the table from all of the members of the Itanium Solutions Alliance combined, in terms of commitments to produce Itanium hardware, software, and support options. Probably some fraction of that amount will be spent on marketing, but most of it is going to be spent on R&D and manufacturing.
Re:$10 billion? I don't think so (Score:2)
How many chips have they sold so far? (Score:2)
Let's say that Intel contributes half, so USD 5 000 000 000.
Let's say that Intel nets USD 5 000 per chip (probably WAY overestimating sales price and underestimating costs)
Intel would need to sell 1 000 000 chips to make this additional investment break even.
This excludes opportunity cost, cannibalism of existing Xeon sales (though, it's probably the other way around), and probably a host of other things.
It looks like sketchy math to me. To me, it seems obvious
In other news .. flogging a dead horse.. (Score:2, Funny)
$10 billion all itanic chips ever sold?? (Score:3, Insightful)
Performance (Score:4, Interesting)
http://www.tpc.org/tpcc/results/tpcc_perf_results
and THE top one for clusters.
It makes sense for such an investment to go to
a) improving the fabrication facilities - achieving lower defect rates and reducing price;
b) improving the fabrication process - aiming at higher clock rates
Remember also the recent announcement that an Itanium CPU will no longer contain essentially a whole IA-32 CPU.
~velco
ITER? (Score:3, Interesting)
AAAARRRRRGGHHHH!!! (Score:2, Interesting)
Wow, that's a lot of money. (Score:2)
Aw, jeez... (Score:4, Insightful)
AMD is starting to kick Intel's pants in the most lucrative arena, small- and medium-sized servers. Instead of trying to compete technologically in that area (as opposed to just marketing), they're throwing good money after bad into a failing/failed architecture which only makes sense for a few highly-specialized applications. If it weren't for the fact that most holders of Intel stock know next to nothing about the industry, I would expect a cry for a change of leadership.
Sure, there are a few supercomputing-type applications where the Itanium really, really shines - but they're sufficiently specialized that Intel just doesn't move a very large number of CPUs.
Like I've said before, Intel is in a bind because of its own laziness and arrogance. Look at one of the primary advantages of the A64/Opteron architecture - the on-die memory controller. More memory bandwidth, lower latencies, and a memory subsystem that scales with the number of CPUs. Big-iron vendors proved that technology long before AMD decided to use it. Yet Intel has always enjoyed the superior manufacturing side of the business - if *anyone* could afford to put those extra transistors on the die, it was Intel. Since they're almost always a step ahead of AMD in making smaller transistors, they had the *ability* to do something along those lines long before AMD did, but they relied on the old tradition of more megahertz and lots of marketing. I don't think this move is much different; they're putting their efforts in the wrong direction.
steve
Intel is up to something... (Score:4, Interesting)
Recently an article [anandtech.com] was published on AnandTech that puts the Itanium in a new light: it's actually very efficient in terms of die-area utilization. Combine this with Intel's recent announcement that they are scrapping the hardware x86 compatibility on the Itanium, which takes up a fair bit of die space, and you have a very small core of the sort that's absolutely perfect for multi-core applications.
Itanium needs a lot of cache to function well, for reasons the aforementioned article describes, but it's not unreasonable to assume that Intel's shared-cache technology from Yonah will make its way into Itanium.
This thing might be trying to compete with chips like the Ultrasparc T1.
You know... (Score:4, Funny)
people dont buy it (Score:2, Insightful)
Intel and HP Commit $10 billion to Boost Itanium (Score:2)
Itanium is a great processor (Score:4, Insightful)
The first problem is one of marketing. HP/Compaq is a screwed-up company, the merger of two wholly incompatible companies that could never work together properly. Put this together with the fact that they canceled the Alpha, another great processor, and you can see that selling Itanium is more about politics than engineering. The next problem is pricing. For a single-chip solution, Itanium is awesome, if you don't count the fact that you could buy multiple Opterons for that price and achieve more performance with properly threaded code.
There are, of course, technical problems. Itanium is a heat monster. They didn't design it with power consumption and heat dissipation in mind. Did you know that the Itanium's top speed isn't limited by wire delays like it is in most other chips? No. It could actually run a lot faster, were it not for the fact that they can't get the heat off the chip fast enough. Another problem is the compilers. Static scheduling has its limitations, but the real limitation is that Itanium compilers can't manage to do even decent scheduling. It's too complicated. Much of Itanium's performance is theoretical. Given a small piece of C code, you can recode it in assembly and get it to run 10 times faster. If only the compilers were as smart as the assembly coder.
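One concrete example of why static scheduling is hard for the compiler and easy for the assembly coder (a generic C sketch, not tied to any particular Itanium compiler): unless the compiler can prove two pointers never overlap, it has to keep the loads and stores in program order, while the human simply knows the buffers are distinct.

    /* Possible aliasing between dst and src forces conservative code:
       each store might feed the next load, so little can be reordered. */
    #include <stddef.h>

    void scale_maybe_aliased(float *dst, const float *src, float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

    /* With restrict the programmer hands over the knowledge the assembly
       coder would use: loads from later iterations can now be hoisted and
       independent operations packed into the same issue group. */
    void scale_restrict(float *restrict dst, const float *restrict src,
                        float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

The restrict-qualified version lets the compiler do what the hand coder would do by eye, which is exactly the kind of information a static scheduler is starved of.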
Itanium was a great idea. It's just being executed poorly, and the R&D is being put into the wrong place. The architecture is there. It's great. Now, get the price down, design it for lower heat dissipation, and get some people working on that damn compiler!
My guess on direction (Score:3, Insightful)
Re:Why? (Score:2, Insightful)
Re:AMD64 (Score:2)
Well, you won't have to worry about being cold in the winter....
Re:AMD64 (Score:2)
Re:AMD64 (Score:3, Insightful)
I don't see the point of writing a super-compiler that can schedule C code at compile time, when processors can do
Re:AMD64 (Score:5, Informative)
There is a market for Itanic in some traditional supercomputing applications, but it is a relatively small market and has never been a big growth market. I really doubt Intel and HP will ever recover the billions they've already sunk into Itanic, let alone another $10 billion.
I imagine the people at AMD are dancing in the streets at this news, because Intel and HP are going to keep throwing even more good money after bad trying to salvage this dog. It's money that they won't be investing in R&D in markets that really matter.
AMD can continue its push to dominate servers, workstations and desktops. If they could crack laptops, phones and embedded apps, Intel would be in serious trouble.
Re:AMD64 (Score:2)
Flashnews: calling it "Itanic" just became a bit lamer.
Re:AMD64 (Score:4, Informative)
If any of you have ever put together a computer that has a bad part, you know it's sometimes really hard to figure out what caused the problem. The systems that Itaniums usually go into have the error detection and error logging to pinpoint exactly where problems lie. This is the reason Oracle DBs use these types of processors. It doesn't make sense for the common user to use Itanium, but companies like Amazon and Visa want these systems more for the reliability features than for the speed.
Re:Ah, capitalism. (Score:2)
Is there really a $140 billion opportunity here? Does Itanium really offer something so superior to other available platforms that its creators are justified in believing they can acquire a large fraction of the market?
Itanium, a high-end processor, was once expected to sweep the server world. But because of delays, initial performance issues and software incompatibilities...
No mention of cost. Itanium is expensive. This fact obviates most of the serv
Re:Ah, capitalism. (Score:3, Interesting)
Yes. Absolutely killer parallel performance.
For certain tasks (such as matrix operations), it can do one operation simultaneously on 100 registers (the Itanium has around 300 registers). It's pretty specialised, but for certain tasks it can be a speed demon.
A lot of the performance
Re:They can't not do this (Score:2)