
Intel and HP Commit $10 billion to Boost Itanium

YesSir writes "Support for the high-end processor that has had difficulties catching on is coming in from its co-developers, Intel and HP. 'The $10 billion investment is a statement that we want to accelerate as a unified body,' said Tom Kilroy, general manager of Intel's digital enterprise group."
  • by ackthpt ( 218170 ) * on Friday January 27, 2006 @12:38AM (#14576384) Homepage Journal
    Last Gasp for Big Iron?
    So as I'm reading this, there's a big plug for the AMD Opteron just below the article. That would appear to me to be the real threat to the Itanium: the same thing that has effectively killed big iron, inexpensive commodity hardware. Sink a few thousand into Opteron systems and run what you already have, or sink far larger amounts into some gobbledygook system that won't run what your multiprocessor system does, except under software emulation. Sorry, HP/Intel and everyone else dumping money down this rabbit hole, I think you've lost the plot. Today's supercomputers are parallel computing done with 64-bit x86 processors, like the AMD Opteron. The glue is in the software, not in big fat chunks of expensive silicon.

    If you're still not convinced, I might have a few meg of core to sell you.

    • by dnoyeb ( 547705 ) on Friday January 27, 2006 @12:45AM (#14576424) Homepage Journal
      In the words of The Gambler: "You gotta know when to fold 'em."
      • In the words of The Gambler: "You gotta know when to fold 'em."

        It smacks of a prior business arrangement HP et al. agreed to back in the days of yore, when Itanium was supposed to be "the next big thing" and Intel was telling everyone they wouldn't need the 64-bit CPUs AMD was gearing up to peddle. Intel's calling in all those promissory notes after making compilers and stuff available for so long. Given their druthers, I think everyone else would rather not.

      • by countach ( 534280 ) on Friday January 27, 2006 @12:52AM (#14576455)
        This ain't time for Intel to fold 'em.

        This ain't time for Intel to walk away.

        This is time for Intel to RUN RUN RUN!!!
      • by birder ( 61402 ) on Friday January 27, 2006 @09:04AM (#14578054) Homepage
        Just like in gambling, it's not about the money already in the pot; that's already gone. It's about how much extra you're willing to spend to win it. The pot odds don't look good to me.

        I've been an HP-UX sysadmin for 9 years (I help manage 16+ large servers). We started buying a number of Itanium boxes and have moved some apps to them. The IA64 chip is much faster than PA-RISC (40-60% improvements for us) and overall cheaper to buy.

        However, the Opteron machines we have running Linux blow them away at less than half the price, fully loaded. I honestly don't see IA64 lasting another 2-3 years, and I'm already making plans to migrate what we can from PA-RISC to Linux-based machines instead of IA64.

        I really like HP-UX. It's not the most robust OS, but it's been rock solid for us over the years. Very, very expensive, like all closed Unix vendors, but for a large business it was money well spent at the time compared to Windows NT.
    • by Chas ( 5144 ) on Friday January 27, 2006 @12:55AM (#14576465) Homepage Journal
      This is more along the lines of post-mortem muscle contractions.

      I'm sure that SOMEONE out there is willing to pour money down the toilet for this platform. And they'll make HP/Intel very very happy.

      Then again, there are people who're into snorting drain cleaner too...
    • by Anonymous Coward
      I would have to agree with this. HP is now too tied into the chip to pull out. I think if they had the choice they would drop it, but they have put so much money in, and now they are going to dump more money into a CPU that is really quite far from being what Intel said it would be.

      What Intel and HP need to do is admit that the chip is a waste and just let it die. Like Dumpy the waste man from Drawn Together, you can hear the Itanium say "KILL ME."

    • by timeOday ( 582209 ) on Friday January 27, 2006 @01:21AM (#14576579)
      Last Gasp for Big Iron?
      No, they must be planning to force it into the mainstream. They'd never recoup $10e9 just selling big-iron processors. Maybe they think a smaller manufacturing process will finally make Itanium affordable, or that more investment in compilers will make it work better.
    • Last Gasp for Big Iron? [...]
      Today's supercomputers are parallel computing done with 64-bit x86 processors, like the AMD Opteron.

      Clusters are only really good for embarrassingly parallel problems. The interconnects just can't be as fast as a local bus.

      How's this for an alternative: "Commodity Big Iron"?

      Why can't a supercomputer be based largely on off-the-shelf CPUs, RAM, drives, etc.? I know important pieces are missing, but it should be possible to open this stuff up and have it become a commodity.

      • by ThePhilips ( 752041 ) on Friday January 27, 2006 @04:30AM (#14577143) Homepage Journal
        > "Clusters are only really good for embarassingly parallel problems."

        You are overall sort of right about that. It's just science isn't standing still and most of new algorithms are specifically invented to be parallelized.

        Most problems of physics are solved with matrices. And matrices are of course are easy to parallelize. And physics - is the only who are buying most of big iron anyway.

        Nowadays, most of the weather prediction tasks, astronomical tasks, optical tasks, micromeasurements tasks are also optimzed for clusters - not big iron. It's not about top performace - it's about price/performance ratio. For the same money people can buy cluster with e.g. 10 times more raw performance to run unoptimal (e.g. 2-3 times slower) algorithms - but task are done quicker. And cheaper. Yeah, clusters have higher latencies - but they are still dominated by batch jobs, not interactive jobs. Big Iron has better interconnects - but the redundant interconnects take lion share of such system costs.
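        To make the "matrices are easy to parallelize" point concrete, here is a minimal sketch (mine, not the parent poster's) of a row-partitioned matrix-vector multiply in C with OpenMP: every row's dot product is independent, so the rows can be split across threads on one box or, with the same decomposition, across nodes in a cluster.

```c
/* Minimal sketch of why dense linear algebra parallelizes so easily:
 * each row of y = A*x is an independent dot product.
 * Build with: cc -O2 -fopenmp matvec.c (compiles serially without -fopenmp) */
#include <stdio.h>

#define N 1024

int main(void) {
    static double A[N][N], x[N], y[N];

    /* Simple test data: A = 2*I, x = all ones, so y should be all twos. */
    for (int i = 0; i < N; i++) {
        x[i] = 1.0;
        for (int j = 0; j < N; j++)
            A[i][j] = (i == j) ? 2.0 : 0.0;
    }

    /* Rows are independent: OpenMP splits them across threads here;
     * a cluster code would hand each node a block of rows instead. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            sum += A[i][j] * x[j];
        y[i] = sum;
    }

    printf("y[0] = %g, y[N-1] = %g\n", y[0], y[N - 1]);
    return 0;
}
```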

        In fact, the main reason this happened (clusters taking over from big iron) is RAM prices. In my university days (early-to-mid '90s), we were all occupied with the shared-memory problem: RAM was very expensive. Now people go to the store, pick up several gigabit NICs and several GBs of RAM, and pay a nickel for all of it.

        Ask anyone in computer science now: everyone has started throwing RAM at the latency problems of clusters. It looks bad on paper and in theory, but in practice it just works.

        P.S. On topic: IA-64 has great performance. But again, on a price/performance scale it loses immediately to Intel's own Xeons and AMD's Opterons. Intel steadfastly refuses to shift its Itanic focus from features to more affordable prices. The story has been covered quite well by The Register; check the posted links.
    • I think you're very wrong in your last comment. Look at the computers currently nailing the Top500 and it's Blue Gene: #1 and #2 are both Blue Gene-based machines. If you've seen the architecture of those machines, they're anything but generic 64-bit x86. The density of these things is amazing; just look it up. You're looking at thousands of processors per rack.

      Also, you can get some pretty cool (ahem) machines running Itanium (SGI Altix for example) running 512 processors and 128Tb RAM in a single ima
    • The glue is in the software, not in big fat chunks of expensive silicon.

      The software doesn't necessarily work so well for efficiency. Even the Opteron systems need "big fat chunks of expensive silicon" to run more than eight physical processor chips (currently, 16 cores).
    • I changed the title to better reflect the truth.

      Big iron is just fine. The deal is that Intel had delusions as to what it meant and tried to apply PC standards to a world which lives at a much higher level.

      PowerPC architecture is alive and well in true big iron. It will even find its way into IBM mainframe technology. Server farms are not big iron; they are glorified PCs with a little more reliability, yet they suffer from all the issues PCs normally do.

      The big iron at work here, Tandem, IBM Mainframe, an
    • What I'm thinking is that AMD must be having a big party. If Intel is spending another $10B, throwing good money after bad, one should think about what they are NOT doing. What they are not doing is spending that $10B on x86. This can only make their up-and-coming competitor quite happy.
  • Alpha (Score:5, Funny)

    by linguae ( 763922 ) on Friday January 27, 2006 @12:38AM (#14576387)

    Too bad HP won't spend $$$ to bring back the Alpha.

    I miss architecture diversity....

    • Re:Alpha (Score:5, Interesting)

      by MADCOWbeserk ( 515545 ) on Friday January 27, 2006 @01:02AM (#14576497)
      Couldn't agree more. Alpha was a great platform, leaps and bounds above any of its contemporaries in terms of speed. They were running at 125 MHz, and getting more done per cycle, when the 66 MHz Pentium came out. The Compaq-DEC merger hurt it badly, then the HP-Compaq merger killed it. Itanic has always been a ponderous mess. Had Alpha gotten one tenth the R&D budget that Itanium got, it would be server king. Itanium (please don't try to prove me wrong with benchmarks) gets wiped by POWER and SPARC and will die a lame-duck, kicking-and-screaming death.

      Could Jesus microwave a burrito so hot that he himself could not eat it?
    • Re:Alpha (Score:3, Funny)

      by gbobeck ( 926553 )
      I also agree. I love my DEC Multia -- it does a great job of heating my room.

      More seriously, though... The Alpha was kicking ass and taking names back in the day. 64-bit and ran at 200 MHz when the Pentium was barely able to fart out 66 MHz.

      The funny thing is that most places just don't run Itanium chips - they use Xeon chips instead.
    • Re:Alpha (Score:5, Interesting)

      by evilviper ( 135110 ) on Friday January 27, 2006 @01:57AM (#14576707) Journal
      Too bad HP won't spend $$$ to bring back the Alpha.

      I miss architecture diversity....

      It seems to me that just about all the huge advantages that alternative architectures (like the Alpha) held over x86 have been washed away in the past few years.

      64-bit memory space. Insanely large cache. Very low-latency access to RAM. Incredible memory throughput. PCI-X/PCI Express slots on cheap motherboards. Seriously high-end graphics. DMA. SMP. Built-in 1000Mbps NICs. RAID. etc.

      What advantages could something like Alpha have over x86 now? A few years ago, I was anxious to jump ship to another platform, but with the introduction of the Opteron and kin, I'd say I'm quite happy with x86 now.

      The only feature I really want now is a new way to handle interrupts... Then simple things like copying CDs, or a little network traffic won't bring PCs to a crawl. Perhaps add a socket for an FPGA or other simple processor to specifically handle those tasks, like the math coprocessors of the old days.
      • Re:Alpha (Score:5, Interesting)

        by Quadraginta ( 902985 ) on Friday January 27, 2006 @02:06AM (#14576740)
        Well...my work (scientific computing) put a premium on sheer scalar speed, and for that the RISC architecture was great and the x86 CISC paradigm a drag. Once you learned how to write code in a certain way, DEC's compilers could make amazingly fast code out of it for the Alpha.

        In case you're wondering, no, parallel computing was never a good option. There's a large class of scientific problems that just don't work very well in parallel, because of large-wavelength correlations that make it painful in the extreme to write a parallel algorithm, if you can do it at all.
      • Everything you've said about processors? SGI used to be the KING of 3d graphics processing. What happened? Cheapass PC hardware caught up, broke even, and eventually lapped SGI's technology.

        The Alpha still rocks. It just happened to take the rest of the industry better than a decade to catch up.

        Unfortunately, there's no modern Alpha to flog the x86 with.... and all SGI has to wag over the competition is gigs of texture memory (for the price of a good sized whitebox render farm).
      • Re:Alpha (Score:3, Interesting)

        by mcrbids ( 148650 )
        If you think that the x86 platform has "caught up" with all the others, you are dead wrong. x86/32 or x86/64 comes in dead last in terms of total processing on any particular task. x86 is designed to do a large variety of tasks. Given a narrower scope, x86 gets blown apart in total processing power by the likes of the Cell processor, used in the PlayStation 3, at the cost of genericity.

        The Cell processor is highly optimized for graphical output, while the X86 is a "workhorse" number cruncher an
          • We aren't talking about general-purpose vs. custom chips. We're talking about x86 vs. (e.g.) Alpha. Multipurpose chip vs. multipurpose chip.

            But, if you want to produce realtime graphics for video games, Cell is the way to go...

          That's nice and all, but I don't want my x86 processor to render videogames... I want my videocard's ASIC to do that. Videocards are getting very impressive, particularly in comparison to consoles.

          Who knows, maybe in a few years, we'll have some variation on cell processors built r

      • Re:Alpha (Score:3, Interesting)

        by ooze ( 307871 )
        I can give you a list of advantages of other architectures over x86.

        - your already-mentioned interrupt handling
        - effectively using registers for argument passing
        - no need for a real-mode switch to access firmware/BIOS
        - thread switches in less than 50 cycles
        - a memory table lookup in less than 50 cycles, or in some cases even COW and dirty-page flushing without a table lookup at all

        Just compare the virtual memory handling, the interrupt handling, the parameter passing, and the task switch sections in the Linux
        • Even ignoring everything else good about the Alpha, the PALcode concept is one that should not be allowed to die. I laugh whenever I look at Xen code. It's a huge pile of hacks designed to make a brain-dead architecture act like a sane one. To implement virtualisation (true virtualisation, not just paravirtualisation) on Alpha all you needed to do was replace the PALcode with an image that added an extra layer of indirection. You could run VMS (ported from VAX) and Windows (ported from x86) on the same
      • >What advantages could something like Alpha have over x86 now?

        Has x86 caught up in SpecFP?
        RISCs and Itanium tended to spank x86 on FP computations due to a better memory subsystem (this is no longer an advantage) and a better FPU (here I'm not sure that x86 has caught up).
          • Has x86 caught up in SpecFP?

          Well, it is pretty hard to wrestle useful information out of the spec.org website... From what I found, it looks like the SpecFP of the fastest Opteron is 75% of the fastest Itanium 2, with the fastest POWER being a bit better still.
    • Man, that was a sweet processor. I recall comparing my spanking-new DEC AlphaStation to the Cray down at the San Diego Supercomputer Center in 1995, and there was just about no difference. That machine flew.

      Funny thing how Digital's hardware dominance seemed to just dry up and blow away, tho'. I seem to recall in the 80s and 90s it was the place to be if you were a hot and ambitious hardware hacker. Wonder what happened?
    • Maybe it's just a joke, but I think people need to give it up. I own an Alpha but haven't used it in a couple years. I am not going back to it either.

      Unfortunately, whether or not the tech was at fault, they aren't coming back. It costs an excessive amount of money to develop a chip, and far more to develop and maintain the software that uses it, to justify so many architectures. Similarly, I wish Matrox were "coming back," because they have features I'd actually use, but the expense
  • tisk tisk (Score:5, Insightful)

    by sardonic2 ( 576701 ) on Friday January 27, 2006 @12:39AM (#14576393) Homepage
    Seems just a bit too late. They should donate to help feed some starving children, not starving platforms.
    • by ackthpt ( 218170 ) * on Friday January 27, 2006 @01:29AM (#14576605) Homepage Journal
      Seems just a bit too late. They should donate to help feed some starving children, not starving platforms.

      How do you know they aren't planning this as some method of helping bring an end to wars? If they get the Pentagon buying Itanium-equipped missiles, just think what they could do!

      AFGHANISTAN - YBN: Today it was confirmed that Osama bin Laden was killed when a cruise missile manufactured by Strongbad Industries asploded near his hideout. The cruise missile was equipped with an HP computer guidance system which employed an Intel Itanium processor. The missile missed the target, but Mr. bin Laden was struck in the head by the processor's heatsink and died later from the injury.
      • > The missile missed the target, but Mr. Bin-laden was struck in the
        > head by the processor's heatsink and died later from the injury.

        Wetting your pants on that one is such a fine way to start the day...

  • by Nihilist Hippie ( 905325 ) on Friday January 27, 2006 @12:42AM (#14576409)
    Disclaimer: I'm not hyping Northern Colorado as being "the next Silicon Valley". Intel is taking over the old Celestica plant next to the HP campus in Ft. Collins, Colorado, and AMD is looking to open up about 200 jobs in the same area. Interesting move... http://circuitsassembly.com/cms/content/view/2709/94/ [circuitsassembly.com]
  • by ChrisGilliard ( 913445 ) <christopher.gill ... m minus caffeine> on Friday January 27, 2006 @12:43AM (#14576414) Homepage
    Itanium has been taking share from both IBM power and Sun Sparc.

    True, but can they compete with the UltraSPARC T1 [sun.com] (which has 32 threads compared to Intel Itanium's 2 threads)?
    • but can they compete
      Of course they can.

      Dual Core = 4 threads
      Hyperthreading = 8 threads
      Quad Processors = 32 threads

      Ta Da!
        • Last time I checked, hyperthreading wasn't enabled on the Intel dual-core chips. Even if it is enabled on the latest Intel chip, that is only 4 threads total. That's nowhere near the 32 threads in the UltraSPARC T1. Sun's next generation is Rock, which will have 64 threads in a single chip! By the time Intel gets to 8 threads per chip, Sun will be at 64.
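          For anyone keeping score, the arithmetic being argued over is just sockets x cores per chip x hardware threads per core. A throwaway sketch in C, using the figures tossed around in this thread rather than vendor spec sheets:

```c
#include <stdio.h>

/* Hardware threads visible to the OS = sockets * cores per chip * threads per core.
 * The configurations below are the ones tossed around in this thread,
 * not official vendor numbers. */
static int hw_threads(int sockets, int cores_per_chip, int threads_per_core) {
    return sockets * cores_per_chip * threads_per_core;
}

int main(void) {
    printf("Dual-core chip, 2-way multithreading, 1 socket : %d threads\n", hw_threads(1, 2, 2));
    printf("Same chip in a 4-socket box                    : %d threads\n", hw_threads(4, 2, 2));
    printf("UltraSPARC T1 (8 cores x 4 threads)            : %d threads\n", hw_threads(1, 8, 4));
    return 0;
}
```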
        • Last time I checked hyperthreading wasn't enabled on the Intel dual core chips.

          I'd strongly suggest you reflect upon the following words: "Intel Pentium Extreme Edition 840 'Smithfield core'"

    • by Anonymous Coward
      Itanium is so low volume, how could it possibly put a dent in SPARC and POWER? When a person looks for a new computer to buy, Opteron is on the short list, followed by PPC and SPARC. Itanium? Okay, for physicists, perhaps.

      The UltraSPARC T1, if Sun can market it well, is the ultimate webserver, database server, and J2EE CPU. I'm extremely interested to see how many T1 servers Sun sells.
    • Itanium has been taking market share from Power????

      "Sales of IBM's Unix systems, called the pSeries, grew 15% in the first quarter and 36% in the second quarter--far outpacing Sun and HP. The trend should continue in the fourth quarter--historically, industrywide Unix sales have spiked 25% during this period--and into 2006, when IBM introduces a new high-end chip called Power5+."
  • by IntelliAdmin ( 941633 ) * on Friday January 27, 2006 @12:45AM (#14576425) Homepage
    The chip was made to compete with "Big Iron" servers - the only problem is that it is marketed to the Windows consumer market, and that is who looks at it when making purchasing decisions. AMD has really started to eat up this space, and if Intel does not start to turn this boat around fast, they could really get hurt when 64-bit CPUs are commonplace.
    • by PCM2 ( 4486 )
      The marketing certainly has been a problem. I attended this Itanium press event, and fixing the marketing is high on the agenda for the Itanium Solutions Alliance.

      What they want you to know now is that Itanium is not, repeat not a competitor for Xeon, Opteron, or the x86 architecture. Itanium's market is in high-end "mission critical computing" and as a replacement for RISC chips (meaning Power and Sparc).

      Where once they pushed the 64-bitness of the chip, the x64 extensions have muddied the wa
      • Linux and BSD also run on SPARC...
        Microsoft has recently severely cut back its support for Itanium.

        As for where you buy the server, sure, you can buy an Itanium box from many vendors, but the CPU will only come from Intel. With SPARC, the processor and/or the entire system can come from Fujitsu or Sun; you actually have _MORE_ choice if you go with SPARC...
  • Short Intel now (Score:5, Insightful)

    by countach ( 534280 ) on Friday January 27, 2006 @12:45AM (#14576426)
    >"Itanium has been taking share from both IBM power and Sun Sparc."

    Uhh, it could hardly lose share, could it? If it lost any share the product wouldn't exist. What, did they double their share from 1 to 2 users?

    Ten billion is an awful lot to throw away on this loser chip.

    I mean, few people actually WANT to run a different chip (and thus a different OS and different versions of apps) in their data centre than on their desktops. They used to do it because it was necessary. Now it isn't necessary, so people don't want to do it. Intel's only hope is to try and get people to use it EVERYWHERE, on their desktops too. But there ain't no hope of that either.
  • The point is? (Score:3, Insightful)

    by Sensi ( 64510 ) on Friday January 27, 2006 @12:45AM (#14576427)
    What's the point of running "Big Iron" and/or Itanium if we have to deal with hacks, patches, and headaches to run real-world production applications like SharePoint, SQL, and other Office collaboration suites?
  • by TubeSteak ( 669689 ) on Friday January 27, 2006 @12:52AM (#14576454) Journal
    http://www.digit-life.com/archive.shtml?2006/0125 [digit-life.com]
    an Intel spokesperson confirmed that the Montecito platform, which will premiere the company's next-generation 64-bit Itanium architecture, will dispense with executing all 32-bit instruction set applications on-die, prompting customers to opt instead for software-based emulation which Intel promises will be faster anyway.
    The rest of the article is quite interesting. They claim that 32-bit software emulation will outperform their old hardware implementation by "[greater than a factor of three]".

    Anyone want to tie this into their $10 billion push?
  • by dgrgich ( 179442 ) <drew@grgich . o rg> on Friday January 27, 2006 @12:56AM (#14576478)
    Intel and HP spend untold sums of cash developing and rolling out a chip that comparatively few use. Thus, the market has effectively told them that there is not a large need for this behemoth. So how do they respond? A pledge to spend $10 billion more? How does this make sense again?
    • A pledge to spend $10 billion more? How does this make sense again?

      That ain't hard at all to understand. Are you familiar with the term "minimizing your losses"?

      Intel and HP clearly believe that by spending $10B they will generate more than $10B in revenue. In other words, if they spent no more money at all, they would lose $X; now they expect to lose $(X + 10B - Y), where Y is some number larger than $10B, so the net loss shrinks.
    • by ceeam ( 39911 ) on Friday January 27, 2006 @03:17AM (#14576941)
      To rephrase what somebody else wrote here:

      1) Profit!
      2) ???
      3) Itanium.
  • Apple (Score:2, Interesting)

    by apt_user ( 812814 )
    Has anyone wondered what relationship Apple may now have with the Itanium? I understand they're licensing some nice semiconductor IP from the now-defunct PowerPC G6 to Intel for future designs? Could this relationship be the breath of fresh air that the Itanic needs to float?

    "The history of science is cluttered with the relics of conceptual schemes that were once fervently believed and that have since been replaced by incompatible theories." -Thomas S. Kuhn

    • Re:Apple (Score:3, Informative)

      by be-fan ( 61476 )
      The G6 would've been a POWER5 derivative. The POWER5 is a massively out-of-order RISC. Itanium is an in-order VLIW. They have nothing in common. The IP would've been useless.
  • by Superfarstucker ( 621775 ) on Friday January 27, 2006 @01:37AM (#14576629)
    Sure, it is a huge sum of cash, and perhaps the shareholders might get more short-term benefit out of investing the same sum into commodity microprocessor R&D, but the Itanium could eventually pay off in a big way. It seems that most people posting here are just as impatient as shareholders when it comes to results; they want them NOW! Good things can't always manifest themselves in a short period of time, and I think it is impressive that Intel & HP continue to invest money in something that has yet to produce any tangible benefits over existing architectures. I'm willing to bet that x86 isn't the omega of processor design ideology, and Itanium may not be either, but Intel & HP seem to believe it is a step in the right direction. Very few people who post here have the knowledge necessary to even begin assessing whether such a design may ever pan out, and it appears the jury is still out among those who have the capacity to decide. Meanwhile, Apple continues to receive gratuitous praise for releasing shiny white computers with chamfered corners. Maybe if Intel & HP invested $10B into cosmetic processor design they would be received more favorably by the press.
    • Look, the only reason why Intel and HP keep sinking money into this dog is because they've already spent incredible amounts on it and have been telling everyone, for about a decade now, that it's the Future of Enterprise Computing.

      They seem unable or unwilling to face up to the fact that the Itanium is a complete albatross, and there's really nothing admirable about that.
  • by cyberjessy ( 444290 ) <jeswinpk@agilehead.com> on Friday January 27, 2006 @01:45AM (#14576662) Homepage
    In spite of all the negative publicity, Itanium is quite far from dead, and the recent course corrections make a lot of sense. What really threw Itanium off course was Intel's decision to push it into even small and medium systems. This meant lost marketing focus and some lame architectural decisions for x86 compatibility. Itanium has nothing in common with x86 except that it's made by Intel.

    It seems they finally found the market:
    Last week Intel walked back hardware x86 compatibility in favor of software emulation only [com.com]. Makes sense; the market for Itanium is big iron. It is way too expensive for anything less, and users had better run 64-bit Itanium-optimized code to get their money's worth.

    Microsoft scrapped all Itanium plans for the small and mid segments. They will support Itanium only where it makes sense in their product line: just Windows Server, the 64-bit .NET Framework, and SQL Server 2005 (not Exchange Server, BizTalk Server, etc. Earlier we even had Windows XP running on Itanium. Sigh!).

    Intel's motherboards supporting both Xeon and Itanium have now been postponed to 2009. This makes sense too; Itanium customers won't be interested in saving a few thousand bucks on commodity motherboards.

    And finally, $10 billion pumped in; good news. I'd think Itanium will be back by 2008. Architecturally, it is nothing to laugh at, at least. It just lacked everything else: platform, compiler, and app support.
  • I hope this works. (Score:4, Informative)

    by megabeck42 ( 45659 ) on Friday January 27, 2006 @01:52AM (#14576692)
    Truth be told, IA64 is a fantastically better architecture than IA32 or x86-64. Some of its current caveats, for example suboptimal software support and high costs, are not due to technical qualities or drawbacks of the architecture. Once the architecture reaches a critical mass and reasonable market acceptance, these issues should disappear. (More chips -> more people will target software for it; more chips produced in volume -> less cost per chip; etc.)

    Its other caveats, for example poor compiler support, are issues that need to be considered carefully. I'd like to specifically address the poor compiler support. I am not concerned about this issue, for the following reasons:

    1. Compilers can improve easily, with a recompile. If the architecture achieves a critical mass, then more people and organizations will justify the time and effort to improve compilers on the architecture. Not only can they improve, but taking advantage of such improvements would not require replacing hardware, which makes it an issue of time.

    2. The architecture is much more realistic about the guarantees that it's willing to make as a processor. One of the early complaints was that the initial generation of compilers for IA64 would generate, on average, 40% NOPs. It's important to consider a few details regarding that statement.
    A. First, each clock cycle could allow the execution of up to 3 concurrent operations.
    B. Second, the architecture is not inserting extra NOPs transparently into the pipeline, as almost all modern processors do in the event of a pipeline data hazard. This fact can be viewed in different ways.
    i. Most modern processors have to evaluate whether to insert a pipeline stall every single time an instruction is executed. This is essentially wasted work, because such a computation could be done by the assembler; however, it does spare the processor the burden of loading useless NOPs into the pipeline and the cache. On the other hand, minimizing the logic that a processor has to complete per cycle generally decreases the minimum amount of time necessary per clock (meaning that it could scale to higher clock speeds).
    ii. The immediate question is: does reading all these NOPs out of memory cause a bigger hit to performance than making the processor calculate the data hazards? Personally, I don't know. But let's consider the idea for a moment. On both processors, let's assume that the instruction cache is fast enough to deliver data without wait states, assuming the cache has the data. When your processor is prefetching well, the NOPs shouldn't be a big issue. (Except for the fact that the NOPs will now be in the binary, making the binaries larger. I consider this a moot point given the low cost of modern storage.) When your prefetcher can't anticipate correctly, though, I think the IA64 loses. Both IA64 and other modern architectures have branch predictors, so I suspect unanticipated branches which cause a pipeline flush (unavoidable) and unanticipated cache fills (unavoidable) will be mitigated roughly equally, but because the IA64 has longer instructions that aren't quite as dense, the IA64 will stall longer. Btw, I'm ignoring data stalls, to simplify my argument and because I don't think the architectural differences in the IA64 will significantly impact them. I'd enjoy being corrected on this point.
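    To put a rough number on that density point, here is a back-of-the-envelope sketch in C. The 16-byte, 3-slot bundle is real IA-64; the 40% NOP figure is the complaint quoted above; the ~3.5-byte average x86 instruction length is my ballpark assumption, not a measured figure.

```c
#include <stdio.h>

int main(void) {
    /* Rough code-density comparison.
     * IA-64: a bundle is 128 bits (16 bytes) holding 3 instruction slots.
     * The 40% NOP rate is the early-compiler complaint quoted above;
     * the 3.5-byte average x86 instruction length is a ballpark assumption. */
    const double bundle_bytes  = 16.0;
    const double slots         = 3.0;
    const double nop_fraction  = 0.40;
    const double x86_avg_bytes = 3.5;

    double useful_per_bundle = slots * (1.0 - nop_fraction);
    double ia64_bytes_per_op = bundle_bytes / useful_per_bundle;

    printf("IA-64: %.1f useful ops per 16-byte bundle -> %.1f bytes per useful op\n",
           useful_per_bundle, ia64_bytes_per_op);
    printf("x86 (assumed avg): %.1f bytes per op -> roughly %.1fx denser\n",
           x86_avg_bytes, ia64_bytes_per_op / x86_avg_bytes);
    return 0;
}
```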
    The IA64 includes a predicate register, which stores the results of comparison instructions. Instructions in an IA64 'bundle' can be qualified to be executed conditionally, based on the state of a certain bit in the predicate register. This allows the IA64 to avoid some branches. The compiler/assembler can pack a bundle which includes the appropriate two instructions, each qualified to execute for different states of the predicate register. Essentially, the processor is simultaneously issued the commands for both p
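    A rough illustration of the predication idea in plain C (a sketch of the concept, not actual IA-64 output): the first version is what a compiler would normally turn into a conditional jump, while the second computes both outcomes and selects by the condition, which is roughly what if-conversion onto predicate registers buys you without any branch.

```c
#include <stdio.h>

/* Branchy version: a compiler typically emits a conditional jump here. */
static int clamp_branch(int x, int limit) {
    if (x > limit)
        return limit;
    return x;
}

/* "Predicated" style: both results are computed and one is selected by
 * the condition, with no change in control flow.  On IA-64 a compiler
 * can express this as two instructions guarded by complementary
 * predicate bits, so neither path costs a branch misprediction. */
static int clamp_predicated(int x, int limit) {
    int cond = (x > limit);                 /* like a compare writing a predicate bit */
    return cond * limit + (1 - cond) * x;   /* both "paths" issue; the condition selects */
}

int main(void) {
    for (int x = 8; x <= 12; x++)
        printf("x=%d  branch=%d  predicated=%d\n",
               x, clamp_branch(x, 10), clamp_predicated(x, 10));
    return 0;
}
```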
    • by boner ( 27505 )
      Let me disagree with you on a few points:

      Ad 1, "Compilers can improve easily, with a recompile": this remark I consider extremely naive, and it really, really hurts your credibility. The fact that a compiler can be recompiled does not mean it also automatically improves its logic. The problem with all the compilers for Itanium is in the logic, not in the execution. Recompiling the compiler without improving the logic might give you a faster compiler, certainly not a better one.
      In order to improve compilers for
      • ad 1: Compilers can improve easily, with a recompile. this remark I consider extremely naive and it really, really hurts your credibility.

        Agreed. The point I was trying to make was that realizing the benefits of compiler improvements requires updating your software, not replacing the processor. Obviously, recompiling the same software isn't going to be an advantage.

        B huh? Are you mixing up RISC and VLIW (EPIC) designs?
        No, I'm not mixing them up. I was trying to compare their merits.

        Essentially,
        • Agreed. The point I was trying to make was that realizing the benefits of compiler improvements requires updating your software, not replacing the processor. Obviously, recompiling the same software isn't going to be an advantage.

          Ah... but you see, this is the problem: improving compiler technology is extremely hard. Of course, the big hope for VLIW and EPIC architectures was that compiler technology would improve by some huge factor. This hasn't really panned out. Most code that we run is highly data depe

  • by Shimmer ( 3036 ) on Friday January 27, 2006 @01:58AM (#14576710) Journal
    Something smells fishy to me. $10 billion is a lot of money for a marketing campaign.

    Assuming that each Itanium chip retails for roughly $1,000, Intel/HP could simply give away 10,000,000 chips for the investment they're making. Do they really think that there will be enough demand for these chips between now and 2010 to make up for that kind of marketing expense?

    I have a hard time believing they will actually spend anything near this amount on marketing, even if the campaign is successful.
    • Something smells fishy to me. $10 billion is a lot of money for a marketing campaign.

      You misunderstand the nature of the investment. $10 billion is the amount of money currently on the table from all of the members of the Itanium Solutions Alliance combined, in terms of commitments to produce Itanium hardware, software, and support options. Probably some fraction of that amount will be spent on marketing, but most of it is going to be spent on R&D and manufacturing.

    • Read the article. It's not for marketing, but for continuing research and development. They are making such a large investment because they believe a $140 billion market is at stake; they claim they are being successful at pushing out Sun and IBM but want to accelerate the rate; and also, with their Itanium Alliance, they believe there is a lot of money to be made on top of Itanium software solutions....
  • Here's a quick bit of math for thought:
    Let's say that Intel contributes half, so USD 5 000 000 000.
    Let's say that Intel nets USD 5 000 per chip (probably WAY overestimating sales price and underestimating costs)
    Intel would need to sell 1 000 000 chips to make this additional investment break even.

    This excludes opportunity cost, cannibalism of existing Xeon sales (though, it's probably the other way around), and probably a host of other things.

    It looks like sketchy math to me. To me, it seems obvious
  • In other news: flogging a dead horse to cost 10 billion dollars.
  • by Devistater ( 593822 ) <devistater AT hotmail DOT com> on Friday January 27, 2006 @02:24AM (#14576786)
    For some reason I'm thinking that $10 billion is probably more than they've ever made on the Itanic.
  • Performance (Score:4, Interesting)

    by velco ( 521660 ) on Friday January 27, 2006 @02:37AM (#14576823)
    Itanium2 systems are among the top in transaction processing
    http://www.tpc.org/tpcc/results/tpcc_perf_results. asp?resulttype=all [tpc.org]
    and THE top one for clusters.

    It makes sense for such an investment to go to:
      a) improving the fabrication facilities - achieving lower defect rates and reducing price;
      b) improving the fabrication process - aiming at higher clock rates.

    Remember also the recent announcement that an Itanium CPU will no longer contain essentially a whole IA-32 CPU.

    ~velco
  • ITER? (Score:3, Interesting)

    by Xoknit ( 181837 ) on Friday January 27, 2006 @02:48AM (#14576847)
    10 Billion? That means it is just as important to humanity as nuclear fusion? WTF?
  • AAAARRRRRGGHHHH!!! (Score:2, Interesting)

    by wjeff ( 161644 )
    This just makes me insane. I know it was already mentioned several times that people wish HP would put this kind of effort into reviving the Alpha. But reading about them putting this much money into a piece of crap like the Itanium, after the way they chucked out the Alpha, is especially galling when you consider that in HP's own internal testing, Alpha EV8s and EV9s consistently wiped the floor with even the latest Itaniums.
  • I'd have thought that it was possible to rearrange the deck chairs for a lot less than that. Maybe it includes the musicians' salaries as well, though.
  • Aw, jeez... (Score:4, Insightful)

    by NerveGas ( 168686 ) on Friday January 27, 2006 @03:17AM (#14576944)

        AMD is starting to kick Intel's pants in the most lucrative arena, small- and medium-sized servers. Instead of trying to compete technologically in that area (as opposed to just marketing), they're throwing good money after bad into a failing/failed architecture which only makes sense for a few highly-specialized applications. If it weren't for the fact that most holders of Intel stock know next to nothing about the industry, I would expect a cry for a change of leadership.

        Sure, there are a few supercomputing-type applications where the Itanium really, really shines - but they're sufficiently specialized that Intel just doesn't move a very large number of CPUs.

        Like I've said before, Intel is in a bind because of its own laziness and arrogance. Look at one of the primary advantages of the A64/Opteron architecture - the on-die memory controller. More memory bandwidth, lower latencies, and a memory subsystem that scales with the number of CPUs. Big-iron vendors proved that technology long before AMD decided to use it. Yet Intel has always enjoyed the superior manufacturing side of the business - if *anyone* could afford to have put those extra transistors on the die, it was Intel. Since they're almost always a step ahead of AMD in making smaller transistors, they had the *ability* to do something along those lines long before AMD did - but they relied on the old tradition of more megahertz and lots of marketing. I don't think this move is much different; they're putting their efforts in the wrong direction.

    steve
  • by syncrotic ( 828809 ) on Friday January 27, 2006 @03:36AM (#14576992)

    Recently an article [anandtech.com] was published on AnandTech that puts the Itanium in a new light: it's actually very efficient in terms of die area utilization. Combine this with Intel's recent announcement that they are scrapping the hardware x86 compatibility on the Itanium, which takes up a fair bit of die space, and you have a very small core of the sort that's absolutely perfect for multi-core applications.

    Itanium needs a lot of cache to function well, for reasons that the aforementioned article describes, but it's not unreasonable to assume that Intel's shared cache technology from Yonah will make its way into Itanium.

    This thing might be trying to compete with chips like the UltraSPARC T1.

  • You know... (Score:4, Funny)

    by Stan Vassilev ( 939229 ) on Friday January 27, 2006 @04:47AM (#14577194)
    There's a typo: "Intel and HP Commit $10 Billion to Booze and Women", that's the title. I have no idea what this "Itanium" thing is or where it came from.
  • people dont buy it (Score:2, Insightful)

    by NynexNinja ( 379583 )
    The chips are way overpriced and underperform too much for people to spend the money. No marketing campaign can fix that.
  • What, are they getting into aerospace propulsion now?
  • by Theovon ( 109752 ) on Friday January 27, 2006 @08:14AM (#14577807)
    People like to talk about how the Itanic, as it were, is a flop. It is, but not because it's not a good processor. Itanium is a very cool architecture with features a long time in coming. For instance, used properly, branch predication can be a HUGE boost to performance, and it has proven itself to be so when used properly on the Itanium.

    The first problem is one of marketing. HP/Compaq is a screwed-up company, the merger of two wholly incompatible companies that could never work together properly. Put this together with the fact that they canceled the Alpha, another great processor, and you can see that selling Itanium is more about politics than engineering. The next problem is pricing. For a single-chip solution, Itanium is awesome, if you don't count the fact that you could buy multiple Opterons for that price and achieve more performance with properly threaded code.

    There are, of course, technical problems. Itanium is a heat monster. They didn't design it with power consumption and heat dissipation in mind. Did you know that the Itanium's top speed isn't limited by wire delays like it is in most other chips? No. It could actually run a lot faster, were it not for the fact that they can't get the heat off the chip fast enough. Another problem is the compilers. Static scheduling has its limitations, but the real limitation is that Itanium compilers can't manage to do even decent scheduling. It's too complicated. Much of Itanium's performance is theoretical. Given a small piece of C code, you can recode it in assembly and get it to run 10 times faster. If only the compilers were as smart as the assembly coder.

    Itanium was a great idea. It's just being executed poorly, and the R&D is being put into the wrong place. The architecture is there. It's great. Now, get the price down, design it for lower heat dissipation, and get some people working on that damn compiler!
  • by ebrandsberg ( 75344 ) on Friday January 27, 2006 @09:02AM (#14578041)
    Consider: one of the compiler issues has been the ability to schedule all four pipelines with instructions that are useful, instead of no-ops. Now, consider using a method like the T1 does, where you have four sets of VLIW threads, each with on average 3 instructions. You could get away with executing the four threads with 12 pipelines on average. In effect, you can take the no-ops from one set and fill them with instructions from another thread, and keep the pipelines chugging. If tied together properly, it would have binary compatibility with current Itanic code, make better use of today's inefficient compiler-generated code, and make the architecture work much more efficiently with OS threads a la the Sun T1. Given that the overall core (not including x86 and cache) for the Itanic is fairly small, something like this could probably be done very effectively and push the Itanic ahead.
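    A toy simulation of that slot-filling idea, under made-up assumptions (two threads sharing one 3-slot issue group, with the ~40% NOP rate quoted earlier in the thread):

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy model: each "bundle" has 3 slots, and on average some of them are NOPs.
 * Interleaving two threads lets the issue logic fill one thread's NOP slots
 * with the other thread's real instructions.  All numbers are made up. */

#define SLOTS       3
#define BUNDLES     100000
#define NOP_CHANCE  0.40   /* the ~40% NOP figure quoted earlier in the thread */

static int useful_slots_in_bundle(void) {
    int useful = 0;
    for (int s = 0; s < SLOTS; s++)
        if ((double)rand() / RAND_MAX >= NOP_CHANCE)
            useful++;
    return useful;
}

int main(void) {
    srand(42);

    long single = 0, paired = 0;
    for (int i = 0; i < BUNDLES; i++) {
        int a = useful_slots_in_bundle();   /* thread A's bundle this cycle */
        int b = useful_slots_in_bundle();   /* thread B's bundle this cycle */

        /* One thread alone: only its non-NOP slots do work this cycle. */
        single += a;

        /* Two threads sharing the issue slots: B's instructions can fill
         * A's empty slots, capped at the machine's 3 slots per cycle. */
        int filled = a + b;
        paired += (filled > SLOTS) ? SLOTS : filled;
    }

    printf("single thread : %.1f%% slot utilization\n", 100.0 * single / (BUNDLES * SLOTS));
    printf("two threads   : %.1f%% slot utilization\n", 100.0 * paired / (BUNDLES * SLOTS));
    return 0;
}
```

    With these made-up numbers, single-thread utilization comes out around 60% and the interleaved case a bit over 90%, which is the gist of the suggestion.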
