Next-Generation Chip Fabs

PaulBu writes "As reported in EE Times, a new IBM $2.5B fab will be the first one to 'produce chips using all three of the sophisticated technologies on the industry's bleeding edge: low-k dielectrics, copper interconnect and silicon-on-insulator based transistors' on 300mm wafers. And it runs entirely on Linux! Quote from the article: 'The state of automation in Building 323 is such that 20,000 sensors are used to track wafer lots in front-opening unified pods that are transported from one tool to the next on rails using linear induction motors. The setup resembles an intricate monorail system tuned to millimeter-precision specs. A central control system monitors all stations and tracks wafer lots via 802.11 wireless communications.'"
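
As a rough mental model of what "a central control system ... tracks wafer lots via 802.11" might involve, here is a tiny, purely hypothetical sketch in Python. IBM's actual control software (SiView, mentioned in the article) is proprietary and its message formats are not described, so every name and field below is an illustrative assumption: rail sensors report position events, and the controller keeps a live map of which FOUP is where.

```python
import json

# Hypothetical sketch of a lot-tracking controller: rail sensors report position
# events (shown here as JSON messages), and the controller keeps a live map of
# which FOUP is where. Illustrative only; the article does not describe SiView's
# actual message format.
foup_locations = {}

def handle_sensor_event(message: str) -> None:
    """Record one position report from a rail sensor."""
    event = json.loads(message)
    foup_locations[event["foup_id"]] = {
        "sensor": event["sensor_id"],
        "timestamp": event["timestamp"],
    }

# A sensor along the monorail reports that FOUP-0042 just passed by.
handle_sensor_event('{"sensor_id": "RAIL-S-01937", "foup_id": "FOUP-0042", '
                    '"timestamp": "2002-08-20T12:49:00Z"}')
print(foup_locations["FOUP-0042"]["sensor"])  # -> RAIL-S-01937
```
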
  • by havaloc ( 50551 ) on Tuesday August 20, 2002 @12:49PM (#4106060) Homepage
    For they will wreak havoc with your 802.11 control infrastructure.
    • I'd be more worried about wardrivers. A lapse in security could result in someone on the street loading the master control program and tricking it into ruining a few thousand high-end processor chips. Hell, could you imagine how bad it would be if someone maliciously turned off the error checking on the sensors and it went unnoticed for even a day's worth of manufacturing?
      • Comment removed based on user account deletion
        • Yeah, but if you know where the building is, you could always wardrive with a high-gain directional antenna.
        • "Or even if they only implemented their own 802.11 network at very high signal power, causing IBM's traffic to be filtered as noise."

          I wouldn't be surprised if it's 802.11a. Most people with their 2.4 GHz 802.11b equipment can't connect to the 5 GHz 802.11a networks.

        • Actually, the building itself probably acts as the EM shielding, since most plants are made of reinforced concrete; 2.4 GHz signals either fail to penetrate the concrete or are reflected by the steel reinforcing bars. I doubt you could get a signal in the parking lot, much less outside their secured fence.
      • I'm assuming you've never worked in manufacturing, but the machines used in most plants almost never run completely unattended. Much of the time they have an operator whose job is to watch over the machine to make sure it's operating. In addition, they (or someone else) are almost always responsible for taking a sample of the product after the machine has done its duty and running it through various tests to verify certain conditions on the parts being created. This helps guarantee both that a malfunctioning machine is caught as soon as possible, and that parts manufactured while the machine was misbehaving don't get sent on for further processing.

        Any manufacturing company worth their salt (read: still in business) has these measures in place, IBM being one of them.

        (And yes, I work for IBM, but no longer on the manufacturing side.)
      • A lapse in security could result in someone on the street loading the master control program and tricking it into ruining a few thousand high-end processor chips.

        That's easily fixed. Just load up TRON. He'll even watchdog the MCP too.
    • Watch out for Starbucks?
      This is the wrong thread for that.
      We do not belong.
    • No way. (Score:3, Funny)

      by Stoutlimb ( 143245 )
      What are the odds that a chip manufacturing plant this big has converted their entire warehouse building into a giant Faraday cage?

      Hell, I would.
      • Convert? Our warehouse acts like one already; some of our materials have much better absorption than steel and concrete. It really causes trouble when you actually want to deploy wireless there.
  • I thought IBM was implementing a new plan of getting out of anything hardware-related and concentrating on providing "services" (i.e. the recent purchase of a major company, can't remember its name). Maybe I'm just confused.

    epicstruggle
    • I believe you're thinking of their specific move away from hard drives - not hardware in general.

    • I thought IBM was implementing a new plan of getting out of anything hardware-related and concentrating on providing "services" (i.e. the recent purchase of a major company, can't remember its name). Maybe I'm just confused.

      No, they're doing MS one better. Software being a service is just so 90s. In the coming century, hardware itself will be a service.

      IBM knew that they couldn't come up with this hardware plan alone, so they bought a phone company. Remember when you had to rent your phone and it was illegal to connect a phone that they didn't own to their lines? I mean, forget about activating your OS. Can you see an automatic deduction from checking every time you boot up?

      Wait, then why is IBM pushing Linux? If they were really going with a pay-per-boot plan, they'd be pushing MS. Either they didn't think this plan through all the way, or I'm reading it incorrectly.

    • While it's true that IBM has been pushing "services" really hard, they've still got their fingers in a lot of pies. IBM has annual revenues of over 85 billion dollars, and makes around two billion dollars in profit every three months. When you're running a ship that big, you don't put all of your eggs in one basket. You stay in as many profitable markets as you can without losing focus.

      Because they've been out of the limelight for a while, people seem to forget just how huge and diversified IBM really is. IBM successfully competes with Microsoft, Oracle, Intel, Sun, HP, and EDS all at once. Occasionally they'll ditch a division (like storage) because there's no longer any profit in it. However if there's money to be made in a tech market, you can bet that IBM will be there.

  • *snicker* (Score:2, Insightful)

    by Ratface ( 21117 )
    "Hartswick said Linux was evaluated against a Windows-based system and performed flawlessly for three months, whereas the Windows-based system failed after six or seven days."

    It's points like this which the Linux evangelists out there should be adding to their scrapbooks.

    Interesting to note that their network is based on 1 GHz processors though - perhaps a way of reducing an ageing inventory??

      Interesting to note that their network is based on 1 GHz processors though - perhaps a way of reducing an ageing inventory??

      That may very well be part of the motivation. Another thought occurs to me; one of the selling points of Linux over Windows is that it performs better on older hardware. Why pay more for unneeded processing power, after all?

      • Probably because that's the low end in processors these days. I recently bought a new motherboard and CPU, and I didn't really want the fastest thing around, so I got a 1 GHz AMD Duron for a measly $40. The motherboard cost twice that! (Of course this means I should be able to easily upgrade later.) I could have gotten something slower like a 950 MHz Duron or even 850 MHz, but I probably would have only saved $3. Plus, my understanding is the 1GHz Durons use the new Morgan core rather than the old Spitfire one, and so use a smaller process which uses less power.
    • 1Ghz processors (Score:3, Informative)

      by wiredog ( 43288 )
      I've worked in motion control, although nothing that big, and 1 GHz processors are overkill for that application. Heck, we got decent results with 486-50s.
  • by jukal ( 523582 ) on Tuesday August 20, 2002 @12:50PM (#4106071) Journal
    " Hartswick said Linux was evaluated against a Windows-based system and performed flawlessly for three months, whereas the Windows-based system failed after six or seven days. "

    "An internally developed master software system called SiView controls all manufacturing operations. An IBM spokesperson said the manufacturing execution system is being licensed to others for fab control.

    As for the intended output of Building 323, Bijan Davari, vice president for technology and emerging products, said the company has "spent $500 million on process development alone in order to maintain our technology leadership, and we are experiencing a significant recovery via intellectual-property licensing and alliances. Our value proposition is that we are one to two years ahead of the best of the best."

  • by unicron ( 20286 )
    I would like to hope that this will drive down chip costs to the consumer, but the ironic/funny thing is that I fear it will jack them through the roof for 6 months so they can pay for the damn lab.
    • Re:Uhh (Score:5, Informative)

      by Zathrus ( 232140 ) on Tuesday August 20, 2002 @01:12PM (#4106219) Homepage
      Uh... do you have any idea how much fabs cost? Six years ago a state-of-the-art fab, which was designed to manufacture nothing smaller than 0.15 micron transistors (and 0.25 was top notch at the time) cost nearly $1.5B.

      Once in full production the fab paid for itself in under 9 months. Amazing what happens when fabbing lots (a lot is 12 or 24 wafers, at least where I worked) that have a street value of $250,000.

      Chip costs won't rise. They'll continue to fall, just as they always have. Building a fab is indeed a large investment, but if you have the money to invest then it's one that'll pay for itself in a very short amount of time.

      Frankly, $2.5B for a 65 nm (aka 0.065 micron) fab is a good value. Sure, if they're starting off with 150 nm or 130 nm equipment they'll have to replace nearly everything to go down to 90 or 65 nm, but that's probably less than a billion per cycle. Equipment is no big deal -- the building itself is a huge deal. Getting all the tolerances tight enough for 65 nm work costs a LOT of money.
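
      For a rough sense of the payback math described above, here is a back-of-envelope sketch in Python. The $1.5B fab cost, the $250,000-per-lot street value, and the 24-wafer lot size come from this comment; the wafer-starts-per-month figure is an assumed, illustrative number, since real throughput varies widely and the comment doesn't give one.

      ```python
      # Back-of-envelope fab payback estimate (illustrative numbers only).
      fab_cost = 1.5e9                 # capital cost cited in the comment above
      lot_value = 250_000              # street value of one finished lot (from the comment)
      wafers_per_lot = 24              # lot size quoted in the comment
      wafer_starts_per_month = 20_000  # ASSUMED throughput; real figures vary widely

      lots_per_month = wafer_starts_per_month / wafers_per_lot
      revenue_per_month = lots_per_month * lot_value
      payback_months = fab_cost / revenue_per_month

      print(f"Lots/month:    {lots_per_month:.0f}")            # ~833
      print(f"Revenue/month: ${revenue_per_month / 1e6:.0f}M") # ~$208M
      print(f"Payback time:  {payback_months:.1f} months")     # ~7 months
      ```

      This ignores operating costs, yield loss, and fabs running below capacity, so treat it strictly as an order-of-magnitude illustration of why a busy fab can recoup its cost quickly.
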
      • Yep, the bad thing about Moore's law: fab costs double every 18 months. :-)
      • Re:Uhh (Score:3, Informative)

        by Cougar1 ( 256626 )
        Chip costs won't rise. They'll continue to fall, just as they always have. Building a fab is indeed a large investment, but if you have the money to invest then it's one that'll pay for itself in a very short amount of time.

        Uh, this assumes you have good products in high demand and can keep the fab running continuously at or near full capacity. A fab running below half capacity can bleed red ink pretty fast! Unfortunately, there's quite a bit of overcapacity in the semiconductor industry at the moment (mostly due to rapid expansion by foundries in Taiwan and elsewhere in Asia). This is one reason why semiconductor stocks have been in the toilet for the last year or so. IBM's fab will only make this worse, although IBM's advanced processing technology definitely gives them an advantage, so it may be their competitors rather than IBM that feel the pinch.

        Equipment is no big deal -- the building itself is a huge deal. Getting all the tolerances tight enough for 65 nm work costs a LOT of money.

        Think again: equipment prices are HUGE, especially when you're talking state-of-the-art 300mm tools! They account for the greater portion of that $2.5B price tag. Lithography tools alone run $15-25M each, and a big production fab like this probably has 20-30, so you're already at $0.5B with just one step of the process. Now add in ion implanters, plasma etch systems, CVD equipment, diffusion furnaces, sputtering systems, chemical mechanical polish tools, electroplating equipment, and wet clean hoods, not to mention all the analytical equipment (SEMs, ellipsometers, particle counters, Quantox systems, CV plotters, etc.) needed to ensure everything is functioning properly.
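
        To make the equipment numbers above concrete, here is a quick tally in Python. The tool counts and per-tool prices are illustrative guesses loosely based on the figures in this comment (litho tools at $15-25M each, 20-30 of them), not actual IBM purchasing data.

        ```python
        # Rough tally of 300mm fab tool costs. Counts and per-tool prices are
        # illustrative guesses loosely based on the comment above, not real data.
        tools = {
            # tool type: (count, cost per tool in $M)
            "lithography steppers/scanners":        (30, 20),
            "ion implanters":                       (20, 6),
            "plasma etch systems":                  (40, 4),
            "CVD systems / diffusion furnaces":     (50, 3),
            "sputtering / electroplating":          (25, 4),
            "CMP tools":                            (20, 3),
            "wet clean hoods":                      (25, 1),
            "metrology (SEMs, ellipsometers, ...)": (40, 2),
        }

        total = sum(count * price for count, price in tools.values())
        for name, (count, price) in tools.items():
            print(f"{name:40s} {count:3d} x ${price}M = ${count * price}M")
        print(f"{'TOTAL':40s} ${total}M  (~${total / 1000:.1f}B)")
        ```

        Even with conservative guesses the tool set alone runs well over a billion dollars, which is the point being made: the equipment, not the shell of the building, dominates the $2.5B price tag.
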
      • Re:Uhh (Score:2, Informative)

        by kpk7161 ( 602165 )
        side note: I work at the 200mm fab across the street.

        Actually, the equipment is a big deal; the building was the small thing. The building has been here for over 15 years, and it's been empty for at least 5.

        As for "Getting all the tolerances tight enough for 65nm work costs a LOT of money." The fab is actually only a class 100 cleanroom. Each tool is a micro-environment with its own air handlers creating a class .1 "cleanroom". That is why the tools are so expensive. As for the fab paying for itself, it's highly unlikely that 323 will be able to generate enough revenue this year to offset depreciation. $2.5B is what IBM chipped in(no pun intended). Toshiba, Sony, and Siemens are our 'partners' in this venture.
  • "This is the first fab whose IT infrastructure is all Linux-based, controlled by some 1,700 1-GHz microprocessors able to access some 600 terabytes of data."

    I need one of these setups in my garage ;-)

  • Only mm? (Score:2, Funny)

    by gerf ( 532474 )

    The setup resembles an intricate monorail system tuned to millimeter-precision specs

    Um, just millimeter? You'd think that where chips have components measured in nanometers, you'd need just a bit more than millimeter precision. Oops, that transistor's off a bit again! I wonder why? :P

    • The tracks are for transport - not for positioning.

      It's not clear which method they're using, but either the whole wafer is immobile during the lithography step, or precise adjustments are made after the wafers are moved by the linear motors.
    • Re:Only mm? (Score:2, Informative)

      by apirkle ( 40268 )
      Um, just millimeter? You'd think that where chips have components measured in nanometers, you'd need just a bit more than millimeter precision. Oops, that transistor's off a bit again! I wonder why? :P

      They're referring to the system that shuttles containers of wafers around the fab, moving them from machine to machine. Robots run around on rails, dropping down to pick up a sealed container of wafers and whisk it away to the next stage in the manufacturing process.

      Once a wafer is loaded into a stepper for printing, rest assured that it is aligned very precisely.
    • Re:Only mm? (Score:5, Informative)

      by Zathrus ( 232140 ) on Tuesday August 20, 2002 @01:36PM (#4106354) Homepage
      As others have pointed out, the system is for moving wafers, not loading them into the machines. This is nothing new -- I worked at Texas Instruments several years ago and they had a rail system moving lots around the fabs, keyed to barcode scanners and a Unix backend (we used Solaris on oodles of Sparc 5's).

      Honestly, it's not clear from the article if the rail system does end-to-end transport, or if it's just a lot shuttle. At TI it was just a shuttle - you'd ask for the next lot to be processed for a particular machine and the system would retrieve the lot and move the tray to you. A technician would pick the basket up off the rail and then use vacuum wands to move the wafers into the loading mechanism for the machine. Once processing was done, vacuum wand the wafers back into the basket and place it back on the track.

      This process is error prone -- TI would only hire technicians with at least a high school diploma, but it's still human intensive and distractions can (and did) cause problems. Grab the wafer by the wrong side? Toast. Vacuum seal break while moving the wafer? Shatter. Drop the basket? Many shatters. Accidentally forget which wafers have been processed already (many of the machines could only load 5 or 10 wafers, and a lot was 24 wafers)? Bad things happen when you double-dope or double-etch wafers.

      If IBM's new automation system is end-to-end, meaning that the rail system somehow automatically loads and unloads the wafers to/from machines then that's a real advancement. It would allow you to eliminate 80% of the humans from inside the fab, and humans are one of the primary causes of particles. When you start talking about 65 nm processes, you have to seriously consider eliminating humans as much as possible from the environment. Or at least having them wear self-contained suits -- hair, skin, and clothing all shed humongous particles at a frightening rate (to a silicon wafer that is). And don't even think about being a smoker.
      • "humans are one of the primary causes of particles"

        Reminds me of my days as an army instructor yelling at a young recruit because his rifle was full of communism.

        Ahhhh, sweet sweet memories.

      • Re:Only mm? (Score:4, Insightful)

        by apirkle ( 40268 ) on Tuesday August 20, 2002 @03:42PM (#4107270)
        A technician would pick the basket up off the rail and then use vacuum wands to move the wafers into the loading mechanism for the machine. Once processing was done, vacuum wand the wafers back into the basket and place it back on the track.

        You must have been in one of the older fabs. There are two industry standard automated wafer carrier pods used these days: SMIF and FOUP. SMIF (Standard Mechanical InterFace) is used for 200mm wafers, and FOUP (Front Opening Unified Pod) is used for 300mm wafers. The pods are sealed from their environment and are not opened by fab technicians under normal circumstances. The overhead tracks run directly to each machine in the fab, and each fab tool loads the wafers directly from the pod without human intervention.

        A major benefit of all this is that the wafers never enter the cleanroom air - they only encounter the air in the pod, and the air in whatever tools they enter. As a result, the air in the cleanroom doesn't have to meet such a high spec, which leads to big savings on air scrubbers.

        Accidentally forget which wafers have been processed already (many of the machines could only load 5 or 10 wafers, and a lot was 24 wafers)? Bad things happen when you double-dope or double-etch wafers.

        This is the reason behind the wireless control system. Old fabs use paper-based flow logging, meaning that each wafer lot has a paper attached to show where it has been and where it has to go. Did I mention that this is special (read: expensive) cleanroom paper, because regular paper flakes off lots of particles that are a no-no in the cleanroom environment? In modern fabs, the SMIF and FOUP pods have electronic tags that carry all the information needed to process the wafer lot - the recipe for which machines it has to go to, what to do when it gets to the machine, notes by technicians, etc etc.
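
        As an illustration of the kind of information such an electronic carrier tag might hold, here is a minimal sketch using Python dataclasses. The field names and structure are entirely hypothetical; actual SMIF/FOUP tag formats and SiView's schema are not described here.

        ```python
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class ProcessStep:
            tool_id: str        # which machine the lot should visit
            recipe: str         # what that machine should do to the wafers
            completed: bool = False

        @dataclass
        class CarrierTag:
            """Hypothetical contents of a SMIF/FOUP electronic tag."""
            lot_id: str
            wafer_count: int
            route: List[ProcessStep] = field(default_factory=list)
            notes: List[str] = field(default_factory=list)  # technician annotations

            def next_step(self) -> Optional[ProcessStep]:
                """Return the first unfinished step, or None if the lot is done."""
                return next((s for s in self.route if not s.completed), None)

        # Example: a 25-wafer lot with a two-step route.
        tag = CarrierTag("LOT-0042", 25, [
            ProcessStep("LITHO-07", "print metal-1 mask"),
            ProcessStep("ETCH-03", "metal-1 etch"),
        ])
        print(tag.next_step().tool_id)  # -> LITHO-07
        ```

        Marking each step completed as it finishes is also one simple way to avoid the double-dope/double-etch mistakes mentioned earlier in the thread.
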
        • Re:Only mm? (Score:2, Insightful)

          by Zathrus ( 232140 )
          You must have been in one of the older fabs

          Not at the time... but they're old now. DM4 (ok, it was old), DM5/DP1 (both brand new at the time, DM5 wasn't even finished when I started). They certainly didn't have the sealed pods at the time, but DP1 was the first TI fab to use the robotic rail system.

          FWIW, I was in the automation group, so we got pretty wide exposure to most of the fab systems.

          The control system may work with the pods now, but it didn't then. Our control system was paperless as well, even in DM4, but that doesn't help when a technician miscounts and winds up putting one wafer into the PVD twice.

          The only cleanroom paper was carried by engineers and maintenance workers. I guess the tech foreman had a pad too, but I don't recall the regular workers having them. All the recipes were online -- that was one of the primary things we automated to eliminate mistakes.
        • It sounds like they are using the same 300mm automation hardware that Intel is using. The only difference is that they are using silicon-on-insulator and a different OS to control the automation. The rest of the fab is about 2-3 fabs behind Intel. Intel's D1C and RP1 are fully automated 300mm fabs, and D1D is currently finishing up construction, with tool installation and qualification in progress.
          A photo of an Intel 300mm cleanroom showing the overhead delivery vehicles and the load ports of the processing tools can be seen here:
          http://www.intel.com/jobs/logictech/
          The tools themselves are on the other side of the walls.
      • by Anonymous Coward
        The automation system is tool-to-tool movement of wafers and the goal is indeed to try to eliminate the need for almost all of the human operators.

        All of the 300mm manufacturing equipment is linked into a fabwide automation network through a series of standards, so that each individual wafer in the fab is tracked through each of 400 processing steps. At any moment the system knows exactly where every wafer is, what processes it has gone through so far, and where it needs to go next. Then a master scheduling program acts to efficiently move the wafers to the next available tool. The goal is to improve the cycle time of moving the wafers through the fab as well as reducing labor costs. It's a pretty slick system and looks damn cool. It's also frightening when you realize that a single cassette of 25 wafers near the end of line is worth well over $1 million and they are speeding around overhead.

        Also, although IBM is leading in automation implementation right now, almost all of the other 300mm fabs worldwide are putting in similar systems.
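
        To make the "master scheduler" idea above concrete, here is a toy dispatcher in Python. It is a hypothetical sketch of "send each lot to the next available tool of the right type", not the real SiView scheduler; the tool names, lot IDs, and routes are made up.

        ```python
        from collections import deque

        # Toy dispatcher: send each waiting lot to an idle tool that can perform
        # its next step. Hypothetical sketch only; not the real SiView scheduler.
        tools = {
            "LITHO": ["LITHO-01", "LITHO-02"],
            "ETCH":  ["ETCH-01"],
        }
        idle = {tool_id for group in tools.values() for tool_id in group}

        # Each lot carries a queue of remaining step types (a real route has ~400 steps).
        lots = {
            "LOT-0001": deque(["LITHO", "ETCH"]),
            "LOT-0002": deque(["LITHO"]),
        }

        def dispatch():
            """Assign every waiting lot whose next step has an idle tool."""
            assignments = []
            for lot_id, route in lots.items():
                if not route:
                    continue
                step = route[0]
                for tool_id in tools[step]:
                    if tool_id in idle:
                        idle.remove(tool_id)
                        route.popleft()
                        assignments.append((lot_id, tool_id, step))
                        break
            return assignments

        print(dispatch())  # [('LOT-0001', 'LITHO-01', 'LITHO'), ('LOT-0002', 'LITHO-02', 'LITHO')]
        ```
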
  • What about someone sitting in the parking lot with a laptop and an antenna, who hops onto the network and sends fake data to the control system, screwing up all the chips?
  • A central control system monitors all stations and tracks wafer lots via 802.11 wireless communications

    Well I sure hope they do not have a microwave oven in the breakroom :-D
  • Millimeter (Score:2, Funny)

    by handorf ( 29768 )
    "tuned to millimeter-precision specs"

    Umm... since when is a millimeter a big unit of measurement? My CAR DOOR is built to millimeter-precision specs. The engine had bloody well better be built to .001 mm specs.

    Silly author... don't quote units when they're meaningless.
  • by Ralph Wiggam ( 22354 ) on Tuesday August 20, 2002 @01:00PM (#4106144) Homepage
    They spent 2.5 BILLION bucks on this fab and the only thing they could think of naming it was "Building 323". That's so weak. How about SupaFab? Fab:TNG? Absolutely Fab-ulous? MegaFab2k2? It's not like this is a super secret government base like Area 51. Come on IBM, have some flair.

    -B
  • Good thing they just laid off 1000 people at their Essex Junction, VT fab.
  • by Anonymous Coward
    There'd be an episode about Lucy at the chip fab plant, and the conveyor belt would get out of control, and she'd ruin millions of dollars in chips. It'd be hilarious.
  • ...20,000 sensors are used to track wafer lots in front-opening unified pods that are transported from one tool to the next on rails using linear induction motors. The setup resembles an intricate monorail system tuned to millimeter-precision specs. A central control system monitors all stations...

    Anyone remember the Denver airport baggage handling system fiasco?

  • So what sort of chips are they planning to manufacture with such bleeding edge technologies? Is IBM trying to squeeze into the PC processor market or is this for more custom jobs?
    • by entrox ( 266621 ) <slashdot@@@entrox...org> on Tuesday August 20, 2002 @01:22PM (#4106282) Homepage
      According to rumours, IBM will unveil a PPC-based desktop processor - something like a Power4 Lite - on October 15th. Some people speculate that Apple will ditch Motorola in favour of IBM and get the new breed of processors from them, since Motorola is lagging behind and doesn't seem to like having Apple as a customer (apparently they got burnt when Jobs killed the clone market).

      So perhaps they will fab the next-generation (G5?) processor for Apple there. I at least hope so :)
      • Sorry to reply to myself, but I forgot to include a link: Micro processor Forum [mdronline.com].

        Quote:

        Peter Sandon, Senior Processor Architect, PowerPC Organization, IBM Microelectronics

        IBM is disclosing the technical details of a new 64-bit PowerPC microprocessor designed for desktops and entry-level servers. Based on the award-winning Power4 design, this processor is an 8-way superscalar design that fully supports symmetric multiprocessing. The processor is further enhanced by a vector processing unit implementing over 160 specialized vector instructions and implements a system interface capable of up to 6.4GB/s.
      • Much more likely is that they will use it to shrink the Power4 die down to something sane. With 2 cores and the huge amount of L3 cache they place on one die, the Power4 is, I believe, the biggest chip currently being manufactured.
  • I read a few days ago about Intel's plan to use "Strained Silicon" in their 90 nm process. Here's the link [anandtech.com]
    Quote from the article:

    Simply put, you want transistors to be able to pass as much current as possible when they're switched on and to pass no current when they're switched off. Unfortunately we don't live in a perfect world and transistors don't always behave as they should. Technologies such as Silicon on Insulator (SOI) help stop current from flowing when it shouldn't (leakage current) and technologies such as Strained Silicon help increase the amount of current that's allowed to flow when it's needed (drive current).

    I saw no mention of IBM doing this, so I wondered: is this patented by Intel? Even so, if you are setting out to build the most advanced fab, it would seem that this technology should be licensed.

    • Patented by Intel?? Are you kidding? Since when did Intel ever invent any groundbreaking technologies? Every time I look at Slashdot, it seems I see another story about some great new technology invented by IBM (and patented too--actually using the patent system for what it was intended). I've never seen any useful invention by Intel. Intel just takes other peoples' inventions and uses them.
      • I know, I know, you are not supposed to feed the trolls, but...

        How about:
        - The world's first microprocessor (the 4004 in 1971)
        - The world's first general purpose microprocessor (the 8080 in 1974)
        - The PCI bus
        - USB
        - The Ethernet standard (along with Xerox and DEC)
        - The first math coprocessor (the 8087, in 1980) that was used as the basis for the IEEE floating-point standard in 1985

        And if you look at the hot technologies today, Intel is involved in most of them too (3GIO, SATA, etc).
        • Ok... what do any of these have to do with chip fabrication technologies? Such as copper, SOI, etc.? I'll give you the 4004, but that was a long time ago, and everything since is pretty unrelated to basic silicon fabrication technology.

          USB isn't much of an invention, either; it's just Intel's answer to FireWire, which is technically far superior. Intel made USB 1) to avoid the royalties on FireWire, and 2) to make something which required a host processor (made by Intel, of course), unlike FireWire, which is peer-to-peer.

          I'd be willing to bet that Xerox and/or DEC did most of the technical work on Ethernet. DEC had a long history of technological achievements (e.g. the Alpha processor).

          PCI was nothing new either. New to the Intel-based PC world with its crappy ISA bus, yes, but to the rest of the computing world? No.
          That was just Intel realizing they needed to push a decent bus standard so PCs wouldn't stay crippled by ISA. Smart? Yes. Revolutionary, patentable invention with no prior art? No.
          • Ok- you want chip fabrication technologies?

            - Intel chairs the EUV (extreme ultraviolet) lithography consortium
            - MOS, HMOS, CHMOS, and CMOS processes were all developed by Intel
            - Nitrided gate oxide technology (to solve the hot-electron effect)
            - Clock multiplying (integrating a phase-locked loop on the chip)
            - Intel was the first to use 300 mm wafers and the 130 nm process
            - Intel developed the world's fastest 20 nm transistor in 2001

            USB was designed as a low-cost replacement for legacy ports, and it has been very successful at that. It wasn't until USB 2.0 that it was designed to compete directly with FireWire. And the PCI bus beat out several other replacements for the ISA bus (VESA Local Bus, anyone?). I would call any technology that is used as widely as the PCI bus and has remained competitive for over 10 years a significant contribution.
  • And sales of brightly colored chalk skyrocket.
  • by nomadicGeek ( 453231 ) on Tuesday August 20, 2002 @01:22PM (#4106283)
    Let's start off by saying that I like Linux and I think that it is great. It sounds like IBM did some fantastic things at this plant and I applaud the innovation.

    The Windows system fails after 6 or 7 days? I work with industrial controls all the time. As I write this, I am working on an NT-based server that monitors chemical production. It has only been rebooted 4 times in the last year (I'm waiting for a backup to complete so I can change tapes, hence the time to cruise by /.). The reboots are due more to external factors than the box needing it. Reliability is not an issue in the Windows-based systems that I build.

    If the Windows based system failed after 6 or 7 days then they f'ed something up. There are a lot of things that you can blame on Bill Gates but I don't think that is one of them.

    I think that it is great that they are using Linux. I would like to see a lot more of this type of thing. I'd love to take a look at what they have done, but the crap about the Windows system failing is FUD. It smells just as bad coming from the Linux crowd as it does coming from MS.
    • by SirSlud ( 67381 ) on Tuesday August 20, 2002 @01:41PM (#4106380) Homepage
      There is no *best*. Only your setup, your software, your thing. There's nothing to say that their software doesn't hit some bits in Windows that your software doesn't, and that's what causes it to crash. Or they exploit various weaknesses in Windows that your software doesn't.

      I don't think there's any intended *this is always better than that if you set it up properly* claim being made here, just the simple fact that the MS install stood for 6 days, and the Linux one for 3 months. If I were in charge of the money, I'd go Linux. If the MS install had stood for 3 months, and Linux gone down after 6 days, I'd go with Windows.

      4 reboots a year ain't bad, but we regularly push over a year (FreeBSD, if you're curious):

      2:37PM up 385 days, 10:18, 1 user, load averages: 0.75, 0.73, 0.79

      4 reboots to me sounds like a lot, but then again, we're doing different things on our boxen now, aren't we, so different behaviour can be expected? :)
      • If MS had stood for 3 months and Linux had gone down after 6 days, Slashdot would be making a whole different noise. They would be saying that the guys at IBM did something wrong, and that if they'd changed such and such a setting, they could have gotten better reliability out of the Linux system. You're right, there is no *best* system configuration: there are only very good ones for a certain situation. Conversely, there is no *worst* system configuration: there are only very bad ones for a certain situation. It may well have been the case that the Linux system was set up very well to handle the load that IBM needed it to perform, whereas Windows had not been configured to handle said load, and was thus a very bad configuration for that situation.

        What he was saying is that had IBM set up Windows properly, they could have gotten much better reliability than 6-7 days. In order for that much of a difference to occur, they had to have made a mistake.
        • >If MS had stood for 3 months and Linux had gone down after 6 days, Slashdot would be making a whole different noise

          No, had that been the case, the story wouldn't have been posted. ;)

          More seriously tho, IBM has a vested interest in Linux, plus probably has more internal native *nix expertise than Windows expertise. If that's the case, they still chose the better OS for them.

          Anybody can make any OS stand up for a while; I think the point is that the market winner is the first one you can get to meet your performance and uptime requirements, not necessarily the 'best' OS in the abstract, given a level config/admin playing field. People have to make decisions based on what they have, not what you or I think they should have.
          • I agree. I think they made the correct decision in choosing Linux, because they obviously didn't have too much trouble getting it to work very well for their purposes. They surely do have many experts in house, which helps, and that should factor in their decision. If their experts had been better at Windows, the decision might have been just a little bit more difficult.

            The point is that IBM's results, and their decision based on those results, should not be grounds for people to claim that Linux is 15 times more reliable than Windows in all cases. That is not what the article says, and that's just not true.

            If they got Linux working very well very quickly, it is the best choice for that application, regardless of whether or not it is actually the best OS for the job. The same would be true if the positions had been reversed. Big companies need to make unbiased decisions based on performance, not stereotypes and superstitions.

        • But it didn't, did it?

          Anyway, if it did, it would have been fixed.

          IBM knows better than almost anyone how to apply 'Use the Source, Luke' (UTSL) and fix things. You can't UTSL Windows.

          • Anyway, if it did, it would have been fixed.

            No, if it did, they would have used Windows. The point is that they want to be able to use the best option for the job, and that includes the speed with which they can set up the system properly.
            • "No, if it did, they would have used Windows. The point is that they want to be able to use the best option for the job, and that includes the speed with which they can set up the system properly."

              You're assuming fixing Linux would have taken more time than fixing the Windows hard-disk thrashing that they encountered?
      • I don't know, but I could not afford any Windows system in my company. It's not like we can't pay once for a license; the thing is, all the machines run *unattended* 4000 miles away from our headquarters.

        We were using Red Hat, and we REMOTELY replaced the entire OS with Slackware Linux without even having to ask our ISP to hard-boot the machine once.

        After that reboot, we expect the machine to NEVER come down and to only require some (security) patches.

        After the reboot, the machine's been up for half a year (load average is about 0.9 so it's not completely quiet).

        I couldn't have trusted Windows for that. I know I'd rather be using Windows as a desktop (provided I have a *nix box nearby I can log in to), though GNOME is fine. But I am SURE I couldn't use Windows on our servers.

        Sure, .NET will force us to use Windows at some point and we'll have to comply (or suffer), but that's a monopoly "choice" we'll have to follow, as the governments are deep in the monopolists' money (free markets are no use for them... and consumers are just things that vote and that like to listen to lies... at least in my country).

        I hope we can keep using Linux for as long as possible.
    • I used to use windows for relatively simple things: Word processing, Web browsing (Opera), e-mail/usenet (Netscape), Music listening (WinAmp), and Music writing (Impulse Tracker under DOS).

      If my computer was on for more than about 30 hours, it would crash the second I would try to do something. If I was using my computer for a period of more than 10 hours, its lack of memory management would grind my entire system to a halt, to the point that the next time I would open up Opera, it would take approximately 4 minutes to load up.

      Granted, it wasn't a state-of-the-art computer, but it sucks that my processor efficiency was inversely proportional to the length of time my computer had been up and running, and that usually around a day or so after I had turned it on, the computer would decide that it couldn't do any more and crash.

      Now, I use Linux for pretty much everything that I used windows for. Word Processing, Web browsing (Opera), Mail/News (Mozilla), Playing music (XMMS) and Writing Music (Impulse Tracker 3). Furthermore, I'm hosting a webserver, ftp server and I'm looking to get an ssh server up and running soon. My box has not been rebooted for the last 15 days, and not a single thing has crashed, slowed down or showed any slight problem to do with doing the things I want it to.

      Just because your experience with Windows products has been relatively positive doesn't make your case the rule rather than the exception. I've heard of plenty of similar problems from many different people. My example is just one of many stories I've heard of people who have tried Windows and can't keep their boxes up long enough, despite doing only mundane, everyday things - nothing really resource-draining at all...
      • I have Win98 running on an older box on our home network serving as a glorified file server. It's just sitting at the password prompt. I've stripped off just about everything I can from the machine and still have it run Win98.

        And it'll lock up. Not as much lately, but that's probably because we haven't been accessing the hard drive on it as much lately. I keep meaning to find some ultra-basic OS to run on it but I haven't had the time or the energy.

        My new computer, which has Windows XP, is much, much more reliable, but I have to reboot it at least once a day - not because it locked up/BSOD'd (though I sometimes get that) but because stuff will stop working. Sound will become staticky in games (yes, I have updated drivers), and web browsing will stop working (both IE and Mozilla) after a while even though other net apps keep working fine. When I was still on dial-up, if I dialed in after disconnecting, the connection would simply not work until I rebooted.
    • Reliability is not an issue in the Windows based systems that I build.
      Maybe you don't push the boxes as hard as they do?
    • It didn't read like FUD to me. It was a simple statement of fact. I've seen Windows do the yo-yo thing, and I've seen it do a passable impression of BSD style uptime. Same for Linux. Who knows why it went down? Nothing is said about the cause of the issues. Perhaps the developers were from a *nix background, and thus did a better job on the Linux version because it was closer to what they were used to. Perhaps the Windows Boxen had a device driver conflict that no one resolved. Perhaps they had to hack together some custom driver and happened to stumble across an interface that was easier to code in Linux than Windows.

      Or then again they could have just made the mistake of applying W2K Service Pack 3, in which case they're hosed no matter what they do. That patch killed every box we tried it on. Stay away from Service Pack Three! Stay Away!!!!!!
      • > Or then again they could have just made the mistake of applying W2K Service Pack 3, in which case they're hosed no matter what they do. That patch killed every box we tried it on. Stay away from Service Pack Three! Stay Away!!!!!!

        I keep hearing this, but I haven't had any issues on the couple of machines I've installed it on....
        How did it kill the boxes? What were the symptoms? Is it only W2K Server that's a problem, or does it affect Pro as well?

        I'm interested to know, because I have avoided putting it on any important machines so far...
        • Haven't tried it on Server, just Pro.

          If EVERYONE had massive problems, Microsoft wouldn't have released the patch, as they would have had the same issues. I have a feeling this is going to be one of those feast or famine patches. It'll either work or cause massive pain.

          Problems I had at home:

          Frequent freezes (Let's leave the MS bashing aside for now)

          Spontaneous reboots.

          Inability to access the RAID controller on the I-Will Motherboard (System would pause for 20 seconds waiting for data from drive, and then give me a "drive not responding" error before corrupting the file)

          Frequent crashes of explorer.exe (Of the kind I only saw in crufty Win98 installs before)

          Frequent Blue Screens when connected to the Internet.

          Uninstalling at home fixed most of the problems, but I may need to do a full install and format to get my old stability back.

          Problems on my work computer:

          Forced to kill explorer.exe in the Task Manager to unfreeze system.

          Even after uninstalling SP 3, I still have Explorer crashing on shutdown, requiring about a dozen clicks to get through all the individual error messages that pop up.

          New memory leaks requiring at least a reboot a day, on a system that previously went a couple weeks between reboots.

          Problems our network admin had on a test system:

          Network connection issues.

          Mysterious freezes he didn't have time to track down.

          All in all, I'd say leave this one alone for a while. Three out of three installs had issues on systems that were running just fine before the install, and most of the problems went away after the Service Packs were uninstalled. (I suspect the uninstall is less than 100% effective.)

          Most of the issues seemed to center around network connectivity and the explorer.exe Windows shell. If you're running LiteStep as your shell and have no network connections, you should be fine. ;)

          All the issues I described were ones that did not exist before the Service Pack.

          If you don't need a specific fix in the Service Pack, I wouldn't install it.

          Good Luck.
    • Computer systems in an ASIC fab cannot be compared with monitoring PCs in a chemical plant.

      ASIC design and test files are huge, and the equipment interfaces for testers deal with very high data throughputs.

      Plus, at 4 reboots per year, that is an average uptime of less than 100 days; that would be really bad performance in the Unix world.

    • Reliability is not an issue in the Windows based systems that I build.

      For most of us, reliability is very much an issue in anything we build.
      • I guess I should have stated that a little more clearly. Reliability is certainly a design goal.

        What I was trying to say is that I have windows systems running which do not suffer from reliability problems. They meet the needs that they were intended to meet.

        I've found that in most manufacturing facilities, the mechanical equipment is the limiting factor in a line's uptime. Even a Windows system is more reliable :)

        I also take into account that various parts of the system will be down from time to time and try to ensure that it will not interrupt the process. I can buffer data in the embedded controls in case the PC needs to be rebooted or locks up, and I also have to assume that other parts of the system, such as the network, will go down as well.

        Over the last 2 years I have noticed that the PC related problems have been reduced greatly. Windows 2000 was a big improvement. I've also gotten better at figuring out what is causing problems and eliminating the problem.
    • The reboots are due more to external factors than the box needing it. Reliability is not an issue in the Windows based systems that I build.
      Well, it sounds like you build solid NT systems. I never doubted that it was possible. But.

      I often hear NT people claim that their boxes can be made as reliable and as secure as a box using any other platform, provided you do everything the right way. I actually believe this. The problem is, there's so much more that you have to do right. Every complexity, every hidden feature is an open invitation to Captain Murphy.

      Saying that a complex computer system is as reliable as a simple computer system, provided you take all the right steps -- that's like saying raw nitroglycerine is just as safe as Plastique, provided you don't drop it.

  • Monorail? (Score:2, Funny)

    by GuntherAEPi ( 254349 )
    "The setup resembles an intricate monorail system tuned to millimeter-precision specs."

    That's right, a monorail, just like the ones in Ogdenville, North Haverbrook and Brockway!
  • 1> Design the ultimate processor (or GPU)
    2> Park in parking lot
    3> Hack wireless infrastructure (will they turn on WEP?)
    4> Remove finished product from dumpster
    5> Party with your hardwarez

    SD

    Try uploading some AMD designs if you really want to mess with Intel.
  • AIX, I assume, would scale a hell of a lot better due to the combination of the superior hardware architecture of IBM's Unix machines and its more mature OS, which was designed from the ground up to handle many processors and carry very large loads. I assume a chip manufacturing plant uses a hell of a lot of computing resources, like a warehouse, which many Fortune 1000 companies still use mainframes for. A web server is different, and a lower-end machine with Linux might be appropriate unless it's a very big e-commerce site. This seems to be Linux's killer app right now.

    I think this continued recommending of Linux rather than AIX Unix might hurt IBM's Unix sales in the long run, like it has with Sun. A true Unix server is a hell of a lot more expensive than an Intel Linux one, which would hurt IBM's bottom line. At least, if I were in IBM's marketing department I would only recommend Linux for those on a budget or who have only modest computing needs, and AIX for anything else.

  • by garf ( 12900 )
    How about a community CPU? All interested parties gather around IBM's fab plant; as people hack through the wireless network, they get to add their own designed parts to the CPU...

    Hours later the wafer pops out...
