
Year of the Mainframe? Not Quite, Say Linux Grids 222

Posted by CowboyNeal
from the big-iron-not-so-big dept.
OSS_ilation writes "IBM touted 2006 as a resurgence year for the mainframe, but not so fast. At R.L. Polk and Co., one of the oldest automobile analytics firms in the U.S., an aging mainframe couldn't cut it, so the IT staff looked elsewhere. Their search led to a grid computing environment, specifically one running Linux on more than 120 Dell servers. The mainframe's still there, apparently, but after an internal comparison showed the Linux grid outperforming the mainframe by 70% with a 65% reduction in hardware costs, Polk seemed content to banish the big box to a dark, lonely corner for more menial tasks."
  • As an admittedly non-initiate in linux (I run osx), this seems very much what linux is
    good for, rather than for a desktop os, where difficulty of setup would be a severe
    handicap. I've always believed that open-source suffers from the in-house-tool
    mentality, which assumes the end user is extremely sophisticated. As an engineer,
    I can testify to my lack of desire to make the UI more than bare-bones.

    Maxim
    • Re:Linux Niche (Score:5, Insightful)

      by Alioth (221270) <no@spam> on Friday January 05, 2007 @08:43AM (#17471998) Journal
      The difficulty of desktop Linux is really a myth these days. I recently set up Fedora Core 6 on a laptop. Setting up FC6 as a desktop is now trivially easy. It roughly consisted of inserting a CD-ROM, booting it, clicking OK and Next a few times then feeding it disks until it finished.

      Installing extra software was equally trivial. There is a GUI, launched from the Applications menu, for installing more software. It downloads and installs the software all in one step. No need to download it separately, run an installer, or scroll through pages of impenetrable EULA.

      To add extra applications to this GUI application installer - mainly multimedia applications - all it required was clicking on a link on Livna's web page to add the Livna repository. (Like Mac OS X, you're asked for the administrative password on application install).

      Installing Fedora Core and extra applications and extra application repositories is actually easier than doing the same on Windows, and about the equivalent difficulty of doing the same on Mac OS X.

      For third-party applications, there is Autopackage: http://autopackage.org/ [autopackage.org] - which provides a distro-independent method of installing applications. It's reminiscent of things like the Mac OS X application installer (for apps you can't simply drag to the Applications folder) or the InstallShield types of installers for Windows. Except unlike InstallShield installers, it has the ability to resolve and fetch dependencies (ever tried to install Microsoft BizTalk? Complex and unwieldy because you must manually install several dependencies, each with their own dependencies. Autopackage does away with this dependency hell).
      • Re: (Score:3, Insightful)

        by PhotoGuy (189467)
        The difficulty of desktop Linux is really a myth these days. I recently set up Fedora Core 6 on a laptop. Setting up FC6 as a desktop is now trivially easy. It roughly consisted of inserting a CD-ROM, booting it, clicking OK and Next a few times then feeding it disks until it finished.

        And then you want to get your sound working on your newer laptop? Well, go find the brand new beta development source code for your driver and compile that up (oh yeah, install the compiler and dev kits first). Do I want

        • by jedidiah (1196)

          > And then you want to get your sound working on your newer laptop? Well,
          > go find the brand new beta development source code for your driver and

          Oddly enough, the last 3 laptops I've tried this on have been
          absolutely no trouble at all. 2 of them weren't even purchased
          with Linux in mind. They just managed to work "perfectly and out
          of the box" just out of sheer luck.

          > Okay, where do I set the wireless password? I know I saw that
          > somewhere before. Oh, the Dlink-chip-du-jour i
        • Re:Linux Niche (Score:4, Informative)

          by hanssprudel (323035) on Friday January 05, 2007 @11:08AM (#17473456)
          I just recently installed Ubuntu (Edgy Eft) on a brand new laptop. I found no previous testimonials or guides about the model I chose, but googling seemed to indicate that all the components had drivers. While I did have a couple of issues that made installation not quite as painless as the grandparent describes, your post severely understates how far Linux has gotten.

          And then you want to get your sound working on your newer laptop?

          Worked with ALSA out of the box.

          Okay, where do I set the wireless password? I know I saw that somewhere before.

          Using Network Manager, there is a wireless icon in the top right of the window with a list of accessible networks. Selecting an encrypted one brings up a prompt for a password (the first time you use it).

          Oh, the Dlink-chip-du-jour isn't supported out of the box, I have to go find some more development drivers for it, if I can.

          Unfortunately, some hardware manufacturers give no Linux support at all, but in fact almost all wireless adapters work. Go with Centrino, and you will be fine.

          Hmmmm, how do I suspend this and hibernate it properly?

          Both worked perfectly out of the box.

          Hmmm, where did my scrolling regions go on my trackpad?

          They were enabled and working out of the box.

          Now, time for a presentation; install openoffice, that works fine, good. Okay, now to switch to external monitor. Hmmm, Fn-Monitor doesn't work.

          The hotkey for switching to external monitor worked out of the box, with all three modes (internal, external, both) working.

          To this I can add (in response to others) that both my iPod and my Camera worked straight out of the box, as did Internet access over my bluetooth phone. The only thing I have run into which didn't work was an HP scanner - it turns out that scanners are a real quagmire with no uniform drivers and that HP give lousy support, a little Googling told me this and that an Epson would have worked...
        • Yes, I remember ALSA hell, which is especially bad if your hardware is relatively new... Once I had to "buy" OSS drivers for a desktop as I could not get the recommended ALSA drivers to work. For wireless, though, the easiest method is to just connect the desktop/laptop to a wireless bridge via ethernet.

          After that, it's finding the suitable players, codecs, etc., so you can listen / watch streaming audio/video. Then comes the installation of 3D graphics drivers which usually also needs to be re-installed

        • by Alioth (221270)
          Even on newer laptops, I just put the Fedora Core disks in and it just works.

          Note that if you install Windows XP on a new laptop, you have exactly the same problems - except not even the onboard ethernet NIC is supported, so you have trouble even downloading drivers to support your wireless, video, sound and chipset.

          Most people are insulated from installing Windows because it comes pre-installed. XP is generally *harder* to install from scratch and get working than a good Linux distro. I've never had to man
      • Photoshop? (Score:3, Insightful)

        by amyhughes (569088)
        How easy is it to install Photoshop on Linux? MS Office? iTunes? Logic? Vienna Symphonic Instruments?

        Okay, so if I don't want to use the most popular online music store, never google for a tutorial on how to accomplish ___ with my graphics tools, don't like books, and don't need to exchange files with people who work for a living, there's always GIMP, OO and some programmerware media app I could use, and why would I want to compose music for orchestra on my computer?
      • Re: (Score:3, Insightful)

        by Mung Victim (821757)
        The difficulty of desktop Linux is really a myth these days

        Yeah, bollocks is it.

        It's a myth until you want to use an iPod or a digital camera, surely two of the most popular consumer devices today after mobile phones. I have tried and failed to get both working on my desktop Linux system. If I can't do it, there's no way my Mum could. In the end I just bought a MacBook, and put my Linux machine in a cupboard.

        Yes, I know that both of these things can be made to work, but honestly, most people just
        • by swillden (191260) *

          It's a myth until you want to use an iPod or a digital camera, surely two of the most popular consumer devices today after mobile phones. I have tried and failed to get both working on my desktop Linux system. If I can't do it, there's no way my Mum could. In the end I just bought a MacBook, and put my Linux machine in a cupboard.

          For your iPod, use Amarok. It works very nicely with iPods, as well as being one of the best music players on any platform.

          As for your digital camera, well, every one I've ever tried just worked, but apparently you have an obscure one that doesn't. Someday Linux will get popular enough that hardware vendors support it, but until then there's some pain that's simply unavoidable, particularly when vendors refuse to follow the established standards (for cameras, those are PTP and USB storage).

        • Re: (Score:2, Interesting)

          by kfg (145172)
          It's a myth until you want to use an iPod or a digital camera. . .

          Why didn't you purchase a music player/camera that handles files as it should; as a mass storage device?

          Don't get me wrong, I understand your point, and even agree with it to an extent, but I have a valid point too. The root issue is really bad commercial interests combined with bad consumerism.

          On the flip side, and a better example I think, I am in the process of setting up a small recording studio. I have my choice of going computer based,
          • by dhasenan (758719)
            He has no point, actually. For iPods, there's gtkpod and a few other utilities. Amarok and Rhythmbox (default music apps for KDE and GNOME respectively) have some support for iPods, as well.
        • by Alioth (221270)
          I don't have an iPod so I can't comment on that, but I just plug my digital camera into my Fedora Core workstation and it appears. It couldn't be simpler. Visitors who've shown up with cameras - I plug it in, and there it is. I've found cameras to be utterly trouble free.
        • Any decent digital camera works like this:

          Plug it in with a USB cable, or put the card in a card reader and plug the card reader in with a USB cable.

          My four year old daughter can do it.

          My cheap and nasty music player works like this:

          Remove cover, plug it into the USB port.

          My four year old daughter can do that too.

          From other people's comments, an iPod works just as well as my unbranded piece of cheap junk.

          Having read the parent, does anyone who has moderated recently regret rating this comment [slashdot.org]

      • by Builder (103701)
        Yay you! You set up a desktop that will be obsolete and unsupported in 12 months. Is this really something we should be encouraging users to do?

        12 months from today you'll either have to futz with your setup (something most users won't want to do) or stop receiving patches and updates.

        This is one of the main reasons I'm moving my servers to Solaris.
      • No kidding. Anyone who complains that desktop Linux is too difficult hasn't tried Knoppix or Ubuntu (for example) recently.

        Case in point: I was recently installing Win2K (I despise XP for a number of reasons, none of which are relevant here) on a new, relatively high-end computer for my wife, and found that Windows didn't detect a good portion of her hardware (graphics card, sound card, etc.). My 2-3 year old Knoppix CD, on the other hand, had no problem detecting and setting up all of the hard
    • Re: (Score:3, Insightful)

      by William_Lee (834197)
      As an admittedly non-initiate in linux (I run osx), this seems very much what linux is good for, rather than for a desktop os, where difficulty of setup would be a severe handicap.

      You should really try looking at a modern linux distro before making a blanket statement about the difficulty of setup for a desktop machine. I've installed Ubuntu and OpenSUSE at home recently, and as long as the hardware matches up ok (which it often times does, at least on desktops), there is little manual configuration to
      • I agree with you, except for the GP's comment about wireless. Linux and wireless just don't work together, unless you're lucky enough to have one of the few well-supported chipsets (thankfully some of the ones used in laptops are OK, e.g. Centrino). But if you go down to Best Buy and pick up a random Wifi card and expect it to work, welcome to the house of pain. If you're lucky, there'll be a driver for it (like the acx100/111 series), but you'll need to find and download the right firmware...there's no "plu
    • by canuck57 (662392)

      I've always believed that open-source suffers from the in-house-tool mentality, which assumes the end user is extremely sophisticated.

      You really should try all three of Red Hat, Suse and Ubuntu. Pick one, they are getting to be quite comparable to Windows on the desktop and certainly more secure and stable.

      But more to the original post. Imagine if a corporation ever got their collective butts out of the FUD and had everyone use the same version of Linux and made all workstations part of a giant grid.

  • by Ksempac (934247) on Friday January 05, 2007 @08:45AM (#17472020)
    So a NEW system outperforms an OLD system. I fail to see how this is news.

    If they had compared a NEW mainframe with the NEW grid, then we would have been able to draw some conclusions about which one is better. But saying "We bought a new system, it's better than the old one" proves nothing.
    • The consulting group, or whoever spun up the new project, wanted a particular result, so they aimed for it.

      Most likely they didn't know how to program the mainframe to get the results they wanted but they did know how to use the solution they came up with

      or

      they knew how to do the mainframe side to the fullest potential of the machine but that wasn't cool enough so they redefined what good results were.
      • Still, it's got to say something for the mainframe if 120 new Dell servers, running as a grid, offer only a 70% performance improvement.
        • by kv9 (697238)

          Still, it's got to say something for the mainframe if 120 new Dell servers, running as a grid, offer only a 70% performance improvement.

          it says it's 65% cheaper. how's that?

          • by TheLink (130905)
            How old is the mainframe? The first mention of that model I see is at least in early 2002.

            How easy is it to get a new server that's 70% faster than a 2002 server and 65% cheaper (were they using 2002 prices?)?

            Of course it's likely that IBM wants to charge people _high_prices_ every year for using a mainframe, and if that's still the case, then I wouldn't recommend using a mainframe - they aren't fast in processing (they never were; they just usually had more IO) and aren't even that reliable compared to other te
    • by Ingolfke (515826) on Friday January 05, 2007 @09:01AM (#17472124) Journal
      I agree that this isn't a good comparison of grid computing against modern mainframes... but I think that's more the fault of the post, not the article. I thought the article was still interesting though. It was interesting to learn a bit more about grid computing in a specific implementation and to see that companies are choosing alternatives to mainframes for massive processing tasks.
    • Re: (Score:3, Interesting)

      by scdeimos (632778)
      I agree. I'd be very disappointed if a 118-CPU RHEL Grid computer system with probably more than 200GB of RAM couldn't out-perform a 2-CPU system with 16GB running OS/390. (The IBM 2066-002 in its standard config only has 2GB I think.) Although I'm a little disappointed that it's only out-performing it by 70% (maybe they're using 4,200rpm 2.5" drives):
      Internal tests have showed speed improvements in data-file processing of up to 70% over what the mainframe could provide.
      • by spookymonster (238226) on Friday January 05, 2007 @11:35AM (#17473948)
        We still have a 2066 in our shop. According to my power charts, the 2066 rates approximately 77 MIPS. If the Dells are giving a 70% performance increase, that means roughly 130 MIPS, or 1.1 MIPS per server.

        In comparison, our standard model mainframe (a 2084) kicks up about 1600 MIPS. Assuming the performance numbers for the Dell grid were to scale (the safe money says they don't), that translates into almost 1450 Dells. Keep in mind, that's not even a top of the line mainframe...

        Let's not even start on hardware maintenance (which would you rather do: hot swap a power supply on 1 system, or 25?), network overhead, shared DASD, coupling facilities and RRS (think: Beowulf clusters).
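The parent's arithmetic holds up as a rough estimate. A quick sanity check, using only the figures quoted in the comment above (the MIPS ratings come from the poster's power charts, not from IBM documentation) and the linear scaling the poster rightly doubts:

```python
# All figures are taken from the comment above; linear scaling is assumed.
mainframe_mips = 77                 # IBM 2066-002, per the poster's power charts
grid_mips = mainframe_mips * 1.70   # 70% faster => ~131 MIPS for the Dell grid
per_server = grid_mips / 120        # ~1.1 MIPS per Dell server
dells_for_2084 = 1600 / per_server  # servers needed to match a 2084's ~1600 MIPS

print(round(grid_mips), round(per_server, 2), round(dells_for_2084))
```

The last figure lands near 1450-1470 depending on rounding, consistent with the poster's "almost 1450 Dells".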
    • by Archtech (159117) on Friday January 05, 2007 @09:59AM (#17472518)
      Yes, you have put your finger on the glaring weakness in this story. Once you see that it was an OLD mainframe versus a PRESENT-DAY Linux grid, you realise that no useful conclusions can be drawn. (Although, as others have noted, the narrowness of the margin achieved suggests that the mainframe would win easily in a fair contest).

      These "old-versus-new" comparisons are the stock-in-trade of marketing and PR departments, which are perpetually issuing press releases bragging that the latest Foowhatzit Humdinger 24-processor with thousands of GB of storage outperformed someone's 10-year old VAX or AS/400. To Slashdotters, that's a subdued "Wow!" (that they would attempt such barefaced trickery, that is) and on to something potentially interesting. But to the broad masses who know nothing about computers, it is quite impressive. PHB readers habitually skip over all the "techie details" anyway, so they probably come away with the desired message: "We need Foowhatzit Humdingers, and we need 'em now!"

      People with arts degrees are big on quoting Mies van der Rohe's "God is in the details". Perhaps it's time they realised that "God is in the numbers" too.
      • Re: (Score:3, Interesting)

        by R2.0 (532027)
        It's not just in IT - people in ALL industries want "new and shiny" over "old and boring".

        I recently had a request to install a new type of medical irradiator (products, not people) in lieu of an older model. The new one doesn't use a radioactive source, and instead uses xray tubes. It was the cat's ass - no radiation safety officer required, no NRC hassles, and another part of the company did an ROI and the results were great. But when I looked at the specs, the cycle time was slower, it had 1/2 the ca
      • it was an OLD mainframe versus a PRESENT-DAY Linux grid

        The 2066-002 was released in 2002, it's hardly an "old" mainframe. I think their biggest advantage was in getting rid of everything they had, and starting from scratch. They could have done this on the mainframe too, and probably would have seen similar gains. From the description of their job load, it sounds like a typical data-processing environment (take huge amounts of raw data, sort/filter/categorize and store it), which is what mainframes were designed for. I'll bet they could have just written a

      • by k12linux (627320)

        PHB readers habitually skip over all the "techie details" anyway, so they probably come away with the desired message: "We need Foowhatzit Humdingers, and we need 'em now!"

        Even simpler... PHB: "This is almost 2x faster for under 1/2 the price? Buy it!" It happens more in IT I think because, face it, when is the last time a non-tech item doubled its performance for half price in only a few years? You can bet the robotic welding arm the PHB bought recently doesn't have a 70% performance gain at 65% cost

    • by arivanov (12034) on Friday January 05, 2007 @10:08AM (#17472608) Homepage
      Besides, performance has never been the strong point of a mainframe. In fact most mainframes' performance is laughable (a while ago IBM had to ask Seti@Home to remove the results for the early Z series because they were comparable with a 386SX). The primary selling points of a mainframe are resource control and reliability.

      Does the grid mentioned in the article offer the same level of PHB friendly resource control (CPU, IO, etc) for multiple concurrently running applications? Doubt it.

      Does the grid mentioned in the article offer the same level of reliability and reproducibility of the result? I have some doubts. Most mainframes have 2+ CPUs doing the same task and either flagging a fault on differences or deciding who is right using a "voting" system. This is done on a per instruction basis and cannot be directly simulated in a grid. At best you can do per-task/procedure result comparison which is not the same as it will flag errors considerably later and has higher probability of overall error when using the same number of components.

      Someone is either comparing apples and oranges, or being a fanboy, or not knowing what a mainframe is for, or all of these at the same time.
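The per-task comparison described above only catches faults after a whole task completes. A minimal sketch of that idea (purely illustrative; the function name and structure are invented here, not taken from any real grid scheduler):

```python
from collections import Counter

def vote(results):
    """Majority-vote over redundant copies of one task's result.

    Returns (winner, fault): fault is True whenever any replica disagreed.
    Unlike a mainframe's per-instruction lockstep check, the disagreement
    is only detected after the entire task has finished running.
    """
    winner, count = Counter(results).most_common(1)[0]
    return winner, count < len(results)

# Three replicas ran the same task; one produced a corrupted result.
value, fault = vote([42, 42, 41])   # => (42, True): fault flagged, but late
```

This also illustrates the cost argument: catching one bad result required running the whole task three times, where the mainframe pays for redundancy at the instruction level instead.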
  • So a brand new grid beat out a 20 year old mainframe. At a computationally-intensive task. I'm shocked.
  • by ggruschow (78300) on Friday January 05, 2007 @08:54AM (#17472072)
    The mainframe is many years old and they only managed to beat it by up to 70% with 120 machines? Either that thing is awesome or they suck with their grid.
    • Re: (Score:3, Insightful)

      by Weedlekin (836313)
      The fact that they mention RedHat's ownership of JBoss as having been the deciding factor in their OS selection points to this being a cluster running distributed Enterprise Java Beans, which means it will probably compare poorly in terms of efficiency with their old mainframe applications that were likely written in heavily optimised FORTRAN, which would account for their having to feed data to it in batches. This together with an observed inability on the part of EJB programmers (as distinct from other ty
  • by r00t (33219) on Friday January 05, 2007 @08:55AM (#17472080) Journal
    You use the mainframe when you want error recovery at every step of the way. One of them even runs two CPU pipelines in lockstep so that a failing CPU can be safely isolated without crashing the app that was running on it.

    The mainframe also gives you nice IO and super-efficient virtualization.

    Workload doesn't need all that? Gee, maybe it's not a workload for the mainframe.
    • by mwvdlee (775178) on Friday January 05, 2007 @09:12AM (#17472194) Homepage
      You are thinking of the old Tandem machines, I think they're called Himalaya now, or whatever. Those are failsafe machines which are supposed to have zero downtime on hardware problems.

      The Mainframe discussed in the topic is an IBM one, most likely a predecessor of the current zSeries machines (OS/390).

      So Linux beat it. I guess they just had tasks which weren't fit for large scale processing behemoths like mainframes anyway. I dare bet the Linux grid would be a lot slower if it had to batch process a few hundred MB worth of data. And despite all the claims about Linux stability, mainframes boast far superior uptime (a few minutes of scheduled downtime a year and no unscheduled downtime; everything can be hotswapped, including CPUs and memory). Although the increase of real-time processing decreases the need for mainframes a bit, the ever increasing processing load still makes them invaluable to large companies.
      • Re: (Score:3, Informative)

        by Ken Hall (40554)
        These features have been in the IBM mainframes for 15 years. I haven't seen a hardware failure take down a zSeries box in over ten.

        On a somewhat related note, I wonder how much more floor space those 200 servers take up, and how much cooling they require, compared to an IBM z9. It's about the size of a large refrigerator. Unless they're using blades, we're talking maybe 10x the floor space.
        • by dpilot (134227)
          Not to mention the admin time. Tools can automate the software admin for 200 boxes to a good extent, but you're also talking about 200 boxes of commodity-class hardware. The quality standards are lower than mainframe-class hardware, and you've got enough pieces that mtbf starts to factor in. I've heard that part of google's value is that they keep running with a goodly number of dead boxes in the cluster, just to reduce the physical admin load.
        • by Xzzy (111297)
          we're talking maybe 10x the floor space.

          You can fit 200 1U machines into 5 racks. According to TFA these guys have 49 4U machines in their production grid. Still comes in at 5 racks, so cut your estimate in half.

          They do belch out a lot of heat, but a standard server room A/C unit should be able to handle it.. assuming a bunch of other stuff isn't already putting a load on it.
      • IBM zSeries also have two execution units in each processor unit which execute in lockstep. If results are different the processor repeats the execution. If failure continues the processor will defer the instructions to another processor unit and disable the failing processor unit. This reliability, superior I/O throughput, and a tried-and-tested system is the advantage of the mainframe.
      • by afabbro (33948)
        You are thinking of the old Tandem machines, I think they're called Himalaya now, or whatever. Those are failsafe machines which are supposed to have zero downtime on hardware problems. The Mainframe discussed in the topic is an IBM one, most likely a predecessor of the current zSeries machines (OS/390).

          No, he's thinking of standard z-series IBM mainframes. They behave as he described. HP's NonStop (prev. Tandem) is just a different operating environment with somewhat different HA characteristics.

        • by mwvdlee (775178)
          AFAIK, zSeries will detect problems in the hardware, then move the load to other hardware when it detects a problem, dumping the process that was using the hardware during the problem. NonStops are supposed to keep the process running on the redundant hardware so real-time transactions should never suffer from hardware failure. It's quite rare to find an application which requires that level of reliability, though, especially considering the cost.
      • by rasilon (18267)
        Tandem got bought by Compaq and became "Compaq Himalaya", which was then bought by HP, and is now "HP NonStop". But IBM do the lockstep processors too, as do a couple of other companies.
  • by gelfling (6534) on Friday January 05, 2007 @08:55AM (#17472086) Homepage Journal
    a 2066-002 is midway up the 'Baby Freeway' z800 mainframe line. It has 2 CPs and benchmarks at 1.0-1.2x the performance of a 9672-R36, itself a 4-5 year old model in the middle of the pack.
    • Not only is it an apples and oranges comparison between an old mainframe and a new custom-built grid, but the software was completely different.

      According to this the original software was probably poorly designed:
      > the mainframe took on the persona of a lumbering behemoth. This was especially the case when the IT staff had to accommodate new
      > business requirements such as a car dealership adding a new type of vehicle to its inventory. Each update required a
      > major rework of the program
      Hmmm, massive
  • by jimstapleton (999106) on Friday January 05, 2007 @08:59AM (#17472114) Journal
    what we need is "multiframes"

    Consider a virtual operating system that can run on one or more other operating systems. This operating system is actually a set of nodes, one node per machine (or one node per CPU), with command nodes and worker nodes.

    Command nodes distribute the workload and exist for redundancy. If one goes down, all the others have a backup of its data and state, and the next most senior node takes over.

    Worker nodes then take the tasks and interface with the users via a standard shell.

    Files can be distributed amongst the nodes for speed and redundancy, and if a node that needs a file doesn't have it, it can request the file and temporarily have it locally. Each node will have a list of what files exist, and where they exist.

    UI tasks are written to run solely on the machine of the user, but data crunching tasks are written to be split between nodes.

    Thus, a person just goes to his or her machine, and interacts with it like a normal machine, except, rather than having a logon for his machine, he or she will have a logon for the multiframe.

    Also, because of this setup, a multiframe could work on top of multiple operating systems (say an office that is 50% Windows for the normal users, 35% Linux for the devs, 10% FreeBSD for other devs, and 5% HPUX/Sun for some servers), and all machines could contribute to the multiframe.

    The multiframe could also have recorded statistics of uptimes and drops for various nodes, performance statistics for load balancing, etc.

    The caveat to this system is that it would need some pretty heavy networking, even if optimised, and there could be latency issues. Still, I like this idea better than a mainframe.
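The command-node failover rule described above can be sketched in a few lines. This is a hypothetical illustration of the seniority idea only; the class and function names are invented:

```python
# Hypothetical sketch of the "multiframe" idea: command nodes replicate
# each other's state, and when the senior one drops out, the next most
# senior live node takes over.  Invented for illustration only.

class CommandNode:
    def __init__(self, seniority):
        self.seniority = seniority
        self.alive = True
        self.state = {}            # replicated backup of workload and state

def elect_leader(nodes):
    """The most senior live command node becomes the leader."""
    live = [n for n in nodes if n.alive]
    return max(live, key=lambda n: n.seniority) if live else None

nodes = [CommandNode(s) for s in (1, 2, 3)]
assert elect_leader(nodes).seniority == 3
nodes[2].alive = False                       # senior command node fails...
assert elect_leader(nodes).seniority == 2    # ...next most senior takes over
```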
    • by Ken Hall (40554)
      This is sort of what we've been looking at for a while. We have a Linux grid, and there's a project now to hook our mainframe, running Linux, into it. The work units are all Java, so the biggest headache has been to get the vendors to port the parts of the management software that are in C.
      • So instead of an OS/hardware lock, you have an application-environment lock (Java). My thought is to get rid of even that.
    • The caveat to this system is that it would need some pretty heavy networking, even if optimised, and there could be latency issues. Still, I like this idea better than a mainframe.

      And this caveat kills the deal. The problem has always been that networks simply can't compete with the throughput of native devices. Consider this:

      • Mainframe: 255 ESCON channels with 16MB/s (that's 128 Mbit/s) bandwidth each. Aggregate IO bandwidth: 4.08 GB/s, sustained transfer rate.
      • PC: Ethernet - Even if you're lucky e
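Putting rough numbers on that gap (the ESCON figures come from the list above; the 125 MB/s Gigabit Ethernet rate is my own era-appropriate assumption for comparison):

```python
channels = 255
escon_mb_s = 16                          # per ESCON channel: 16 MB/s ~= 128 Mbit/s
aggregate_mb_s = channels * escon_mb_s   # 4080 MB/s ~= 4.08 GB/s sustained

gige_mb_s = 1000 / 8                     # one Gigabit Ethernet link, theoretical max
links_needed = aggregate_mb_s / gige_mb_s

print(aggregate_mb_s, round(links_needed, 1))   # 4080 32.6
```

In other words, matching the mainframe's sustained channel bandwidth would take over thirty fully saturated gigabit links, before accounting for Ethernet protocol overhead.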
  • by 16Chapel (998683) on Friday January 05, 2007 @09:06AM (#17472154)
    It sounds like a Linux grid is an excellent solution here - however, it also sounds like their software is not exactly performing perfectly:

    This was especially the case when the IT staff had to accommodate new business requirements such as a car dealership adding a new type of vehicle to its inventory. Each update required a major rework of the program

    Really?

    Frankly that sounds like the software is in severe need of reworking! If their machines are 20 years old that's bad enough, but if they have 20 year-old software that needs to be rewritten every time a new type of car is added, it's time for a redesign.
    • Good lord, I have to agree. That's inexcusably crappy design for thirty years ago, much less now. No damn wonder they 'beat' the old machine. They really beat the old, crappy coding.

      Wonder if they would have done as well against a well-designed application?
    • >software that needs to be rewritten every time a new type of car is added

      You call it poor design; they call it... job security?

    • by geekoid (135745)
      There is nothing wrong with a 20 year old mainframe.
      It is very clear that their applications were crappy, and that the 'mainframe' was blamed. I strongly suspect a rewrite of the apps for that same mainframe would have yielded greater increases in speed.

      Also, they replaced it with a cluster not a grid.

      We use a 30-year-old mainframe to do millions of transactions daily, with no problems.

      I am a fan of clusters and grids, but this is an example of people changing technology out of ignorance, not any technical need.
  • by dbneeley (1043856) on Friday January 05, 2007 @09:16AM (#17472214)
    As others have pointed out, the comment left a great deal out.

    For example, any mainframe that can be replaced by 120 PC compute nodes isn't well utilized and/or is completely outmoded.

    I had a chat with a gentleman once who participated in a replacement of multiple PC servers with a mainframe--but it entailed replacing 7,000 servers with a relatively high-end machine.

    The result was that power and real estate savings alone paid for the mainframe--which had more capacity for future expansion as needed.

    As always, proper implementation of the right equipment for the job is crucial--and a shallow analysis that doesn't cover all the variables is misleading at best.

  • "IBM touted 2006 as a resurgence year for the mainframe, but not so fast."

    IBM are also heavily investing in Grids, particularly with their support of the Globus Alliance Toolkit (see http://www.globus.org/ [globus.org])

    Crazy? No, they are aiming at different targets. Mainframes are controlled by individual companies, grids are hoped to eventually be the equivalent of TCP - ubiquitous, reliable and cheaply available everywhere. That means your next Windows Vista T1000, Ubuntu Beam-me-up (TM) and self-aware toaster w

  • It sounds to me like a mainframe is still probably the best fit for this organization. Few solutions can match the efficiency, streamlined-goodness of an IBM mainframe. Where I work, a city government, we run two fairly beefy iSeries (AS/400s), one that runs accounting, utility billing and operation, and income tax operations, and another that runs public safety operations. I love them. No down time - ours are brought down about once a year, and usually that is because the power is out and our generator
  • Forgive me if I'm not up on the latest jargon, but what's the difference between a grid and a cluster?
    • From these guys' results, it sounds like a grid is a badly implemented computational cluster. You also get redundant clusters and load balancing clusters.
    • by Builder (103701)
      In the areas I've worked, we usually use cluster to mean a number of machines that share the load of a task in a way that allows for failover. We normally put a lot of thought into the design of the cluster and all machines do one task.

      A big plus side to clusters is the fact that each machine in the cluster normally knows enough about its role to operate in the absence of all of the others.

      I've only used grids for computationally intensive tasks. Failover and recovery is something that we just got for free
    • by krz99 (842286)
      Nowadays, in the CS research community, the most widely used definition of a grid is A Three Point Checklist [anl.gov] by Ian Foster, stating that:
      1. There's no central control over resources.
      2. System uses open standards.
      3. System provides non-trivial quality of service.

      Here, at least the first point is not fulfilled. So yes, they've built a cluster. A cluster like hundreds of others, used since the early 90s. It's 2007, isn't it? I'm impressed!
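      Reduced to code (a toy sketch of my own, not anything from Foster's paper), the checklist is just a three-way conjunction, and by this reading the Polk setup fails the first test:

```python
def is_grid(decentralized_control: bool, open_standards: bool,
            nontrivial_qos: bool) -> bool:
    """Ian Foster's three-point checklist: a system is a grid only
    if all three criteria hold."""
    return decentralized_control and open_standards and nontrivial_qos

# Polk's 120 servers sit under one company's central management,
# so the first criterion fails regardless of the other two.
print(is_grid(decentralized_control=False,
              open_standards=True,
              nontrivial_qos=True))    # False -- it's a cluster
```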

  • ... a long time ago, in a galaxy ...

    I was a vendor SE who had occasion to visit R. L. Polk. There are customers who are "bleeding edge" customers, always looking for ways that the latest and greatest technology can give them an advantage in their business operations, and there are customers who are "junkyard" customers, who see everything as an expense, and only have the cheapest, oldest junk on the floors in their data centers.

    Cost is the only metric for such customers, of whom R. L. Polk was one such (a
  • Not A Big Deal (Score:3, Informative)

    by FJ (18034) on Friday January 05, 2007 @12:00PM (#17474344)
    It really depends on what your workload is and what you are trying to accomplish. I've seen Linux on the mainframe be a horrible thing and I've seen it be a pretty cool thing that worked wonderfully. If you are trying to do heavy math processing on a mainframe then it probably won't get you the bang for your money. On the other hand, heavy IO will probably work very well. You also get the benefit of being able to run hundreds (or even thousands) of Linux guests on one single server. That conserves physical space, electricity, software license costs, and the hardware is extremely reliable (which is part of the reason it is so expensive). It also makes disaster recovery much more straight forward.

    Even IBM will tell you that there are some applications that you should not run on a zSeries processor. I've been in meetings where IBM has said that some types of workload will not perform well on a zSeries processor and you should consider Intel or some other platform.

    There is no "one size fits all". Anyone who says there is "one size" is probably selling something.
  • From the article:
    And what of that old mainframe? It's still around, but Isiminger wouldn't say exactly what it was up to. It operates in a "reduced capacity," he said.

    LOL... anyone else think they just created a HTTP server farm to frontend the data, using WebSphere, MQSeries, and/or DB2 Universal as the backend (all still running on the mainframe)?
  • This is news? (Score:3, Insightful)

    by twbecker (315312) on Friday January 05, 2007 @12:45PM (#17475074)
    Comparing a Linux grid system with a mainframe is comparing apples and oranges. The mainframe's strength has never been raw computing power. Mainframes have practically zero downtime and massive I/O capabilities. If you can swap a Linux array in for a mainframe and have results this good, you were using the mainframe for a task to which it wasn't suited to begin with.
  • I'd like to see what the difference there is.
  • by SecurityGuy (217807) on Friday January 05, 2007 @01:05PM (#17475452)
    I know Grid is the buzzword of the day, but this isn't a grid. It's a cluster, or perhaps a Beowulf, but it is not a grid. Buying a bunch of identical boxes and installing identical software on them doesn't make a grid.

    One of the key features of a grid is that it "coordinates resources that are not subject to centralized control". (What Is The Grid [anl.gov], Ian Foster, ANL). Grids by definition cross organizational or management boundaries. You can't buy a grid any more than you can buy an Internet. You can buy a network. You can buy a cluster. You can't buy a grid.
  • No real substance.

    Could even be a failure being spun to look like a success. I mean, look at this:

    "And what of that old mainframe? It's still around, but Isiminger wouldn't say exactly what it was up to. It operates in a "reduced capacity," he said"

    "reduced capacity", that smells like BS talk. Does that mean it's still doing some of its old tasks, or most of its old tasks?

    Of course they might not want to get rid of the mainframe that they paid so much money for (though there is a market for 2nd hand mainfram
  • It sounds like R.L. Polk had the wrong architecture for the job. It doesn't sound like they dealt with lots of transactions. It sounds like they were doing some analytics on a large database (I could be wrong about this, but that's the impression I got from the article). If the latter is the case, they need large quantities of bulk processing power. That's where clusters are much better. I'm surprised, however, that it takes fewer people to administer the Linux boxes than the mainframe. They must have
