Why OldTech Keeps Kicking

Hugh Pickens writes "In 1991 Stewart Alsop, the editor of InfoWorld, predicted that the last mainframe computer would be unplugged by 1996. Just last month, IBM introduced the latest version of its mainframe, and technologies from the golden age of big-box computing continue to be vital components in modern infrastructure. The New York Times explores why old technology is still around, using radio and the mainframe as perfect examples. 'The mainframe is the classic survivor technology, and it owes its longevity to sound business decisions. I.B.M. overhauled the insides of the mainframe, using low-cost microprocessors as the computing engine. The company invested in and updated the mainframe software, so that banks, corporations and government agencies could still rely on the mainframe as the rock-solid reliable and secure computer for vital transactions and data, while allowing it to take on new chores like running Web-based programs.'"
  • by Sloppy ( 14984 ) on Wednesday March 26, 2008 @02:09PM (#22871770) Homepage Journal

    I.B.M. overhauled the insides of the mainframe
    Uh, did they replace the insides with something old, or something new? Duuhhh.
    • by 427_ci_505 ( 1009677 ) on Wednesday March 26, 2008 @02:12PM (#22871812)
      It might be new tech, but the mainframe is still an old concept.

      ...Duuhhh?
      • by omeomi ( 675045 ) on Wednesday March 26, 2008 @02:18PM (#22871880) Homepage
        As is the radio. I'll never understand why people think Television should have killed off the radio. Radio is still around for one major reason: It's hard (and usually illegal) to watch TV while driving. If anything is going to kill radio, it's the advent of the podcast, which in a lot of ways is close enough to the function of radio to be a real threat.
        • by ericspinder ( 146776 ) on Wednesday March 26, 2008 @02:28PM (#22872024) Journal

          As is the radio. I'll never understand why people think Television should have killed off the radio.

          A better analogy would be to see mainframes as movie theaters, and PCs as televisions.

          • Re: (Score:3, Funny)

            by Artuir ( 1226648 )
            Being American, I require all of my analogies to be in Libraries of Congress vs. NASCAR track time (as others before me have likely stated). Thanks in advance!
        • Re: (Score:3, Insightful)

          by Grave ( 8234 )
          What car do you have where, by merely pressing a few buttons (or turning some knobs), you can listen to podcasts without any extra technology? The beauty of radio is that it is always there, and it's always updating (ignoring the repetitive nature of music these days). World War III starts, your radio will tell you (unless you're dead already). Natural disaster or severe weather happens, your radio will tell you. Podcasts can't do that.

          Radio may some day transform from the traditional AM/FM we've come to
          • by omeomi ( 675045 ) on Wednesday March 26, 2008 @03:13PM (#22872596) Homepage
            What car do you have where, by merely pressing a few buttons (or turning some knobs), you can listen to podcasts without any extra technology?

            I don't know if it exists yet or not, but it can't be too far off. I can already download podcasts to my iTouch directly over wifi. I would imagine it wouldn't be too hard to make a car radio that did the same thing. You could even make it detect when it's entered a location with a wifi connection, such as the garage, and start downloading new episodes.

            Of course, some lame-ass company is probably going to patent this idea, and we'll have to wait until the stupid patent expires before we can actually use it...
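            For what it's worth, the "download in the garage" logic above would only take a few lines. A toy Python sketch -- the SSID helper is a stub and the feed URL is made up, so treat it as an illustration of the idea, not a product:

              import urllib.request
              import xml.etree.ElementTree as ET

              HOME_SSID = "my-garage-wifi"                 # hypothetical home network
              FEED_URL = "http://example.com/podcast.rss"  # hypothetical feed

              def current_ssid():
                  # Stub: a real head unit would ask its wifi chipset for this.
                  return "my-garage-wifi"

              def new_episode_urls(already_have):
                  # Parse the RSS feed and yield enclosure URLs we don't have yet.
                  with urllib.request.urlopen(FEED_URL) as resp:
                      tree = ET.parse(resp)
                  for enc in tree.iter("enclosure"):
                      url = enc.get("url")
                      if url and url not in already_have:
                          yield url

              if current_ssid() == HOME_SSID:              # parked in the garage?
                  for url in new_episode_urls(already_have=set()):
                      print("would download:", url)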
    • by langelgjm ( 860756 ) on Wednesday March 26, 2008 @02:12PM (#22871820) Journal
      I think the point is that the idea of the mainframe is old, and many of the naysayers predicted that once smaller computers became affordable, they would replace the centralized mainframe model.
      • by MozeeToby ( 1163751 ) on Wednesday March 26, 2008 @02:33PM (#22872070)
        So shouldn't the article be about how poor our prediction skills are rather than about how we cling to old tech? In the mainframe's case, we cling to it because the concept was updated and still represents the most economically efficient solution to the problem.

        The article may as well be asking "Why do personal automobiles keep kicking?". Because they work, and they still solve the problems that they are meant to solve. And when a new problem crops up (fuel prices/pollution), the solution isn't to get rid of the car, it is to redesign it to address the new concerns; just like IBM and other companies did with mainframes.
        • by Darinbob ( 1142669 ) on Wednesday March 26, 2008 @06:42PM (#22875192)
          I don't think the concept of the mainframe has actually been updated. The models have been updated, but the concept is mostly the same.

          A mainframe is not just a CPU, and it's not designed to be a powerhouse of MIPS or FLOPS (or heaven forbid some naive notion of clock speed). Instead a mainframe is an I/O powerhouse. They're designed to handle aggregated data from many different sources and process it efficiently. There are lots of peripheral processors to handle I/O independently of the main processor and each other. The concept of a special-purpose computing machine designed for secure, reliable, I/O-heavy transaction-based processing is still around; and since mainframes do this job cheaper than the alternatives, they're still around.

          There was essentially no reason to declare the mainframe "dead" in the first place. Though declaring certain types or models dead makes sense. The original prognostication seemed a bit like noticing that computers were getting faster with more bandwidth while forgetting that mainframes were allowed to improve as well.
          • Re: (Score:3, Interesting)

            by afidel ( 530433 )
            Meh, the big UNIX boxes had plenty of I/O processors and bandwidth. The great reason to keep the mainframe around is JCL: because of JCL you can be assured that the job will complete in a given amount of time. Banks don't really care how fast a transaction completes, just that it will post by their deadline. It's best case vs. average case vs. worst case: UNIX and PC-based servers can excel at the first two but absolutely suck at the last one, and that's why the mainframe is still around.
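            The parent's best/average/worst-case point is easy to demonstrate. A toy Python sketch -- the timing numbers are invented for the demo -- of two systems where the one with the better average still blows the deadline:

              import random

              random.seed(1)

              def commodity_box():
                  # Usually fast, occasionally pathological (GC pause, swap storm...).
                  return 1.0 if random.random() > 0.01 else 50.0

              def batch_mainframe():
                  # Slower on average, but tightly bounded.
                  return random.uniform(1.5, 2.5)

              DEADLINE = 3.0
              for name, run in [("commodity", commodity_box), ("batch", batch_mainframe)]:
                  times = [run() for _ in range(100_000)]
                  missed = sum(t > DEADLINE for t in times)
                  print(f"{name}: mean={sum(times)/len(times):.2f}, deadline misses={missed}")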
    • by JoeD ( 12073 ) on Wednesday March 26, 2008 @02:25PM (#22871970) Homepage
      No. They used something borrowed, and something blue.
  • by Anonymous Coward on Wednesday March 26, 2008 @02:09PM (#22871792)
    Because the people who are used to that tech haven't kicked (the bucket).
    Basic psychology. People stick with what they're used to, even if it doesn't always make the most sense.
    • Re: (Score:2, Insightful)

      by hardburn ( 141468 )

      People stick with what they're used to, even if it doesn't always make the most sense.

      Legacy mainframes do make sense, though. Even if they're old and the people who know how to program them are retiring/dying off, they do have 20+ years of debugging behind the code. Many of these systems run highly mission critical banking systems. If some of them fail, worldwide economic collapse is a real possibility. It's worth being very conservative in this case. Even if the going rate for COBOL programmers ends

    • by gclef ( 96311 ) on Wednesday March 26, 2008 @02:53PM (#22872340)
      Or because it works. This is something lots of technologists keep missing. It doesn't matter if the tech is old. If it works and serves its purpose, the argument to replace it has to be really compelling. "It's old" is not a compelling argument.
      • Re: (Score:3, Insightful)

        Unless you like new stuff, which is a problem when a software engineer (who either loves new stuff or hates his job basically :P) tries to predict the actions of a business person (who likes his technology to work and be cheap, not cutting edge).

        If software engineers ran businesses, mainframes would be gone because they are old and not cool anymore. But software engineers don't run businesses (if they did they'd be business people) and so mainframes are still around, which is a good thing in my book (mainframe mode
  • by downix ( 84795 ) on Wednesday March 26, 2008 @02:12PM (#22871822) Homepage
    Look at the inability of people to drive using joysticks, instead sticking to the classic wheel arrangement. I've seen drive-by-wire setups using joysticks; they work well, but people just can't get into them.
    • Really it's not the joystick that annoys me with those systems (although it is by nature less accurate). It's the whole drive-by-wire system that I have a problem with. What is the point? They have already designed systems that can make the amount of turn in a steering wheel affect the turn of the wheels on the road in varying amounts while still having a mechanical linkage. The benefit of this is that everything on the car could break and you can still steer the damned thing.
    • I don't see the connection. In what way is a joystick any more useful or practical than a steering wheel?
      • Re: (Score:3, Insightful)

        by downix ( 84795 )
        A standard rack-and-pinion steering system is 120 lbs;
        a drive-by-wire system using a joystick is 25 lbs.

        Such changes all added throughout a car can dramatically improve fuel efficiency.
        • Crashes decrease fuel efficiency, however. Just because something is lighter does not make it better. Joysticks are far less intuitive than wheels for turning. They make perfect sense for planes, which require more dimensions of travel, and it's not that important if you're off by a degree or two in the long run. A steering wheel is far superior when it comes to traveling through 1 dimension (sideways).

          Now here's a question for you. Why not drive-by-wire with a steering wheel? There are plenty of examples of it working; I had a steering wheel peripheral for my PS1 not too long ago. If you want to reduce weight without sacrificing utility then duplicate the old interface with new technology, don't re-invent the interface (unless that's what needs to be improved, and steering wheels are a perfectly good interface in my book).

          There's very rarely just two options :P.
          • Much of the Prius is drive-by-wire (especially the throttle)--but they kept the old model. When I first got my Prius I thought simulating the "creep" of an automatic transmission on drive-by-wire was stupid. I now think it makes sense--as that's what we're really used to. In other words, keeping existing models but changing the implementation is good design.

            I'm not sure I see what's wrong with the steering wheel as an input device for turning a car. However, there's no real reason why the wheel could ju
        • Re: (Score:3, Insightful)

          by oyenstikker ( 536040 )
          You still need hardware to turn the wheel. Motors attached to the steering knuckle would increase unsprung weight, so you don't want that. You'll still have two tie rods, and you need something to move them. A couple of motors or a rack and pinion with a power steering unit. Assuming that you just need a small switch of some sort (as the power steering unit is doing most of the work), you've only really cut out the steering column and steering wheel (But you aren't cutting out the weight of the airbag compo
          • Re: (Score:3, Insightful)

            by asuffield ( 111848 )

            All of that said, I personally do not want a car that I can't steer when the car is turned off (when I am working on the car), and I would be quite scared to drive a car that I can't steer when the alternator, computer, or power steering unit dies at 80 mph.

            And yet you seem quite happy with the idea of driving a car that you can't steer when the complex mechanical contraption shatters at 80 mph.

            People have this idea in their heads that things with electricity can break while things without electricity can't

            • by quanticle ( 843097 ) on Wednesday March 26, 2008 @06:20PM (#22875002) Homepage

              People have this idea in their heads that things with electricity can break while things without electricity can't.

              It's not that things with electricity break while things without electricity don't; it's that things with software break while things without software don't. Software, because of its discrete nature, is inherently harder to judge safe. A bridge rated for 10,000 pounds will easily carry 1000, but a piece of software that works with input 10,000 cannot automatically be guaranteed to work with input 1000. Any "drive by wire" system will need software (at least for the motor controllers that transform the steering wheel input into steering motion), and therefore consumers are understandably leery of it.

              The other consideration is tactile feedback. A mechanical steering system provides lots of tactile feedback, since you're directly connected to the steering system via a mechanical linkage. Therefore, if there's something wrong you're liable to feel it (i.e. the car pulls to one side, or becomes difficult to steer), allowing you to detect problems before they become catastrophic. Without that mechanical linkage, you're dependent on the software designers to judge how much feedback the system provides. If there's a problem that the designers haven't anticipated, the system will not warn you, and small anomalies will grow to catastrophic proportions simply because the warning signs were filtered out from the driver's perception.

              Worse yet, the two problems are interrelated. Increasing the amount of tactile feedback increases the amount of software needed, since you've got two output devices (steering wheel for tactile feedback, and steering mechanism for actual steering) and you need code to modulate output to both of them. This necessarily increases code complexity, making the job of making sure the code is bug-free even more difficult.

              Finally, for those who are going to make an analogy with fighter jets' fly-by-wire systems, I must remind you that an aircraft has far more room to maneuver. And, even then, there were problems with the early fly-by-wire systems. The F-14, for example, had some serious issues with the flight control systems becoming confused and adjusting the wings inappropriately, leading to stalling and loss of control. These issues were eventually worked out, but the process took years. This is OK for a highly specialized system where your operators are specially selected and highly trained, but it is definitely not appropriate for any consumer grade system.
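              Coming back to the bridge analogy above, the discreteness point is easy to make concrete. A contrived Python sketch: the "bridge" property (works at 10,000, therefore works at 1,000) simply doesn't hold for software.

                def parse_weight(s):
                    # Buggy hidden assumption: weights are always exactly 5 digits.
                    return sum(int(c) * 10 ** (4 - i) for i, c in enumerate(s))

                print(parse_weight("10000"))  # 10000 -- works fine
                print(parse_weight("1000"))   # 10000 -- silently ten times too big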

        • Re: (Score:3, Insightful)

          by TigerNut ( 718742 )
          On what vehicle are you basing those weights? Rack and pinion systems are lightweight compared to the recirculating-ball or worm-and-sector steering arrangements because they replace the drag link that goes laterally across the car, which means there is less redundant mass in the steering arrangement.

          Any manual steering arrangement can be made lighter than a power assisted system and more efficient (with respect to fuel mileage) than a power-assisted system simply because the steering then doesn't impose

  • by glindsey ( 73730 ) on Wednesday March 26, 2008 @02:13PM (#22871826)

    I DON'T SEE WHAT THE BIG PROBLEM IS. I
    HAVE BEEN POSTING FROM MY COMMODORE 64 F
    OR TWENTY YEARS NOW AND IT IS WORKING JU
    ST FINE FOR ME!


    The damned lameness filter has just managed to destroy my joke. Thanks a lot, filter.
  • because it works! (Score:4, Informative)

    by wizardforce ( 1005805 ) on Wednesday March 26, 2008 @02:16PM (#22871866) Journal

    The New York Times explores why old technology is still around
    Simple: because it still works. Using radio as an example, it works just fine for what we need it for, and we really haven't found a suitable replacement [light-based communication, for example]. Same for mainframes; there are niches that still must be filled with "older" technologies until we find something that makes the older tech not worth using.
    • Re:because it works! (Score:4, Interesting)

      by Itninja ( 937614 ) on Wednesday March 26, 2008 @02:35PM (#22872096) Homepage
      Well, just because it works doesn't mean it works well. Take a look at the Seattle School District's dinosaur VAX systems [nwsource.com]. Sure they work, but verrrry slowly. And what's more, maintenance is a nightmare and scalability is not an option. I agree that we should avoid trying to reinvent the wheel, but I think updating a wagon wheel with a steel-belted radial tire is sometimes a good idea.
      • by jellomizer ( 103300 ) on Wednesday March 26, 2008 @03:15PM (#22872620)
        There is a huge cost in upgrading that VAX system.
        There are hundreds of thousands, if not millions, of dollars of man-hours put into that system and its programs. Replacing them with a new system could lead to a huge mistake. Being that this is a school district, I doubt that anyone is willing to put their job on the line with such a migration. And it being a union job, I doubt that they will hire consultants to do it for them. They are stuck between two political brick walls.
      • Your unusual sig. (Score:3, Insightful)

        by RoverDaddy ( 869116 )
        I can so say what you find unusual about your sig, as this post (don't count my sig that follows) has that oddity too. Huzzah! I admit it's off topic, so mod accordingly. Sigh.
    • by Bombula ( 670389 )
      For what it does - provide a secure computing platform - one could argue that the older it gets, the better it becomes. Could your typical script kiddies and ID thieves who normally 'hack' by downloading the how-tos for exploits from somewere.ru actually hack a 70s-era mainframe? Yeah, good luck with that.
    • by sm62704 ( 957197 )
      But some replacements go backwards, and there are some technologies that just died. I wrote an article a few years ago titled Useful Dead Technologies [kuro5hin.org] that highlighted some of them, albeit tongue in cheek. Lo and behold, two of them I mentioned, volume control knobs and flat cotton shoelaces, have come back in vogue.

      When the tornado ripped through my neighborhood [wikipedia.org] in 2006, I was out of power for a week. I sorely missed the gravity furnace with its power pile I'd had a few years earlier; the gas only fails wh
  • because it works (Score:3, Insightful)

    by gEvil (beta) ( 945888 ) on Wednesday March 26, 2008 @02:16PM (#22871868)
    Some things are just good ideas that work well. That's all there is to it. Sure, something more refined may come along one day, but it will need to be significantly better and offer a lot more. Otherwise, tried and true technology will hang around. Pretty simple, really.
    • Re: (Score:3, Insightful)

      by techpawn ( 969834 )
      There's also the argument of "cost to keep it working vs. cost of upgrade".

      Many times I've seen historic pieces of IT architecture in place because the cost to upgrade/train/retain/etc. was a lot higher than dusting HAL every few thousand miles.
      If the vendor is going to keep supporting it, why abandon it?
  • by wiredog ( 43288 ) on Wednesday March 26, 2008 @02:18PM (#22871874) Journal
    Why PCs Crash, and Mainframes Don't [byte.com]

    When a PC crashes, even the system administrator might not hear about it, much less the vendors who made the system, the OS, and the application software. The user shrugs, reboots, and keeps right on working. When a mainframe crashes, however, it's a major catastrophe. It's General Motors calling up IBM to demand answers.


    Ten years gone, and still relevant.

    Damn I miss Byte.

    • Re: (Score:3, Funny)

      The same issue had an article on "DLL disasters - DLL conflicts are a common cause of crashes".

      Ten years gone, and still relevant.
      • SxS binding has fixed that problem for almost a decade now. Vista and VS2005/8 force SxS down developers' throats, whether you like it or not. DLL hell is virtually over.
    • Also, from that article:

      Still, it will be interesting to see how stable NT remains as it grows fatter.
    • Well the more critical point is that IBM has long been in the service industry. They produced mainframes because mainframes meant extremely lucrative long-term support contracts. You may not have a Big Blue man hanging out with your mainframe like you did in bygone days, but the essential nature of IBM's business model is the same.
    • by Ungrounded Lightning ( 62228 ) on Wednesday March 26, 2008 @03:57PM (#22873122) Journal
      Mainframes are about three things:
        - Reliability
        - Availability
        - Capacity (including compatibility across upgrades)
      in that order.

      Reliability is the absolute must. Dropping pennies through the cracks adds up to big bucks in lost coinage and much BIGGER bucks in legal trouble from the people whose pennies got lost. Consistently total the bill wrong and you face class action suits, too.

      Mainframes don't make errors, period. The internal components DO make errors, and the mainframe fixes the errors so the result is correct (though it may be delayed by milliseconds when a bit drops internally). They do this a number of ways: error detection/bus-logging/stop-fix-restart, redundant components and voting, redundant components and comparison (see "error detection..."), and error-correcting codes, to name just a few.
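      For the curious, "redundant components and voting" is the easiest of those techniques to sketch in software. A toy Python version -- the 1% flake rate and the adder are invented for the demo; real machines do this in hardware:

        import random
        from collections import Counter

        def flaky_adder(a, b):
            result = a + b
            if random.random() < 0.01:            # simulated dropped bit
                result ^= 1 << random.randrange(8)
            return result

        def voted_add(a, b, replicas=3):
            # Run the same computation on independent units, take the majority.
            answers = [flaky_adder(a, b) for _ in range(replicas)]
            winner, votes = Counter(answers).most_common(1)[0]
            if votes < 2:                         # no majority: stop-and-fix territory
                raise RuntimeError("multiple faults, halting for repair")
            return winner

        print(voted_add(40, 2))                   # 42, even if one replica glitches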

      Redundant collections of less reliable machines don't cut it. Businesses solve the "distributed update problem" by avoiding it: Transactions are processed on a single, ultra-reliable, server. The data is backed up (offsite and often dynamically via a network) so that, in case of disaster, they can switch to ANOTHER single, ultra-reliable, server. But spreading the work over multiple flakey machines is not an option. (They know how to do it with people. But they don't want to go there with computers when there's a better option.)

        - Availability is right up there.

      Drop the real-time logging of phone calls for a reboot and a baby bell's long-distance phone lines are free. That's in the million-bucks-an-hour range. But it's a drop in the bucket compared to the cost of an outage in the trading support systems of a major brokerage.

        - Capacity must continue to be "enough" as a business grows.

      Throttling a growing business because the IT department can't crunch the extra transactions kills shareholder value. And this includes compatibility: Thrashing the applications and inducing delays and bugs, just to port to a machine of the necessary capacity, also isn't an option. A business-critical legacy application has to "just work" if the system must be upgraded for higher capacity. The source may be long lost and the programmer long dead, so even recompilation (or reASSEMBLY) may not be a practical option. (Even if the source code ISN'T lost it may be in a language that's no longer supported and/or with no experts available.)

      ===

      Makers of non-mainframe computers and their components and operating systems still haven't "gotten it" on these issues. The hardware designs are almost totally composed of "single points of failure" and flake out from time to time. OS crashes are a way of life (especially with the "dominant desktop OS" - which is what business decision-makers see).

      The chip makers blew it with things like Weitek's floating-point accelerator that didn't do denormals and Intel's Pentium bug. (Those little numbers are VERY important for things like interest calculations.) In particular, Intel could have recovered from that by immediately replacing the chips with the fixed ones and giving business customers priority. Instead they fought it and claimed that the errors didn't matter for anybody but the users of "high-end games". GAMES? What does THAT look like to a guy in a business suit in the executive suite of a fortune 500 corporation?

      Imperfect computers can work for the desktops that support the imperfect people who handle the day-to-day operation. The infrastructure is already in place for distributing the load across them and recovering from their errors. And they can work for the core of a network - where protocols can repeat dropped packets and machines can route around failed peers and cables. But like the EDGE of a network (where a customer's lines funnel through a single box, which must have telephone-switch-like reliability), the core of corporations' information processing is already built on and optimized for near-perfectly-operating machines. Despite their cost they're FAR cheaper and less risky than switching to, and running on, something less.
  • by FranTaylor ( 164577 ) on Wednesday March 26, 2008 @02:20PM (#22871902)
    The x86 architecture

    The QWERTY keyboard

    SATA (yes, folks, a serial version of the old IBM AT bus!)

    Drive letters, DOS devices

    Does anyone actually use the tar program for its original purpose anymore?
    • in a word... YES
    • Re: (Score:3, Insightful)

      I can think of a couple of major backup applications (NetBackup) that still use tar when you get down to the tape level; there really isn't any good reason to replace it.
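      And for anyone who's never seen it, the original purpose is right in the name: Tape ARchive. A minimal Python sketch using the stdlib tarfile module -- /dev/st0 is the usual Linux tape device node and the source path is hypothetical, so adjust for your hardware:

        import tarfile

        # Stream an uncompressed archive straight to the tape drive.
        with open("/dev/st0", "wb") as tape:
            with tarfile.open(fileobj=tape, mode="w|") as archive:
                archive.add("/home/alice/backup", arcname="backup")  # hypothetical path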
    • Missing option: CowboyNeal
    • by Jim Hall ( 2985 )

      Drive letters, DOS devices

      I'll take partial credit for that. You are welcome. [freedos.org]

      :-)

    • Does anyone actually use the tar program for its original purpose anymore?

      What, to stick files together? Yeah, I use it all the time.
    • Re: (Score:3, Funny)

      Does anyone actually use the tar program for its original purpose anymore?
      Sometimes, but I generally skip the feathers.
    • by marcus ( 1916 )
      That's ancient tech.

      How about a bottle or a bucket?

      Try an even older and more generic container, a sack.

      Old tech hangs around because it does its job and has not been improved upon in any meaningful fashion by later tech.

      Incandescent lights might actually exit the stage soon...
      • Re: (Score:3, Funny)

        by Pope ( 17780 )
        There's a hole in my bucket, you insensitive clod, you insensitive clod!
  • by Intron ( 870560 ) on Wednesday March 26, 2008 @02:21PM (#22871906)
    "mainframe sales are a tiny fraction of the personal computer market"

    I'm pretty sure that mainframe sales are 0% of the personal computer market.
  • Irony (Score:5, Funny)

    by Dog-Cow ( 21281 ) on Wednesday March 26, 2008 @02:23PM (#22871936)
    Does no one else see the irony in a newspaper exploring the reasoning behind "old" technology being used in modern environments?
    • Does no one else see the irony in a newspaper exploring the reasoning behind "old" technology being used in modern environments?

      in a story posted on their website, discussed on Slashdot where a bunch of people who remember the old technology are surrounded by a bunch of young people who were born after the remote control and personal computer became ubiquitous. :-P

      Utterly. Cheers
      • I was born after the TV remote but I didn't have a PC till I was in high school. Does that mean I'm approaching (gasp) middle age?
  • by CaptainPatent ( 1087643 ) on Wednesday March 26, 2008 @02:24PM (#22871942) Journal
    With a "Bill Gates" 640k view of the world, of course we wouldn't need mainframe computers. Desktops now have more than enough power to run even the largest server applications of 1991 hands down and it's easy to see where that statement came from.

    The problem with the vision is that Stewart Alsop didn't take into account the growing complexity of computer programs. We have plenty of (in comparison to the software of 1991) inefficient applications that require ridiculous amounts of computer power to serve and process everything we need done. We have complex server applications like gigantic databases and games and video servers that couldn't exist in the 1991 world.

    The mainframe of yesteryear may now fit into the physical space of today's desktop... or smaller, but that doesn't mean there won't be a need for a bigger and faster one to take its place. That's as true now as it was then.
    • Apples and oranges (Score:4, Insightful)

      by PCM2 ( 4486 ) on Wednesday March 26, 2008 @03:18PM (#22872666) Homepage
      Actually, no matter how fast your PC is, PCs and mainframes are engineered for different things. Many mainframe-class machines specialize in transaction processing and are designed for total I/O speed, rather than chip clock speed. People also pay the big bucks for mainframes not because they are fast but because they never, ever crash nor require downtime. Don't let Apple calling a G4 Mac a "supercomputer" confuse you -- a mainframe is still highly specialized equipment, and I doubt there's any application that you personally might need to run that would require one. On the other hand, no matter how fast desktop chips get, it seems unlikely to me that major Wall Street banks would ever switch from mainframes to PC-class hardware for financial transaction processing.
  • by SerpentMage ( 13390 ) on Wednesday March 26, 2008 @02:24PM (#22871952)
    I was at a conference, at a BOF where I raised this question about old technology. One person said that at the end of the day Microsoft will be replaced by Google apps.

    I said, yeah, sure, Microsoft will be replaced like IBM and the mainframe will be replaced. He then went on to explain to me how the mainframe is dead. I looked at him and laughed, because there are still oodles of people using the mainframe and there will be oodles of people using Microsoft.

    It is not that Google apps will replace Microsoft; they will complement it, like the mainframe complements Microsoft. Where the real understanding begins is when you know what to use when...
  • by Reality Master 201 ( 578873 ) on Wednesday March 26, 2008 @02:26PM (#22871974) Journal
    First, mainframes have many reliability and redundancy features that aren't found or aren't common in other hardware. If you spend the money, you can get 100% uptime guarantees.

    Second, there's a lot of software written for the mainframe that works. It does important stuff, and what it does is probably not exceedingly well documented, and porting all that shit to something new is a massive, risky, expensive task.

    Why mess with what works, particularly if the vendor seems to be willing to keep the product line going? There's no pressing reason to move, apart from people's prejudices about the mainframe, and the benefits really don't come close to outweighing the costs/risks.
  • Advantages count (Score:4, Interesting)

    by NorbrookC ( 674063 ) on Wednesday March 26, 2008 @02:30PM (#22872040) Journal

    FTA: First, it seems, there is a core technology requirement: there must be some enduring advantage in the old technology that is not entirely supplanted by the new.

    This is what keeps a lot of "old" technology going. Over the past 30 years, I've seen the predicted demises of printed books, keyboard-entry word processing, land-line phone systems, and so on. Yet, each of them seems to still be chugging along. E-books are here, but, as it turns out, they fall short when it comes to readability and portability, as well as being usable in many environments. Keyboard-entry word processing was supposed to have been supplanted long since by voice recognition, another technology that always seems to be "5 or 10 years away". Cell phones were supposed to supplant all land-line phones, but it turns out there are places you can't get a signal, and you can also do a lot of other things with that land line that you can't do with a cell. Each of these supposed supplanting technologies turned out to have issues that the "old" tech didn't have. It doesn't mean that the new wasn't useful, but in terms of supplanting the old, it didn't happen.

  • I keep waiting for people to stop using the wheel and come up with a more efficient solution. It can't last forever, ya know...
  • IBM understand the mathematics of computing. They know what has to be made to work in order to make the mathematics work for you, not against you.

    Systems (all systems, not just computers) have built in mathematics, if you choose one type of system over another without understanding those maths it can cost you a serious bundle. The evidence I've seen in the IT industry generally is that most developers and systems engineers don't understand those maths... Or at least, they don't understand how it applies to
  • by apodyopsis ( 1048476 ) on Wednesday March 26, 2008 @02:37PM (#22872122)
    I used to make CD players for one of the tech giants; as such I was in China a lot. When I say "make" I'll be more specific - I wrote the firmware.

    I remember vividly a conversation with one of the Chinese project managers. I was discussing the build quality of a new CD player for the US market. It had that brown cardboard-like PCB whose tracks leap off if you wave a soldering iron in the general vicinity. The PCBs, the unit front, the entire casework were glued together with a hot glue gun. The radio tuning circuit was wire-wrapped around a pencil and then "frozen" in place with dripped wax, while the software was expected to adapt to mask any tolerance issues. The manager and his team gave it a projected life span of 18 months; then the consumer would be back to buy another. He was really enthusiastic about the repeat business.

    *That* is why old tech survives: because it was built to last, not with built-in obsolescence. And no, I never bought a CD player from my employer ever again.

    • Re: (Score:3, Insightful)

      by jagilbertvt ( 447707 )
      I imagine you no longer work for them because they went out of business selling such shoddy products. I know I generally don't buy products from the same manufacturer if the first one fails in a short time period. I certainly hope others have the same sense...

      Though, most likely they're still in business selling cheap/shoddy products to OEMs.
  • This should clearly be tagged: getoffmylawn

    Sheldon
  • by bugs2squash ( 1132591 ) on Wednesday March 26, 2008 @02:39PM (#22872152)
    I keep seeing new ways to do the same old things; perform a credit transaction, store a health record, track inventory etc. Many of these requirements have changed little for decades if not centuries, and new requirements like enhanced security are easily accommodated in a centralized environment.

    The original systems created to satisfy these requirements were lightweight and efficient to run on the machinery of the time and easily managed by virtue of being centralized. By contrast, many new solutions are bloated and hard to manage because of their de-centralised nature and the need to use whatever networking protocol was simplest to implement regardless of its suitability for the task. God forbid that anyone has to look at a terminal font to get information from a system - if it's not in Times New Roman then it's just not proper information.

    The sole purpose for the replacement of the older systems seems to have been "because we wanted a GUI", to make it unnecessary to train our users, or because companies thought that they could axe experienced network admins and terminal equipment that they perceived to be 'locking them in' to a vendor. Now I see that in many cases the management of large systems has been "de-skilled" and involves such a cocktail of technologies that nobody knows quite how it all hangs together (least of all how secure it all is).

    Best just throw in more resources to make the IT problem go away, at least it's spread over several bills so it seems easier to pay for...
  • I'm no dinosaur, but I'm old enough to appreciate some of the advantages of old tech. Example: While I value the portability of mp3's (my PDA has a bunch of them on it), I'm somewhat sad that a lot of younger people seem to think they can compete with what I hear when I get home and crank up my 30-year-old, high-end stereo system. A lot of today's music is so squashed down and distorted to get the high volume levels that even really good tunes wind up sounding like crap. And how many of those mp3 files

    • Uh, the loudness war and the resulting lack of dynamic range has nothing to do with MP3. It is possible to take a properly mixed analog recording and compress it into a small MP3 file.
  • Fortran was introduced in the mid-fifties, but is still alive and kicking. Fortran 2003 even has object orientation. I think that Fortran is a good example, as it shows that "old tech" can survive if it is allowed to improve, i.e. transform into "new tech". So, could it be more of a naming problem, and that we don't have any "old tech" around after all?
  • so we still need mainframes. IBM JCL lives forever as well.
  • by esocid ( 946821 ) on Wednesday March 26, 2008 @02:52PM (#22872322) Journal
    Coming from a person who has worked a lot on cars, I would prefer to work on an older car any day. Why? Simply put, there are fewer points of failure. When your car doesn't run right, what do you check? In older models you have things to check which are mostly mechanical. In newer models you have some mechanical and some electronic, which leaves a lot of things to investigate and can end up being a humongous hassle. (*begin short rant* for example, what idiot thought it was a good idea to put electronic fuel pumps inside the gas tank, whereas mechanical fuel pumps are connected to the engine *end short rant*) There may be small variations in advancements in the mechanical parts, but those are tried and true and have been implemented since probably the 50s. The tried and true old technology is simpler than the newer technology and easier to fix, as long as it can serve the same function. This may be slightly different for older electronic technology, but I would figure that the comparison to cars would work just fine.
    • Re: (Score:3, Interesting)

      by stokessd ( 89903 )
      A portion of the nightmare of newer cars is the EPA and manufacturer locking you out of the control system. You as a consumer have very little visibility into the ECU. It's like trying to fix an old car and only being allowed to raise the hood 6 inches to work.

      I've got an aftermarket ECU on my hobby car and it allows me to see exactly what's going on in terms of engine management and current performance. It's got real-time feedback of emissions, fueling and timing. I can data log them all as well as cont
  • "The mainframe survived its near-death experience and continues to thrive because customers didn't care about the underlying technology,"

    That is the answer right there. Not every user is irrationally neophilic. If a technology is the best choice either in function or in cost with respect to the needs of some user, then it will continue to be used.
  • by br00tus ( 528477 ) on Wednesday March 26, 2008 @02:55PM (#22872374)
    Gawker.com regularly makes fun of how the New York Times approaches a question the reporter knows little about and comes away with a convoluted answer. The article asks "Why Old Technologies Are Still Kicking". The best answer they come up with is "there must be some enduring advantage in the old technology that is not entirely supplanted by the new". There is an enduring advantage, although they don't go into what it is, and they actually put it in a misleading way. It's cheap. Some of these companies have been putting business logic and programs into these systems since the 1950s. The cost of moving them from 370 to 390 to zSeries is minimal, as is replacing parts that break down, etc. And it works. Sometimes better than modern machines - some of these machines have uptime of decades. High availability is not a new concept for them.


    What would be the cost of hiring, on top of the existing mainframe admins and developers, a team to migrate this stuff to Windows or UNIX? Remember, some of this code is written by people who not only have left the company but may have died. Then you have to hire new developers and administrators for the UNIX/Windows systems. Change always creates the potential for problems, so expect a higher percentage of disruptions to the business as you're doling out all this money. If IBM is making it easy for you to keep what you have going, and also allows Linux, web etc. capability, why spend all that money to transition? The answer is that a lot of times companies don't. I worked at a Fortune 100 company that still had plenty of IBM mainframes. They even had a lot of their printing handled by the mainframes, although there were Windows and UNIX gateways into the print queue.

    • Re: (Score:3, Insightful)

      by asuffield ( 111848 )

      The article asks "Why Old Technologies Are Still Kicking". The best answer they come up with is "there must be some enduring advantage in the old technology that is not entirely supplanted by the new".

      And that's so stunningly inevitable that the whole article is a poor joke. They've started by using misleading labels for the things under consideration - if you fix that, the idiocy of their opinion becomes obvious.

      They aren't comparing "old technology" to "new technology". They are comparing "technologies th

  • by kick_in_the_eye ( 539123 ) on Wednesday March 26, 2008 @02:59PM (#22872426) Homepage
    PCs have been around for over 25 years. Is that not old? They constantly evolve.

    Mainframes constantly evolve.

    Mainframes went 64 bit before the PC ever did. Virtualisation is just gaining ground on the PC.

    Mainframes have had that for decades with Domains and LPARs.

    What's old technology: a PC server farm with a dedicated server per app and maybe 10 concurrent users, or a mainframe running many applications with thousands of users and terabytes of I/O throughput?

  • by Animats ( 122034 ) on Wednesday March 26, 2008 @03:56PM (#22873094) Homepage

    Mainframes are still around because the engineering is better.

    There's no secret about how to do this. It wouldn't even add much cost to servers to do it right. Here's what's needed.

    • All the hardware must self-check. CPUs need checking hardware. Mainframe CPUs have had this since the Univac I. All memory, including the memory in peripherals, needs to have parity, if not ECC. Connections to peripherals must have checking. All faults must be logged and automatically analyzed. CPU designers are wondering what to do with all those extra transistors. That's what.
    • Peripherals have to go through an MMU to get to memory; they can't write in the wrong place. IBM mainframes have done this since 1970. The PC world is still using a DMA architecture from the PDP-11 era, and it's time to upgrade.
    • The OS has to be a microkernel, and it can't change much. The amount of trusted code must be minimized. IBM's VM has been stable for decades now, even though the applications have changed drastically. The QNX kernel changes little from year to year; Internet support, from IP up through Firefox, was added without kernel changes. This is incompatible with Microsoft's business model, and the UNIX/Linux crowd doesn't get it. So we're stuck there.
    • Additional hardware support for debugging is helpful. Unisys mainframes at one time had hardware which logged the last 64 branches, and on a crash, that was dumped.
    • All crash dumps are analyzed, at least by a program. Why did it fail? Someone has to find out and fix it. We need tools that take in crash dumps from server farms and try to classify them, so that similar ones are grouped together, prioritized, and sent to the correct maintenance programmer (a toy sketch of the idea follows this list).
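    That last bullet is the easiest to sketch in a few lines. A toy Python version, grouping dumps by their innermost stack frames -- the dump format and the frame names are invented for the demo:

      from collections import defaultdict

      def signature(stack, depth=3):
          # Similar crashes usually share their innermost frames.
          return tuple(stack[:depth])

      def classify(dumps):
          buckets = defaultdict(list)
          for host, stack in dumps:
              buckets[signature(stack)].append(host)
          # Biggest bucket first: that's the bug to route to a maintainer today.
          return sorted(buckets.items(), key=lambda kv: -len(kv[1]))

      dumps = [
          ("web01", ["memcpy", "parse_request", "handle_conn"]),
          ("web02", ["memcpy", "parse_request", "handle_conn"]),
          ("db01",  ["lock_page", "btree_insert", "txn_commit"]),
      ]
      for sig, hosts in classify(dumps):
          print(len(hosts), "crashes in", " < ".join(sig), "on", hosts)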

    Once you have all that fault isolation, you know which component broke. This produces ongoing pressure for better components. It empowers customers to be effective hardasses about components breaking. With proper fault isolation and logging, you know what broke, you know when it broke, you know if others like it broke, and you probably know why it broke. So you know exactly which vendor needs the clue stick applied. There's none of this "reinstall the operating system and maybe it will go away" crap.

    • Re: (Score:3, Informative)

      by CodeBuster ( 516420 )

      There's no secret about how to do this. It wouldn't even add much cost to servers to do it right.

      That is the very problem: COST. The users who want this type of functionality buy the mainframe and run Linux and Windows sessions inside of virtual machines on the mainframe which might run thousands of these sessions (I know that you already know that, but I am repeating it for the sake of completeness). The margins on PC "workstation" and "server" hardware and software are so thin (ask Dell or HP about how thin the margins are on PCs these days) that almost ANY additional cost, particularly one that uns

  • by AxelTorvalds ( 544851 ) on Wednesday March 26, 2008 @04:47PM (#22873854)
    There is a much simpler reason mainframes are still around. It has nothing to do with old or new technology: when IBM sells a mainframe, they are selling an idea.

    "Mainframe" is servicable, supported, robust, high performance, and reliable. You're buying that when you buy "mainframe," it just so happens that IBM packages that in a larger sized computer. Technology is a fairly small part of that idea. To make "mainframe" go away you have to convince the world that the idea is no good, but it's really really really good.

  • by ScottBob ( 244972 ) on Thursday March 27, 2008 @02:59AM (#22879008)
    Some industries have a tendency to hang on to old tech because of regulatory compliance and how difficult it is to get new systems approved. Case in point: nuclear power plants. They have control systems that still rely on old tech, even though much has been improved over the ages, simply because they rely on fail-safes and redundancies that are governed by processes and procedures that were developed and put into place many moons ago, and that they had to go to great lengths to get approved by regulatory bodies like the Nuclear Regulatory Commission. Upgrading a system even in a small sector of a nuclear plant means thorough scrutiny and a whole lot of red tape to get through before approval, which is very costly and time consuming.

    Which is why nuclear power plants still rely on mainframe computers, analog control systems and those big bulky institutional-green control panels in the control room with lots of blinking lights, dials, knobs and buttons that look like something out of a mid-'50s science fiction movie. (Nobody wants to stare at that all day; they'll go stir-crazy.)

    Contrast that to one coal-burning behemoth I visited that had a fiber-networked distributed control system running on a modern server system, with a number of large flat-screen panels in a modern operations center that looked more like a TV news studio, displaying the status of all the systems; and changes can be initiated with a couple of keystrokes or even through a GUI.

    The problem with the old systems at nuclear power plants is that many of the people who know them are of retirement age. As one guy who was tasked with maintaining the control systems in one nuke plant's repair shop told me, "Everyone in here is a grandfather". The younger people fresh out of engineering school who are taking their place were schooled on the modern systems like what's at the coal burning plant. There is a crisis going on because a lot of the old-timers are being forced into early retirement (taking their body of knowledge with them) faster than their replacements can learn from them.
