
Microsoft Windows, On a Mainframe

coondoggie writes with an excerpt from Network World: "Software that for the first time lets users run native copies of the Windows operating systems on a mainframe will be introduced Friday by data center automation vendor Mantissa. The company's z/VOS software is a CMS application that runs on IBM's z/VM and creates a foundation for Intel-based operating systems. Users only need a desktop appliance running Microsoft's Remote Desktop Connection (RDC) client, which is the same technology used to attach to Windows running on Terminal Server or Citrix-based servers. Users will be able to connect to their virtual and fully functional Windows environments without any knowledge that the operating system and the applications are executing on the mainframe and not the desktop."
  • by flyingfsck ( 986395 ) on Wednesday March 04, 2009 @06:56PM (#27070843)
    There are still people who haven't heard of Zimbra and Citadel? One can replace dozens of Exchange servers with a single Citadel server, without the need for a mainframe.
  • Easy answer (Score:5, Informative)

    by betterunixthanunix ( 980855 ) on Wednesday March 04, 2009 @06:58PM (#27070869)
    BIG customers. A lot of large corporations need to run Windows Server for things like Exchange and, to a lesser extent, .NET. Those same large customers are attracted to mainframes, which offer very high availability and reliability and can consolidate hundreds (or even thousands) of rack mounts into a single refrigerator-sized system drawing only ~10kW in the process. $2M/year for a mainframe and mainframe operators could be justified in some cases once you add up the cost of the electricity and personnel needed to maintain a large, commodity-server-based datacenter (this depends on the workloads; the commodity servers will also win sometimes).
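    To make that concrete, here is a back-of-the-envelope comparison; every number in it is an illustrative assumption, not a vendor figure or a quote from anyone's bill.

      # Rough consolidation math -- all figures below are assumptions for illustration.
      MAINFRAME_ANNUAL_COST = 2_000_000   # lease, software, and operators, $/yr (assumed)

      RACK_SERVERS        = 1_000         # commodity boxes being consolidated (assumed)
      SERVER_POWER_KW     = 0.4           # average draw per server incl. cooling (assumed)
      POWER_COST_PER_KWH  = 0.10          # $/kWh (assumed)
      ADMINS              = 8             # sysadmins for the commodity farm (assumed)
      ADMIN_SALARY        = 100_000       # fully loaded $/yr each (assumed)
      SERVER_AMORTIZATION = 1_500         # per-server hardware cost per year (assumed)

      HOURS_PER_YEAR = 24 * 365
      power_cost      = RACK_SERVERS * SERVER_POWER_KW * HOURS_PER_YEAR * POWER_COST_PER_KWH
      people_cost     = ADMINS * ADMIN_SALARY
      hardware_cost   = RACK_SERVERS * SERVER_AMORTIZATION
      commodity_total = power_cost + people_cost + hardware_cost

      print(f"commodity farm: ${commodity_total:,.0f}/yr")   # ~ $2.65M/yr with these inputs
      print(f"mainframe:      ${MAINFRAME_ANNUAL_COST:,.0f}/yr")

    Change a couple of those inputs (cheaper power, fewer admins) and the commodity side wins instead, which is the point of the caveat above.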
  • Really? (Score:3, Informative)

    by DoofusOfDeath ( 636671 ) on Wednesday March 04, 2009 @07:01PM (#27070903)

    Users will be able to connect to their virtual and fully functional Windows environments without any knowledge that the operating system and the applications are executing on the mainframe and not the desktop.

    When a bunch of people share a network and share compute resources, one person's performance is at the mercy of everyone else's. That's far less often the case when it's all running on your own desktop.

  • by betterunixthanunix ( 980855 ) on Wednesday March 04, 2009 @07:10PM (#27071009)
    Mainframes are not dead, just overshadowed. New mainframes are still being installed, old mainframes are still being upgraded, and a single mainframe can compete with thousands of rack mounts for typical business workloads. We are not talking about reverting to IBM terminals; we are talking about systems that act as servers -- refrigerator-sized systems that can perform a billion business transactions in a 24-hour period, with power requirements in the 10kW range and lower cooling requirements. Beyond just the practicality in large businesses, there is also the matter of reliability -- mainframes can be configured to double-check every machine-language instruction, which is important for certain applications (erroneous results from CPUs do happen from time to time, especially as the CPU temperature increases; imagine a system controlling satellites having a "hiccup" like that).
  • Re:In other news... (Score:4, Informative)

    by Lcf34 ( 715209 ) on Wednesday March 04, 2009 @07:17PM (#27071111)

    Guaranteed to take up 90% of cycles and 75% of RAM, regardless of mainframe resources. Slow and buggy, get the new version with VirtualDriveLightAlwaysOnPlus, which gives the user the feel of working on a real Windows workstation with NortonAV installed.

    You might be kidding, but after a recent SEP deployment in my company with a (more or less) default config applied, we saw a 10 to 15% average CPU increase on the ESX cluster and... backups taking twice as long. So, well, we switched back to Trend, and we'll probably be happy to stick with it for a while.

  • by ptx0 ( 1471517 ) on Wednesday March 04, 2009 @07:34PM (#27071319)
    Over RDP? Riiiight.
  • Unisys (Score:3, Informative)

    by ThrowAwaySociety ( 1351793 ) on Wednesday March 04, 2009 @07:35PM (#27071341)

    Hasn't Unisys been pushing Windows for mainframes for years now? Since Win2K?

    link [unisys.com]

  • Re:Why not VMware? (Score:4, Informative)

    by Major Blud ( 789630 ) on Wednesday March 04, 2009 @07:46PM (#27071435) Homepage

    I'd mod you up if I had points.

    I work in a fairly large ESX shop with about 300 guest VMs on five hosts. If you just price the hardware, I'm sure it's below the $100,000 mark... including the iSCSI array. I'd imagine that a Z-Series mainframe capable of handling 300 VMs probably costs twice that. And if you have to replace a part, it's not cheap to get IBM onsite to do it for you, since doing it yourself isn't really an option.

    "But mainframes are more reliable"....is this really the case, and at what cost? With stuff like VMotion and LiveMotion, you can lose an entire host and your guest VM's are migrated to another. With good equipment, this would rarely happen anyway (a lot of x86 servers are built with redundant parts nowadays, you know).

    I remember reading on Ars Technica about two years ago that there are currently only about 10,000 Z-Series installs worldwide. That doesn't suggest there's much of a market for this, and I'm sure that after you factor in licensing, hardware, and support, migrating to something like this would cost a small fortune.

  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Wednesday March 04, 2009 @08:02PM (#27071559)

    IBM was pushing Windows NT 3.51 on the mainframe back in the 1990's.

    Hitachi != IBM, and DEC != IBM. The original article says:

    Hitachi and Digital Equipment (DEC ) today announced that they are cooperating on software technology that will move the Windows NT operating system onto mainframe-class computers, another sign that Microsoft's most powerful operating system is set to move deep into high-end computing territory. The joint work on server systems using either Intel or Alpha processors signals an ongoing decentralization of computing power from the once almighty mainframe.

    so they were not talking about NT on S/3xx - they were talking about NT on Intel (probably Itanium, possibly x86) and Alpha, all of which I think existed at the time.

  • Re:Easy answer (Score:3, Informative)

    by LWATCDR ( 28044 ) on Wednesday March 04, 2009 @08:03PM (#27071563) Homepage Journal

    The scary thing is that this really isn't going to be virtualization. It will be emulation. I can promise you that they don't use x86 CPUs.

  • erm, no (Score:3, Informative)

    by symbolset ( 646467 ) on Wednesday March 04, 2009 @08:14PM (#27071659) Journal

    You can run Windows in a VM under Linux KVM already. With over 100 virtual desktops per core, you can serve a city's worth of Windows virtual desktops (about 100k) out of one rack of HP blade servers on a Linux cluster, with proper management and decent performance for everybody (rough arithmetic at the end of this comment). You still need thin clients, but the kind of hardware required for that is so minimal that people are paying to have it hauled away.

    You can do the same thing with Linux virtual desktops too, without the hassle of malware.
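    For what it's worth, the arithmetic behind "one rack, about 100k desktops" looks roughly like this; the blade count and cores per blade below are illustrative assumptions, not the spec of any particular HP rack:

      # Back-of-the-envelope check on "a city's worth of desktops from one rack".
      # Blade count, cores per blade, and desktops-per-core density are assumptions.
      BLADES_PER_RACK   = 64    # e.g. four chassis of 16 half-height blades (assumed)
      CORES_PER_BLADE   = 16    # dual-socket, 8 cores per socket (assumed)
      DESKTOPS_PER_CORE = 100   # the density claimed above for mostly idle desktops

      total_cores    = BLADES_PER_RACK * CORES_PER_BLADE    # 1,024 cores
      total_desktops = total_cores * DESKTOPS_PER_CORE      # ~102,400 desktops

      print(f"{total_cores} cores -> roughly {total_desktops:,} virtual desktops per rack")

    In practice memory and storage I/O tend to run out before CPU at that density, so treat the number as an upper bound.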

  • Re:WHY???? (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Wednesday March 04, 2009 @08:39PM (#27071943) Journal

    Ugh. Of all of the news stories about NetBSD on a toaster, you had to link to one that puts `Linux' in the headline even though the story has nothing to do with Linux.

    As one of the comments said, NetBSD is not Linux. Not everything related to Free Software is about Linux.

  • Re:Big investment (Score:3, Informative)

    by AHuxley ( 892839 ) on Wednesday March 04, 2009 @08:41PM (#27071967) Journal
    Ongoing per-core, per-seat, per-product, per-viewer fees?
    Built to last is just built to milk.
    Did you enjoy getting taken for a decade or so on the desktop?
    Did you enjoy getting taken on the internet?
    Did you enjoy getting taken on the xbox?
    Did you enjoy getting taken via the music services?
    Well bend over, MS wants to take you on the mainframe too.
  • One word: BANDWIDTH! (Score:1, Informative)

    by Anonymous Coward on Wednesday March 04, 2009 @08:42PM (#27071973)

    A mainframe has relatively low CPU horsepower, but the backplane can pump fuck-loads of data per second (yeah, that's a real unit!) compared to crappy PC-tech servers that can barely handle shit-loads of data.

  • by Daniel Boisvert ( 143499 ) on Wednesday March 04, 2009 @09:03PM (#27072179)

    What's so special/magical about a mainframe?

    The I/O. On a mainframe, you can run a query and generate large datasets so fast it'll blow your mind (in 2002-ish, say tens of gigabytes). On the mainframe it's no big deal, and you can run queries like that all day and never have any idea how much data you're moving around until you try to move it somewhere else and wonder why it's taking so long.

    Our mainframes serve ancient text-based interfaces through terminal emulator apps, and it doesn't look all that impressive either. What is it about a mainframe that enables such a large amount of computing power to be condensed into a refrigerator-sized package? Or are some folks around here exaggerating considerably?

    The mainframe isn't about looking pretty, it's about getting work done, and the folks touting their benefits generally aren't exaggerating. Mainframes aren't generally designed for CPU-heavy tasks, although they certainly can be clustered pretty impressively if you really need lots of CPU. The biggest advantage is that you can really use the CPUs you've got. There are service processors to offload things like memory management, encryption, I/O, and virtualization overhead. There are really, really fast I/O channels. You typically attach them to really, really fast disk and tape. These things together allow you to move a lot of data around very quickly and get a lot of work done.

    Additionally, lots of large companies have lots of man-hours invested in systems that run their businesses. I've seen attempts to reimplement some of the beasts to get them off the mainframe, and they typically don't go well. I've also seen assembly code written in the late 1960's still running in production more than 35 years later. The underlying hardware had been upgraded many times, but IBM made sure the old stuff would still work.

    Things like this are worth a lot of money to a certain class of purchaser.

  • by Nefarious Wheel ( 628136 ) on Wednesday March 04, 2009 @09:07PM (#27072221) Journal

    What's so special/magical about a mainframe?

    Mainframes have followed Moore's Law just like the rest of the chip vendors. You buy a new mainframe, you get new chips.

    But the main difference is their design philosophy. Reliability is built into the price, for one thing -- part of the reason a mainframe costs more is that conservative design, which is not the most cost-effective in terms of raw performance, since you often give up speed per component to the "underclocking" attitude that a focus on reliability engenders (and they're tested to buggery before delivery, too). You also get a much higher standard of module connectivity, far more robust power supplies, and inbuilt hardware redundancy.

    They also tend to support and address much more memory than you'll see on the smaller servers.

    The other main point in favour of mainframes is their orientation toward massive IO. Really massive IO. With the scale-out design of x86 servers, a lot of IO happens between network cards; on mainframes, a lot of that interprocessor data flow happens on the backplane, and the significant investment in optimising data channels means you're paying for that IO more than for raw computation. The network interfaces on mainframes are pretty massive too, and can support fairly impressive tube bandwidth.

    Mainframes using the IBM architecture have long been represented in the TPC-C transaction-processing top ten, although the trend lately at the very high end is to run AIX on top of the POWER5 architecture. Have a look, it's illuminating, and Red Hat gets a look in too. You can see the numbers at http://www.tpc.org/tpcc/ [tpc.org] .

  • Re:Reliability. (Score:5, Informative)

    by PCM2 ( 4486 ) on Wednesday March 04, 2009 @09:52PM (#27072651) Homepage

    A well built mainframe combined with a suitable power supply (e.g. backup generator etc) has up-times measured in YEARS.

    Worth noting that this is not the same thing as that old legend about the Novell NetWare server that got sealed up in a room for years and ran fine. That was just luck. Mainframes, on the other hand, are designed to have uptimes measured in years. Typically, every single component is redundant and the system is designed for failover in the event of a hardware outage. In a transaction-processing environment, a mainframe can detect things like RAM and CPU failure in the middle of a transaction and fail over to a different processor module or addressing space without a hitch. Try that on your Linux box.

    Mainframes tend to be designed with support for transaction processing baked into the OS, software, and the hardware, which is what makes them attractive to financial institutions who really, really, really need their transactions to process quickly and reliably 100 percent of the time.

    Another thing to consider: VMware's Virtual Infrastructure products are essentially trying to recreate a computing environment that is new to the world of commodity x86/x64 hardware, but that existed on mainframes at least as far back as the 1970s. What makes VMware's achievements so remarkable is that the x86 hardware was never meant to do this sort of thing. Mainframes, on the other hand, were designed for it. That makes it a lot more efficient and reliable on the mainframe.

    The bottom line is that a mainframe is not just an old-fashioned idea of what a server should be. Think of it instead as purpose-built, industrial-grade hardware. Think about power tools, then think about the equipment you'd find in a factory. That's the difference.

  • Re:Why not VMware? (Score:3, Informative)

    by Major Blud ( 789630 ) on Wednesday March 04, 2009 @09:52PM (#27072657) Homepage

    The part about spending taxpayers' money is spot on. I used to do work with state revenue agencies. In every case, at every agency I went to, they had a Z-Series installed and were trying to move to x86 hardware. The main reason they had the big iron to begin with was to support legacy software that had been in production for the past 20+ years. The two main reasons for wanting to abandon it were:

    1. Maintaining said software cost more on an annual basis than rewriting it from scratch using RAD tools would.
    2. The cost of operating the mainframe itself was in the same ballpark; IBM would charge these guys every month for the CPU cycles they used.

    Also, COBOL programmers are reaching retirement age and aren't that easy to come by nowadays ;-)

  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Thursday March 05, 2009 @12:39AM (#27073921)

    Hah. In all fairness though, I agree; a lot of greed and artificial isolation to specific hardware for sales purposes went on in the past with Sun and various other manufacturers.

    SunOS 4.x, back in the late '80's, ran on three different instruction set architectures (68k, SPARC, i386) with a reasonably good design for portability (e.g., most of the work of dealing with the MMU was isolated in a layer with MMU-dependent implementations of standard APIs used by the rest of the VM code), and there was a never-released port to S/3xx as well.

    NT kernel infrastructure was made by an ex-VMS guy, so that's probably why.

    VMS was, I think, rather VAX-oriented in the lower layers, so, unless the idea is that Cutler knew what not to do from his VMS experience, I'm not sure that was the reason NT was designed for portability. It might have had more to do with his being an ex-Mica [computer-refuge.org] guy, although Mica was somewhat Prism [computer-refuge.org]-oriented.

  • by grotgrot ( 451123 ) on Thursday March 05, 2009 @03:50AM (#27074777)

    To use a car analogy, a mainframe is like a big rig truck. Sure your Toyota can go faster, but a big rig will do far better at getting 40 tons of timber from one location to another. (Ever try to move 40 tons of lumber using a Ferrari?)

    In terms of hardware, there are a lot more processors in a mainframe. Each I/O channel (and there will be a lot of them) typically has its own separate processor, customized for getting results without bothering the main general-purpose processors. On your nearest Linux box, do some networking and disk access while watching the output of vmstat 1 and looking at the in(terrupt) column (a small sketch of this is at the end of this comment). Each interrupt (except the few used for task-switching time slices) is an I/O device forcing the main processor to pay attention to it instead of getting work done. The mainframe I/O processors can do high-level work such as looking for database records that match certain criteria. There are also separate processors for networking, encryption, etc.

    Mainframes are managed differently. If you bought a several-hundred-thousand-dollar big rig truck, you wouldn't leave it sitting in your driveway for weeks on end; you'd be finding as much work for it to do as possible. The same applies to mainframes. The goal is to use them -- get the CPU and I/O usage close to 100%, since anything less means you are wasting capacity. Contrast that with desktops and Unix/Windows servers, where beyond occasional spikes you would get nervous about high CPU and I/O consumption and buy more hardware to spread the load.

    Because downtime would be expensive (remember, you are trying to use 100% of the mainframe's capacity, so if it is down, that is work going undone), the whole system has significantly more fault tolerance built in. This ranges from the software, including the ability to upgrade the operating system without a reboot, to the hardware, where components and systems are duplicated -- sometimes even as physically separated systems (up to a few miles apart) with high-speed optical interconnects running in lockstep. They also have backwards compatibility that would make Intel look like an amateur.
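    If you want to try the vmstat exercise above, here is a minimal sketch, assuming a Linux box with the procps vmstat on the PATH; it just follows the in(terrupt) column so you can watch it jump while you generate disk or network load (Ctrl-C to stop):

      # Follow the interrupt rate reported by vmstat while you generate I/O load.
      # Assumes the usual vmstat output: a banner line, a column-header line
      # containing "in", then one data line per second.
      import subprocess

      proc = subprocess.Popen(["vmstat", "1"], stdout=subprocess.PIPE, text=True)

      in_idx = None
      for line in proc.stdout:
          fields = line.split()
          if in_idx is None:
              if "in" in fields:             # the column-header row names the columns
                  in_idx = fields.index("in")
              continue                       # still waiting for the header row
          if not fields or not fields[0].isdigit():
              continue                       # skip repeated headers/banners
          print(f"interrupts/sec: {fields[in_idx]}")

    Run it, then kick off a big file copy or a network transfer in another shell and watch the number climb; the point of the comment above is that on a mainframe most of that interrupt-servicing work lands on the channel processors instead of the CPUs running your application.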

  • Re:Stability? Hah! (Score:3, Informative)

    by Kent Recal ( 714863 ) on Thursday March 05, 2009 @07:51AM (#27075729)

    Let's see. A fully fledged Z10 has 256 cores.
    I think the bigger problem will be RAM. The Z10 maxes out at 1.5TB, so maybe 10 instances if you turn Aero off.

  • Re:Reliability. (Score:3, Informative)

    by bored ( 40072 ) on Thursday March 05, 2009 @11:46AM (#27077679)

    It will arrive in a lovely wooden crate, and sometime after morning coffee he will unpack it, walk over to the Z Series, open the door, slide it into place, connect the cooling hoses, and close the door. He will then walk to the maintenance terminal, type in the secret code, and your Z Series now has 64 more processors.

    More like the guy acts like he is messing with the hardware and when you turn around he types the secret key into the maintenance terminal. http://publib.boulder.ibm.com/infocenter/eserver/v1r2/index.jsp?topic=/eicaz/eicazzcod.htm [ibm.com]

"Spock, did you see the looks on their faces?" "Yes, Captain, a sort of vacant contentment."

Working...