
 




Microsoft Windows, On a Mainframe 422

coondoggie writes with an excerpt from Network World: "Software that for the first time lets users run native copies of the Windows operating systems on a mainframe will be introduced Friday by data center automation vendor Mantissa. The company's z/VOS software is a CMS application that runs on IBM's z/VM and creates a foundation for Intel-based operating systems. Users only need a desktop appliance running Microsoft's Remote Desktop Connection (RDC) client, which is the same technology used to attach to Windows running on Terminal Server or Citrix-based servers. Users will be able to connect to their virtual and fully functional Windows environments without any knowledge that the operating system and the applications are executing on the mainframe and not the desktop."
This discussion has been archived. No new comments can be posted.


  • by betterunixthanunix ( 980855 ) on Wednesday March 04, 2009 @06:52PM (#27070783)
    The most common use of virtualization is running Exchange. Many companies just cannot break the Exchange "habit," even when they migrate to Linux servers. Being able to run Exchange on a mainframe would be a boon to many of these businesses, especially given the high level of reliability a mainframe provides. In a tough economy, even the high price of a mainframe might be attractive if it means eliminating a large number of rack mounts and personnel devoted to keeping Exchange online (as well as all the other servers typically found in large corporations).
  • Big investment (Score:5, Interesting)

    by mc1138 ( 718275 ) on Wednesday March 04, 2009 @06:59PM (#27070871) Homepage
    Unlike the current server model, which recommends replacing a server every 3-5 years, mainframes were built to last. Jump to the present day: lots of institutions that got into computing early still have their systems lying around, often either underutilized or not used at all. In many cases it would cost more to remove them than companies want to spend. Combine that with the prevalence of the Windows operating system and you've just created a way to keep using a machine that might not even be fully paid off, rather than have it sit there taking up space.
  • by Kaz Kylheku ( 1484 ) on Wednesday March 04, 2009 @07:07PM (#27070979) Homepage

    How about actually recompiling Windows into native code running on that mainframe? Now that would be impressive. Especially if it were big endian, and with unusual word sizes, not matching the ``everything is an 80386'' programming model underneath Windows.

  • Re:Easy answer (Score:1, Interesting)

    by Anonymous Coward on Wednesday March 04, 2009 @07:11PM (#27071029)

    I recall the TPC benchmarks, where Sun used to claim the benefits of big-iron servers, and Microsoft would claim the cost-benefits of commodity server farms. How times change when MS gets a big-iron server, and Sun runs Linux on commodity server farms :)

  • by cplusplus ( 782679 ) on Wednesday March 04, 2009 @07:12PM (#27071039) Journal
    Do you keep your money in a bank? Have you ever used a credit card? Shopped at a supermarket? Almost any kind of company that runs a massive billing system or deals with huge inventories uses mainframes to process data and generate reports. I used to think they were dead, too, but there's still a large market for "big iron".
  • by Ken Hall ( 40554 ) on Wednesday March 04, 2009 @07:17PM (#27071117)

    I've seen reports of people trying this using QEMU under zSeries Linux, under zVM. Wouldn't surprise me if that's about all the Mantissa product is:
    Something like QEMU natively compiled under CMS.

    Since it's emulation, and zVM isn't really designed for CPU-intensive tasks (like emulation), and the instruction sets are so different,
    the performance was hideous. Like 12 hours to install Windows XP, or somesuch.

    The funny part is that (very deep) under the covers, the zSeries processor is a modified PowerPC running microcode. I think I'll wait for IBM
    to develop x86 microcode so one of those new "special purpose engines" they're selling can run Windows "natively". THEN, with zVM as a simple
    resource manager, you might have something that's useful.

  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Wednesday March 04, 2009 @07:59PM (#27071551)

    How about actually recompiling Windows into native code running on that mainframe. Now that would be impressive. Especially if it was big endian, and with unusual word sizes,

    I don't think you'll get anything from IBM these days with what people would generally consider "unusual word sizes", unless they still have a few 709x's in a warehouse from the late '50s or early '60s. S/3xx was, from Day One, a 32-bit-word (originally with only 24 bits of that used in addressing, then with an option to expand to 31 bits), 8-bit-byte-addressable architecture long before the 80386 existed.

    Big-endian might be more work, although I think that, for example, Connectix's/Microsoft's Virtual PC for Mac did both interpretation and binary-to-binary translation of x86 code to PPC code.
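    The byte-order point above is easy to make concrete: the same four bytes decode to different 32-bit values on a little-endian x86 and a big-endian S/390. A minimal Python sketch of the reinterpretation an emulator or binary translator has to perform (an illustration only, not anything Connectix or Mantissa actually shipped):

```python
import struct

def swap32(value: int) -> int:
    """Reinterpret a 32-bit word after swapping byte order."""
    packed = struct.pack("<I", value)      # the four bytes as x86 (little-endian) writes them
    return struct.unpack(">I", packed)[0]  # the same bytes read back big-endian (S/390 view)

# The same four bytes, two different numbers:
assert swap32(0x12345678) == 0x78563412
# Swapping twice is the identity:
assert swap32(swap32(0xDEADBEEF)) == 0xDEADBEEF
```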

  • by I.M.O.G. ( 811163 ) <spamisyummy@gmail.com> on Wednesday March 04, 2009 @08:04PM (#27071571) Homepage

    At the risk of asking a stupid question, I'm going to put this out there anyway... What's so special/magical about a mainframe? I'm 26 and have been an IT professional for 5 years, so I'm green when it comes to mainframe systems. I work for a Fortune 500 with mainframes serving various business systems, but I always pictured them as old, clunky, dusty systems that were expensive and that we're still milking along.

    Now a lot of people here are stating how a mainframe the size of a fridge can replace thousands of rackmount servers, and it doesn't jibe with what I'm familiar with. Our mainframes serve ancient text-based interfaces through terminal emulator apps, and it doesn't look all that impressive either. What is it about a mainframe that enables such a large amount of computing power to be condensed into a refrigerator-sized package? Or are some folks around here exaggerating considerably?

  • Re:Unisys (Score:3, Interesting)

    by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Wednesday March 04, 2009 @08:10PM (#27071637)

    Hasn't Unisys been pushing Windows for mainframes for years now? Since Win2K?

    link [unisys.com]

    Some of the mainframes in question are apparently built out of "Intel" processors (presumably either x86-64 or Itanium); the others appear to have proprietary Unisys chips implementing the 36-bit Univac 11xx architecture but probably also have Intel chips to run Windows. What's impressive about those is that they're apparently running the old OS for the 36-bit Univac processors on the Intel systems ("This revolutionary server features the OS 2200 operating system running on Intel(R) processors"), which probably involved at least as much work (probably via binary-to-binary translation + instruction interpretation) as the stuff the people at Mantissa have done (also probably via binary-to-binary translation + instruction interpretation, but the Mantissa people are presumably just emulating one 8-bit-byte-oriented architecture on another, not emulating a 36-bit word-oriented architecture on an 8-bit-byte-oriented architecture).
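    Emulating a 36-bit word-oriented machine on an 8-bit-byte host is awkward precisely because 36 isn't a multiple of 8; one common dodge is to carry each guest word in the low bits of a wider host word and mask after every operation. A hypothetical sketch of that idea (the helper name is made up, and this is not how OS 2200's translator actually works):

```python
MASK36 = (1 << 36) - 1  # low 36 bits of a 64-bit host word

def add36(a: int, b: int) -> int:
    """36-bit wraparound addition, as a word-oriented emulator might do it:
    compute in host arithmetic, then mask back to the guest word size."""
    return (a + b) & MASK36

# Values wrap exactly where a 36-bit machine would wrap:
assert add36(MASK36, 1) == 0
assert add36(2**35, 2**35) == 0
```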

  • by Anonymous Coward on Wednesday March 04, 2009 @08:11PM (#27071641)

    Mainframes are designed for a certain type of processing (batch processing, server). Windows has almost the opposite operating conditions (desktop interactive use). I doubt it would run very well.

    Back in the early 90's I got to play with one of the first Sun E10000 machines ever made. It was a beast with something like 64 processors and over 2 TB of drive space (was a lot back then). I ran a bunch of tests on it. My own software, various benchmarks, etc. It was freaking dog-ass slow for normal desktop type applications. I couldn't believe how much that thing cost and it ran like a piece of shit compared to standard desktops at the time. I mean overall it had more power with all the processors but one standard desktop CPU at the time could handle what 4 or 5 of those slow-ass SPARC processors could. It's because the machine was designed to be a database server or to handle remote interfaces like for SAP. It had a high-bandwidth back-plane and other crap like that which made it good as a database server. It made an awful machine for desktop-type tasks as I imagine a mainframe would.

  • by Doc Ruby ( 173196 ) on Wednesday March 04, 2009 @08:19PM (#27071723) Homepage Journal

    I once worked for a big insurance corp that used one of its two IBM supercomputers to run Lotus Notes (Domino). As George Clinton says, "the bigger the headache, the bigger the pill".

  • Re:Sigh... (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Wednesday March 04, 2009 @08:37PM (#27071923) Journal
    The VMS architecture for Linux? To go with the UNIX architecture for Windows?

    Do you mean the VAX architecture? Not sure about Linux, but it's still well-supported by OpenBSD - last but one release added support for some of the weirder frame buffers found in microvaxen.

  • by abkaiser ( 744418 ) on Wednesday March 04, 2009 @08:57PM (#27072113) Homepage
    The IBM iSeries Integrated xSeries Server scoffs at this late entry. That's what they call the "IxS". Also used to be called the "IFS". Earlier versions ran Windows 2000 Server. A little limited in the old days in terms of CPU, but they're pretty nice today. Drive speed, however, has always been phenomenal.
  • Floating Point (Score:5, Interesting)

    by BBCWatcher ( 900486 ) on Wednesday March 04, 2009 @09:25PM (#27072385)
    Sorry, you're quite wrong in multiple ways. The first way you're wrong is that, if Mantissa's z/VOS runs x86 software, it runs x86 software. That would include IEEE floating point, Windows Solitaire, whatever. The second way is that mainframes have always been able to execute IEEE floating point in software, but they (also) implement IBM floating point in hardware. (Thus programmers generally used the hardware implementation in their applications, and why not? But nothing prevented them from running IEEE floating point calculations.) The third way you're wrong is that IBM's System z9 was the first machine in the world to implement IEEE 754r decimal floating point in microcode. Today the only CPUs in the world that implement IEEE 754r fully in hardware are POWER6 and System z10. And it looks like it'll stay that way: Intel and IBM just disagree about this aspect of CPU design.
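    For readers wondering why decimal floating point is worth silicon: binary floating point can't represent most decimal fractions exactly, which matters for money. A quick software illustration using Python's decimal module (an analogy only; on z9/z10 and POWER6 this kind of arithmetic is done in microcode or hardware):

```python
from decimal import Decimal

# Binary floating point rounds 0.1 and 0.2 on the way in,
# so the sum misses 0.3 by one ULP:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal arithmetic gets the exact answer:
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```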
  • Re:Reliability. (Score:5, Interesting)

    by FlyingGuy ( 989135 ) <.flyingguy. .at. .gmail.com.> on Wednesday March 04, 2009 @09:25PM (#27072389)

    Actually the raised floors were not a requirement. It was just a hell of a lot neater for running all the cables.

    and yes, an IBM Z Series. Need more horsepower? Wander down the hall, find your IBM Engineer (yes, they all come with one) and tell him (well, actually he will tell you) that you need another CPU/Memory block. It will arrive in a lovely wooden crate, and sometime after morning coffee he will unpack it, walk over to the Z Series, open the door, slide it into place, connect the cooling hoses and close the door. He will then walk to the maintenance terminal, type in the secret code, and your Z Series now has 64 more processors. All of this without anyone ever knowing it happened, well, except for the nervous nelly of a CIO who just had to watch.

  • by pz ( 113803 ) on Wednesday March 04, 2009 @09:35PM (#27072465) Journal

    There are two main differences between the mainframe philosophy and the commodity server philosophy. Both have their proponents, and both have their advantages.

    First, in a mainframe, you have redundant everything. CPUs, disks, power supplies, even backplanes. Everything. And everything can be hot-swapped. Everything. Even the power supplies. Even the CPUs. Want to upgrade to the newest versions of the processors? Not a problem, unplug the old, plug in the new (just not all at once, naturally). Is there a problem with a bank of RAM? Replace it. Hot. The idea is that with a mainframe, it will never, ever go down. Ever, unless the owner wants it turned off. The design point is for customers where the time for a reboot cycle means a loss of millions of dollars. Like, say, a stock market exchange.

    The second difference is bandwidth to the I/O systems. Mainframe systems are what IBM invented optical links for. To the disks. Optical links! When you see the old, classic photos of mainframes with cabinet-upon-cabinet, those are mostly disk systems. Modern mainframes use advances in technology to squeeze that down into much smaller systems. But bandwidth is what it's all about. Massive bandwidth to the I/O systems, massive bandwidth to the memory systems, and massive bandwidth to the CPUs. Wide, wide paths.

  • Imagine this... (Score:5, Interesting)

    by tlambert ( 566799 ) on Wednesday March 04, 2009 @09:42PM (#27072553)

    Imagine this...

    Your desktop is always out there somewhere, it's always booted, no matter where you go you get at it, and it's exactly the way it was the last time you used it, so you don't have to open a bunch of apps and change window sizes and locations to get things back to your baseline usable system state.

    If your computer explodes, you get a new one, fire up the client, and you are exactly where you were before it exploded, including the cursor being in the middle of the word "amazing" in the document you were typing at the time.

    If you go on vacation, you don't bring a laptop with you, you fire up the desktop in the hotel, and you're back on your own desktop, exactly where it was the last time you left off, with that email you were reading still on the screen.

    If your battery dies or the local power goes out, you don't lose 2 hours of work.

    If the mainframe it's running on catches fire, the VM checkpoint image is reloaded on another mainframe half the world away, the IP address set is failed over, and after a hiccup measured in seconds, you are back to typing as if nothing had happened. For a slightly higher service level agreement, the VM is already mirrored on several servers (just swapped out most of the time on the non-primary), and there's no hiccup.

    Everything's backed up without you having to run the backup locally.

    The antivirus software runs on a VM that's not the VM being examined, so there's no way that malware can disable, remove, or otherwise get around it, since it's not running on the infected VM itself: goodbye Gödel's theorem and the halting problem standing in the way of solving that problem, which, if we are honest, is never going to be completely solved on a non-hardware-partitioned desktop or laptop.

    Bottom line: there's a lot to recommend this approach to computing.

    -- Terry

  • by Anonymous Coward on Wednesday March 04, 2009 @09:59PM (#27072731)

    At the risk of asking a stupid question, I'm going to put this out there anyway... What's so special/magical about a mainframe? I'm 26 and have been an IT professional for 5 years, so I'm green when it comes to mainframe systems. I work for a Fortune 500 with mainframes serving various business systems, but I always pictured them as old, clunky, dusty systems that were expensive and that we're still milking along.

    I was very much in the same boat recently. Why don't you ask your mainframe administrators? It's a little confusing at first because it differs tremendously from the modern open-systems culture, but I learned a lot. They were doing things many, many years ago that are being reinvented today. Those ancient text-based interfaces put modern text-based interfaces to shame. Spend more time studying the next mainframe terminal screen you come across, and try to think of the last time you used a remote, rich, curses interface to an app, or to configure a server. In many ways, how we manage servers today is very crude in comparison. Even Web 2.0 is a modern rehash of centralized computing from ages ago. Before all this heavy client-side scripting, I think mainframes had much better remote application interfaces, and even today useful remote (and local) interfaces aren't as common as they ought to be. Go look up "panels" in z/OS.

  • Unusual Word Sizes (Score:4, Interesting)

    by BBCWatcher ( 900486 ) on Wednesday March 04, 2009 @10:05PM (#27072793)

    Good point. The first comment about "unusual word sizes" was really pretty funny, because the commenter quite obviously has little understanding of computing history. It was the IBM System/360 (the ancestor to today's IBM System z mainframe) that defined the 8-bit byte and 32-bit word as industry standards, influencing CPU architectures (including Intel's) right to the present day. Otherwise we'd probably have multiples of 6 or possibly 7 bits as our foundational standard for computing. (And there was a lot of pressure during the System/360's design to cheapen up the hardware and slice off a bit or two.)

    Perhaps the original commenter would like to open up a command line in Microsoft Windows Vista and count the default number of columns. That number is 80. Why 80? Because, coincidentally about 80 years ago, someone at IBM decided that tabulating cards should be 80 columns wide, and IBM's cards were more popular than Remington's. Yes, Grasshopper, Microsoft Windows has an "unusual" column width that persists to this day.

  • Re:Unisys (Score:3, Interesting)

    by afidel ( 530433 ) on Wednesday March 04, 2009 @10:21PM (#27072941)
    Unisys was an Itanium shop, but the low, low cost of 6-core Xeons and their tremendous performance advantage means that almost all of their sales since the new models came out have been in that direction. I think things will be interesting for them in the next generation, since Intel will have Tukwila socket-compatible with Beckton, so they should be able to support trays of either CPU architecture on a common board.
  • by Anonymous Coward on Wednesday March 04, 2009 @10:46PM (#27073151)

    I worked on virtualizing Linux on zSeries for a while and I can tell you that this GUI-based approach on the mainframe will not be cost-effective. We had some users that insisted on running KDE over VNC on some of our z/VM guests. At "idle," the CPU overhead was tremendous (compared to a non-GUI Linux server). I always assumed this is because there is no graphics card and the framebuffer is in software. I imagine that Windows users, who tend to be lost without a GUI, will find the mainframe to be a low-performance (for the types of applications that typically run on Windows), cost-ineffective solution.

  • Price/performance? (Score:4, Interesting)

    by Locke2005 ( 849178 ) on Wednesday March 04, 2009 @10:52PM (#27073199)
    Emulating a $500 PC Server on a $500,000 mainframe... yeah, that sounds real cost-effective! If you run this simultaneously in 1000 virtual machines, do you need 1000 Windows licenses? How many people do you know that have spent years staring at their mainframe, muttering "What a nice piece of iron! If only we could run Windows on it!"... that haven't yet been committed to a mental institution? I really don't think the potential market for this justifies the development costs, guys.
  • by BBCWatcher ( 900486 ) on Wednesday March 04, 2009 @10:54PM (#27073209)

    Most of that thousands to one virtualization is based on the same idea that is driving commodity virtualization ala ESX, most servers spend most of their time idle.

    That's part of it, but it's not the only part. Otherwise we'd see thousands of virtual machines on a single ESX core, and that's just not what's happening. (The virtualization ratios per core are pretty small. Still useful, though.) Virtualization also places heavy stresses on cache, memory, and I/O performance. IBM System z10 machines are no slouches on CPU -- they have the highest clock speed (4.4 GHz) CPUs with more than 2 cores (they're quad) on the market -- but they balance that with kick-ass cache, main memory, and I/O performance. They also have hypervisors (PR/SM and z/VM) which are extremely refined and uniquely co-evolved with the hardware over decades. Add that all together and you begin to understand why the virtualization ratios get much higher in real world use.

  • Re:Easy answer (Score:4, Interesting)

    by LWATCDR ( 28044 ) on Wednesday March 04, 2009 @10:56PM (#27073231) Homepage Journal

    Frankly, I would say the same thing about you. This is about running Windows on a Z Series IBM mainframe. The Z Series is descended from the 360/370/390 line. It is a CISC ISA and is nothing like the x86 ISA! The current Z Series CPU is based on the POWER but uses the Z Series ISA and not the POWER ISA.
    So, simply: what the heck are you talking about?

  • by Anonymous Coward on Wednesday March 04, 2009 @11:12PM (#27073357)

    Back in 1994 I worked in a datacenter that had one single mainframe. It had 36 Terabytes of storage, and since we went through 40 Terabytes of data per month, it went from empty to full every 24 days (or so). Please try not to be an idiot and pretend something built in 1958 is still in use today. Back in 1994, a fast personal computer was a 486 running at 66 MHz. Now you say "gee, PCs have gotten sooo much faster...!!!". And (for the slow...) so have mainframes. The difference between 40 Terabytes in 1994 and 500 Megabytes in 1994 (a large hard disk for a PC then) is the same as the difference between mainframes now and PCs now.

    I have 1.5 Terabytes on my PC. A typical mainframe will have 1.1 Petabytes of storage (gee sparky, more than the 'pewter at home). Instead of having 1 or 2 processors, it might have 250 or 500, but they will be tuned to work well together. They will also have very high bandwidth.

    Lastly, a 'pretty interface' is something you can squander the resources of a PC on. Mainframes usually deliver flat ASCII data because it's much easier to store information in that format. It adds nothing to data to 'store it pretty' or 'process it pretty'. In fact, those things dramatically slow processing down and make storage more cumbersome. They add zero value. Since squandering resources is not something useful, a terminal (even a dumb terminal) is all that is required from a mainframe. Yes sparky, they can process a hundred million 1000-table queries per day, day after day without fail. They are built to. Your little baby rack-mounted wonder would cave on the first one. It never had a chance, but then, it wasn't built to do high-bandwidth stuff like the mainframe.

    The short answer is: it was about 100,000 times as fast as the PC when it was new. 10 years on, a new PC will only be 6,250 times as slow. 10 years after that, a new PC will only be 390 times as slow. 10 years after that, a new PC will only be 24 times as slow. 10 years after that, a new PC will be nearly as fast (follow Moore's law). Replace the 40-year-old mainframe with a new one, and it will be about 100,000 times as fast as a new PC.
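    The closing arithmetic above is internally consistent if the performance gap shrinks by a constant factor of 16 per decade, i.e. the mainframe's lead halves roughly every 2.5 years. A quick check of those numbers:

```python
# 100,000x when the mainframe is new, shrinking 16x (2^4) per decade:
gap = 100_000
for years in range(0, 50, 10):
    print(f"after {years} years, a new PC is ~{gap:,.0f}x slower")
    gap /= 16

# 100,000 -> 6,250 -> ~391 -> ~24 -> ~1.5: close to parity after 40 years.
```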

  • by BBCWatcher ( 900486 ) on Wednesday March 04, 2009 @11:32PM (#27073487)

    Emulating a $500 PC Server on a $500,000 mainframe... yeah, that sounds real cost-effective!

    Then why aren't you driving a Yugo (I presume)? It has a lower price, doesn't it? :-)

    If you run this simultaneously in 1000 virtual machines, do you need 1000 Windows licenses?

    That's up to Microsoft. I can't wait to see Microsoft's mainframe price list. :-) But if Microsoft wants to be competitive with Oracle and IBM, to pick a couple software vendor examples, then for server software at least (e.g. Microsoft SQL Server) they'd license by core. And yes, a core is a core is a core. How's the price of that Yugo looking? :-)

  • Frig Sizes (Score:3, Interesting)

    by BBCWatcher ( 900486 ) on Wednesday March 04, 2009 @11:43PM (#27073557)
    Good question. There are two physical sizes available: a System z10 BC and System z10 EC. The BC is roughly the same size as a single conventional rack of pizza box servers, and the EC is a double wide (about two racks). In refrigerator terms that's probably closer to the JennAir (or two for the EC) but well shy of the cow locker. Here's a picture of the EC shown to scale with two IBM executives: http://japan.zdnet.com/news/hardware/story/0,2000056184,20368219,00.htm?tag=z.keyword.st [zdnet.com]
  • Re:In other news... (Score:3, Interesting)

    by Hadlock ( 143607 ) on Thursday March 05, 2009 @02:13AM (#27074345) Homepage Journal

    Guaranteed to take up 90% of cycles and 75% of RAM
     
    I thought this was a joke, and I thought my mom's computer was virus-laden, but after 3 years of agonizingly slow response time I finally uninstalled Norton and installed AVG and lo and behold, the computer runs normally again(!). Turns out even though she had a 1.8 GHz P4, she only had 512 MB of RAM, which was causing the computer to absolutely crawl when trying to run Norton in the "background". Might as well have had the computer encoding h.264 videos continuously for all the good it did.

  • Re:Easy answer (Score:3, Interesting)

    by Amouth ( 879122 ) on Thursday March 05, 2009 @02:14AM (#27074347)

    I love Linux as much as everyone else, but in reality there isn't yet a product outside of Exchange that gives the same amount of seamless integration that Exchange gives.

    But Exchange sucks ass when talking to the rest of the world directly.

    So we use slack+sendmail+clamav+spamassassin to buffer and filter all incoming mail, then use another one to buffer and send outgoing mail. While it adds a couple of seconds to a couple of minutes of delay on incoming mail based on filter lists, it is a perfect setup for us, and it's all running virtualized on the same box.

  • Not new (Score:3, Interesting)

    by 1s44c ( 552956 ) on Thursday March 05, 2009 @05:02AM (#27075055)

    This isn't new. Windows NT used to run on HP Superdomes. The project was scrapped as there wasn't any customer demand for it. Google for 'NT on superdome'.

    NT in this environment wasn't any faster or any more stable but it was WAY more expensive.

  • by mabinogi ( 74033 ) on Thursday March 05, 2009 @05:38AM (#27075177) Homepage

    and Unisys and HP have done it too, for just as long.
    I think the original poster has a confused idea of exactly what a mainframe is.

  • Re:Floating Point (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Thursday March 05, 2009 @07:21AM (#27075591) Journal

    Intel and IBM just disagree about this aspect of CPU design

    Interestingly, the 8087 (and, therefore, all subsequent x86 FPUs) has instructions for loading and storing BCD data. This lets you combine the computational accuracy of binary floating point with the storage density of binary-coded decimal.

    Yes, it's a mystery to me why anyone would want this too.
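    For the curious: the x87's FBLD/FBSTP instructions move an 18-digit packed-BCD value (two decimal digits per byte, plus a sign byte) between memory and the FPU stack. A rough Python illustration of the packing format itself, ignoring the sign byte (the helper names are made up, and this is plain Python, not the x87 doing the work):

```python
def to_packed_bcd(n: int, width: int = 9) -> bytes:
    """Encode a non-negative integer as packed BCD: two decimal digits
    per byte, 9 digit-bytes as in the x87's 18-digit format."""
    digits = str(n).rjust(width * 2, "0")
    return bytes(int(digits[i]) << 4 | int(digits[i + 1])
                 for i in range(0, width * 2, 2))

def from_packed_bcd(b: bytes) -> int:
    """Decode packed BCD back to an integer."""
    return int("".join(f"{byte >> 4}{byte & 0xF}" for byte in b))

encoded = to_packed_bcd(1234)
assert encoded[-2:] == b"\x12\x34"   # each nibble is one decimal digit
assert from_packed_bcd(encoded) == 1234
```

Two digits per byte is the "storage density" point: 18 decimal digits fit in 9 bytes, versus 18 bytes as ASCII text.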

  • Re:Reliability. (Score:2, Interesting)

    by jra ( 5600 ) on Thursday March 05, 2009 @10:52AM (#27077041)

    I believe you've misspelt "100.0%".

    And on a zSeries, I might actually say "100.00%"; mainframes really are impressive, impressive pieces of machinery.

    The canonical story, of course, is the guy who ran 44,763 copies of Linux directly under z/VM (I think it was) before the machine he was on said "Ok, then; that's enough". He wasn't *doing* anything, of course, but let's assume you could only get 5% of that number running real loads on the same hardware.

    That's *2 cabinets*. How many 1RU dual quads can you fit in 2 cabinets? 84? So, 672 cores.

    Compared to 2,000ish VMs.

    And, really: 5 year uptimes, punctuated only by "We need to shut it down because we need to upgrade the shunt bypass on the UPS that feeds it". (True story)

    Google Linux+390 and read a little bit...
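    For reference, the back-of-envelope comparison above, spelled out (numbers from the comment itself; 42 servers per rack is an assumption behind the "84 in 2 cabinets" figure):

```python
# Record idle-guest count, scaled down to the 5% "real load" guess:
vm_record = 44_763
realistic_vms = int(vm_record * 0.05)
print(realistic_vms)  # 2238 -- the "2,000ish" VMs

# Two cabinets of 1RU dual quad-core boxes:
racks, servers_per_rack, cores_per_server = 2, 42, 8
print(racks * servers_per_rack * cores_per_server)  # 672 cores
```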
