IBM Touts Supercomputers for Enterprise

Stony Stevenson writes "IBM has announced an initiative to offer smaller versions of its high-performance computers to enterprise customers. The first new machine is a QS22 BladeCenter server powered by a Cell processor. Developed to power gaming systems, the Cell chip has also garnered interest from the supercomputing community owing to its ability to handle large volumes of floating-point calculations. IBM hopes that the chips, which currently power climate modeling and other traditional supercomputing tasks, will also appeal to customers ranging from financial analysis firms to animation studios."
  • by HockeyPuck ( 141947 ) on Wednesday May 14, 2008 @11:23PM (#23413682)
    I've seen more than my share of traditionally big-iron applications (databases, data warehousing, etc.) being moved off of specialized hardware (IBM p595s, Sun E15Ks, HP Superdomes, etc.) onto (or at least attempts being made to move them onto) commodity hardware. Management hears,

    "We can replace that $2m server with 10racks of servers each $1k and after 3yrs we just throw them away and replace them with the latest and greatest x86 based hardware with 2x performance still $1k/server?
    Now IBM wants to push highly specialized blades. Somewhere someone's saying, "How many x86 servers can we get for one Cell blade?"

    Personally, I'm sick of managing farms of physical servers, and with the introduction of VMware, I'm now managing 3x the number of machines (albeit virtual machines). Have an FTP server? Run that in its own image. Also have a syslog server? Yet another virtual machine. I really hope this sells well. Maybe I can now play PS3 games in the datacenter.

  • Re:Flamage (Score:3, Interesting)

    by zappepcs ( 820751 ) on Wednesday May 14, 2008 @11:59PM (#23413876) Journal
    Actually, the Cell processor is a rocking piece of hardware. I'd like to see something like it added to motherboards as an optional coprocessor arrangement. Yes, I realize that the code/compilers would have to be redone, but I think that this is something that would make a huge difference in performance with little actual hardware/cost increase. It's a thought anyway.
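
    Something like the sketch below is the kind of workload such a coprocessor would absorb: a streaming multiply-add loop over float arrays. It's plain, portable C for illustration only (the function and sizes are made up, not anything from IBM's SDK); on a real Cell the loop would be vectorized and run on an SPE against data DMA'd into its local store.

    ```c
    /* Illustrative sketch: the kind of streaming float kernel a
     * Cell-style coprocessor is built for.  Plain portable C here; on a
     * real Cell the inner loop would be vectorized (4 floats per SIMD
     * register) and offloaded to an SPE. */
    #include <stdio.h>

    #define N 4096

    /* y[i] = a * x[i] + y[i]  -- classic SAXPY, pure multiply-add. */
    static void saxpy(float a, const float *x, float *y, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        static float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }

        saxpy(2.0f, x, y, N);          /* the obvious offload candidate */

        printf("y[10] = %f\n", y[10]); /* expect 21.0 */
        return 0;
    }
    ```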
  • by Doc Ruby ( 173196 ) on Thursday May 15, 2008 @12:20AM (#23413996) Homepage Journal
    From IBM's detailed press release [ibm.com]:

    the QS22 boasts an open environment, utilizing the flexibility of Red Hat Enterprise Linux as the primary operating system and the open development environment of Eclipse.


    That means that a PS3 running Linux [psubuntu.com], even with its ridiculously low 512MB RAM, can be used as a $500 development platform for these CellBE BladeServers.

    And, in turn, some QS22 software might be usable on the PS3, if it can be ported to work within the tiny RAM, or if someone hooks an i-RAM bank to the SATA port as swap/ramdisk, perhaps using iSCSI over its GbE port for storage.
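
    As a rough illustration of what "ported to use the tiny RAM" tends to mean in practice, here's a hypothetical sketch (the 1/8 fraction and the chunking scheme are arbitrary assumptions, not anything from the QS22 software): size the working buffer from whatever physical memory Linux reports and stream the dataset through it in chunks.

    ```c
    /* Hypothetical sketch, not QS22 code: size a working buffer from
     * whatever physical RAM Linux reports (a PS3 shows far less than a
     * QS22), then stream a large dataset through it in chunks instead
     * of loading it all at once. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        long pages = sysconf(_SC_PHYS_PAGES);
        long psize = sysconf(_SC_PAGE_SIZE);
        long long phys = (long long)pages * psize;

        /* Arbitrary choice: let the working set use at most 1/8 of RAM. */
        size_t chunk = (size_t)(phys / 8);
        printf("physical RAM: %lld MB, chunk size: %zu MB\n",
               phys >> 20, chunk >> 20);

        unsigned char *buf = malloc(chunk);
        if (!buf)
            return 1;

        /* A real port would loop here, reading one chunk at a time from
         * disk (or iSCSI) into buf and processing it before the next. */

        free(buf);
        return 0;
    }
    ```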
  • Re:IBM pricing (Score:3, Interesting)

    by BiggerIsBetter ( 682164 ) on Thursday May 15, 2008 @12:49AM (#23414166)

    There's only one Pixar and there's only one ILM. The other million animation studios out there don't have budgets anywhere close to these guys', particularly considering the turnaround on hardware (today's super cluster is tomorrow's pile of junk). Renderfarms will be staying with cheaper vendors (which also means white box for most) for some time to come.
    I understand your argument, but there's an upside. Maybe someone will put energy into Cell-processor renderfarm software, so all the fiscally challenged shops can buy a small rack of PS3s and go at it? IBM or not, the Cell is still a damn quick CPU for serious number crunching (raytracing, say).
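
    To make "serious number crunching" concrete, here's a toy sketch of the sort of inner loop a renderfarm spends its life in: a ray-sphere intersection test. Nothing here is Cell-specific or from any real renderer; it's just the dot-product-and-square-root arithmetic a Cell-class FPU would chew through millions of times per frame.

    ```c
    /* Toy renderfarm arithmetic: does a ray hit a sphere?  This kind of
     * float-heavy test, repeated millions of times per frame, is where
     * a fast FPU earns its keep. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } vec3;

    static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Ray: origin o, unit direction d.  Sphere: center c, radius r.
     * Returns distance to the nearest hit, or -1 if the ray misses. */
    static float ray_sphere(vec3 o, vec3 d, vec3 c, float r)
    {
        vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
        float b = dot(oc, d);
        float disc = b*b - (dot(oc, oc) - r*r);
        if (disc < 0.0f) return -1.0f;
        float t = -b - sqrtf(disc);
        return t > 0.0f ? t : -1.0f;
    }

    int main(void)
    {
        vec3 o = {0, 0, 0}, d = {0, 0, 1};   /* ray along +z           */
        vec3 c = {0, 0, 5};                  /* sphere 5 units away    */
        printf("hit at t = %f\n", ray_sphere(o, d, c, 1.0f)); /* 4.0 */
        return 0;
    }
    ```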
  • Re: Flamage du jour? (Score:4, Interesting)

    by zappepcs ( 820751 ) on Thursday May 15, 2008 @12:53AM (#23414174) Journal
    Even though this might make me sound a bit off: we already have coprocessors for video, network, etc. Why not go a bit further and specialize the hardware just a bit more? Let the Cell do all the real work and sandbox the user on the x86 CPU, in a way that allows the user to be rather free in operation while the real work is done on the Cell processor in a protected manner. That "should" be enough processing power to isolate the user completely from the tasks of the computer itself. The idea would be to sort of create a mainframe/client environment where it would be nearly impossible for the user to accidentally introduce viruses to the system.

    The UI and I/O would be shelled through the x86 system. There are examples of this in some smaller embedded systems, where system memory is separate from user memory, etc. The details are admittedly sketchy, as I haven't worked them out to a degree that would make the proposal sound fully workable. But I do know of examples where techniques like this are used to protect the 'system' while 'user agents' do what they want, without the intrusion of security software at every turn. When the system is turned off, the user space is cleared. The protected system space is always protected.

    Yes, that leaves room for infections on the Cell side to act like rootkits, as there is always some spot that is vulnerable, but it does offer a much more bullet-resistant setup. The effect is not too different from working from a live CD all the time: reboot and all is clean again, but via a more permanent and less inconvenient process. If you run some version of Linux/Unix on the client side and strictly control the communications to the Cell side, it becomes a much tighter box to try to squeeze a virus into. It may also give the Cell side the opportunity to monitor processes on the client/UI side, meaning that keyloggers and such would become a thing of the past. In general, I mean to add horsepower by splitting system tasks from UI tasks and to put a much stronger sandbox around the client, rather than continue lumping all the work on one CPU and letting security run in the same sandbox as the questionable software.

    It's an idea... obviously I do not design motherboards or OSes for a living (IANACSPHD ??)
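
    For what it's worth, the "strictly control the communications" part could look something like the sketch below: a fixed-size command structure and a whitelist of opcodes, with everything else dropped before it reaches the protected side. All the names and opcodes are invented for illustration; this is one way to picture the channel, not a worked-out design.

    ```c
    /* Sketch of a strictly controlled channel: the x86/UI side can only
     * send fixed-size commands with whitelisted opcodes; anything else
     * is rejected before it reaches the protected (Cell) side.  All
     * names here are invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    enum { CMD_RENDER_FRAME = 1, CMD_READ_RESULT = 2, CMD_PING = 3 };

    struct cmd {
        uint32_t opcode;
        uint32_t payload_len;      /* must not exceed the buffer below */
        uint8_t  payload[256];
    };

    /* Returns 1 if the command is acceptable, 0 if it must be dropped. */
    static int validate(const struct cmd *c)
    {
        if (c->payload_len > sizeof c->payload)
            return 0;
        switch (c->opcode) {
        case CMD_RENDER_FRAME:
        case CMD_READ_RESULT:
        case CMD_PING:
            return 1;
        default:
            return 0;              /* unknown opcode: never forwarded */
        }
    }

    int main(void)
    {
        struct cmd ok  = { CMD_PING, 0, {0} };
        struct cmd bad = { 0xdeadbeef, 9999, {0} };
        printf("ok: %d  bad: %d\n", validate(&ok), validate(&bad)); /* 1, 0 */
        return 0;
    }
    ```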
  • by Anonymous Coward on Thursday May 15, 2008 @01:31AM (#23414350)
    I agree, VMware has more development coming out. Lately they seem to be on the bleeding edge. I think the VM Ranger product will go by the wayside soon, as it will be incorporated into Virtual Center.

    If you have looked at the new Virtual Center, you can now patch all types of virtual machines using the VM console, even while they are off. They are obviously reading the files of the virtuals to see what patch levels they are at, etc. It is only a small step from there to see what services are running and use VC to manage those. That will make the virtual-server creep that most VM admins experience much more manageable.
  • Re:This is a Failure (Score:1, Interesting)

    by Anonymous Coward on Thursday May 15, 2008 @06:14AM (#23415560)
    OK, so you have observed people facing problems getting performance out of the Cell.

    These people are not alone, but it looks like they are not the most "advanced" when it comes to programming the Cell.

    More and more games developers are starting to "appreciate" the Cell.

    No, the Cell is not meant to take on the x86 market, so running a MySQL DB on the Cell is not a good idea.

    IBM is targeting specific markets/customers.

    One project where you see the tremendous power of the Cell is the Folding@home project.

    Perhaps you should talk with the people at Stanford, instead of just complaining about the Cell.

    And why would IBM ditch the Cell, when Toshiba is even preparing a laptop with the Cell as a copro?
  • by njcoder ( 657816 ) on Thursday May 15, 2008 @08:57AM (#23416618)

    I've seen more than my share of traditionally big-iron applications (databases, data warehousing, etc.) being moved off of specialized hardware (IBM p595s, Sun E15Ks, HP Superdomes, etc.) onto (or at least attempts being made to move them onto) commodity hardware.
    I'm honestly curious. How well does this really work out for databases and data warehousing?

    One of the benefits, as I understand it, of going with one big-ass server is that the pipelines between CPU and memory are much faster than the Ethernet/Myrinet/InfiniBand/whatever connections between cluster nodes.
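
    To put very rough numbers on "much faster" (these are ballpark, era-appropriate assumptions on my part, not vendor specs): local memory bandwidth is on the order of 10 GB/s, 4x InfiniBand around 1 GB/s, and gigabit Ethernet about 0.125 GB/s, so moving a big working set between nodes costs you one to two orders of magnitude.

    ```c
    /* Back-of-envelope only: time to move a 100 GB working set at rough,
     * era-appropriate bandwidths.  All figures are assumptions, not specs. */
    #include <stdio.h>

    int main(void)
    {
        const double working_set_gb = 100.0;
        const double mem_bw  = 10.0;    /* local memory, GB/s      */
        const double ib_bw   = 1.0;     /* 4x InfiniBand, GB/s     */
        const double gige_bw = 0.125;   /* gigabit Ethernet, GB/s  */

        printf("local memory : %6.0f s\n", working_set_gb / mem_bw);
        printf("InfiniBand   : %6.0f s\n", working_set_gb / ib_bw);
        printf("gigabit Eth  : %6.0f s\n", working_set_gb / gige_bw);
        return 0;
    }
    ```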

    Depending on how you're going to be doing the clustering, you're either going to have some type of cluster FS or you're going to be using shared NAS rather than a SAN for the data. This is also going to be slower than local disks or a dedicated iSCSI connection.

    From my understanding, the clustering technologies for databases aren't very good when you have a lot of writes to the db. Data warehousing is different, but I would think that something like an e15k or M9000 would be better for really big databases.

    Things like 3D rendering and certain types of data analysis and modeling can be clustered more easily. For the companies that need this type of service, it's probably better and cheaper to use a utility computing provider like Sun's Grid. Why pay the electricity/cooling/square-footage costs of running a supercomputer when you're only running reports quarterly or annually, or when you're rendering on a per-job basis that gets billed to the client? It's easier and probably cheaper to bill for a service than to try to factor in the overhead of your own rendering engine.

    Add to that, you're now maintaining 10 racks of servers vs. the equivalent of one. Assume 40 nodes per rack with 2U for switches. That's about 40-80 amps of power per rack. Times 10 racks, that's a lot of power (rough numbers below), as well as heat that will work the AC harder. If I remember correctly, an E15K is going to draw somewhere around 50-80 amps. All the extra power consumption and cooling is going to add up, not to mention the space and the cost of wiring up and testing all those nodes.
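
    Just to make that gap explicit, here's the arithmetic using the amp figures above; the 208 V circuit voltage is my own assumption, everything else is taken straight from the estimate.

    ```c
    /* Rough power comparison using the figures above: 40-80 A per rack,
     * 10 racks, vs. a single big box drawing 50-80 A.  The 208 V circuit
     * voltage is an assumption; the amp numbers come from the estimate. */
    #include <stdio.h>

    int main(void)
    {
        const double volts = 208.0;
        const double racks = 10.0;

        double cluster_lo  = 40.0 * racks * volts / 1000.0;   /* kW */
        double cluster_hi  = 80.0 * racks * volts / 1000.0;
        double big_iron_lo = 50.0 * volts / 1000.0;
        double big_iron_hi = 80.0 * volts / 1000.0;

        printf("10 racks of x86 : %5.1f - %5.1f kW\n", cluster_lo, cluster_hi);
        printf("one big server  : %5.1f - %5.1f kW\n", big_iron_lo, big_iron_hi);
        return 0;
    }
    ```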

    With 400 servers in 10 racks, even at $1K per server, between the racks and the labor to set it all up you've got to be in the $750K-$1M ballpark. Even if you only really need 200 servers to get performance similar to an M9000, that's still not a huge savings once you weigh the operating costs, setup costs, and loss of simplicity.

    In the long run does it really make sense?

    I remember reading a comment about how PayPal is set up. They have a large number of Linux servers as their front and middle tiers. In the back are three big Sun boxes that handle the databases. That, to me, seems like the right type of setup.

    As for virtualization: a lot of people are just plain doing it wrong.
