Supercomputer Repossessed By State, May Be Sold In Pieces

1sockchuck writes "A supercomputer that was the third-fastest machine in the world in 2008 has been repossessed by the state of New Mexico and will likely be sold in pieces to three universities in the state. The state has been unable to find a buyer for the Encanto supercomputer, which was built and maintained with $20 million in state funding. The supercomputer had the enthusiastic backing of Gov. Bill Richardson, who saw the project as an economic development tool for New Mexico. But the commercial projects did not materialize, and Richardson's successor, Susana Martinez, says the supercomputer is a 'symbol of excess.'"

  • by scheme ( 19778 ) on Thursday January 03, 2013 @10:12PM (#42470641)

    My experience is that it would often be better to provision a cluster of EC2 boxes to run the task than to build a purpose-built supercomputer (with some exceptions). One disadvantage of clustered machines is longer communication latency, so tasks that require lots of process-to-process communication will run slower. But many problems can be tweaked, with their search spaces sliced up, so that this latency is not a big deal.

    There are huge classes of problems where you can't tweak things like this. Basically, any simulation where things at large distances interact, or where there is a lot of communication, can't really be shoved into a cluster. For example: computational fluid dynamics (e.g. anything looking at air or water moving over surfaces), weather simulations, molecular dynamics, gravity simulations, etc. All of these types of problems will run like crap if you try to use EC2 instances for them.

    Also, have you really priced out what computation and data storage on EC2 cost? There are a few studies showing that EC2 on-demand instances will cost you 2-3 times more than purchasing comparable servers, even with power, cooling, and maintenance/administration factored in. See this [] or this [], for example. EC2 is great if you want to explore certain problems and need to scale up temporarily or on demand, but if you have a base level of work that you'll be doing all the time, it's much more efficient to buy your own hardware. That is doubly true if your problems need any significant amount of storage space.
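    As a back-of-the-envelope check on that 2-3x figure, here's a toy break-even calculation. Every price in it is made up for illustration (real on-demand and server prices would differ); it just shows the shape of the comparison, renting an instance 24/7 versus buying a server and amortizing it.

    ```python
    # Break-even sketch: rent an on-demand instance around the clock,
    # or buy a comparable server and run it for its useful life.
    # All dollar figures below are assumptions, not real quotes.

    HOURS_PER_YEAR = 24 * 365

    ec2_hourly = 0.50            # assumed on-demand price, $/hour
    server_price = 3500.0        # assumed comparable server, one-off
    server_yearly_opex = 500.0   # assumed power/cooling/admin, $/year
    lifetime_years = 3

    ec2_total = ec2_hourly * HOURS_PER_YEAR * lifetime_years
    owned_total = server_price + server_yearly_opex * lifetime_years

    print(f"EC2 over {lifetime_years} years:   ${ec2_total:,.0f}")
    print(f"Owned over {lifetime_years} years: ${owned_total:,.0f}")
    print(f"Ratio: {ec2_total / owned_total:.1f}x")
    ```

    With these assumed numbers the ratio lands in the 2-3x range the studies report; the point is that a base load running all the time amortizes the purchase, while bursty work doesn't.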

  • by mikael ( 484 ) on Thursday January 03, 2013 @10:23PM (#42470717)

    It's not going to be entirely broken up and sold as scrap. Since the system is a modular cluster, the three institutions want to split it into three blocks: UNM wants 10 racks, New Mexico State University wants 4 racks, and New Mexico Institute of Mining and Technology would take 2 racks. Each has its own campus space and energy consumption budget, so no single one could afford the entire system.

    Look at the statistics of the system:

    Type of system: SGI Altix ICE 8200 cluster
    Number of racks: 28
    Number of processor cores per rack: 500
    Total number of cores: 14000
    Processing power: 172 Trillion calculations per second
    Power consumption: 32 kilowatts per cabinet (not sure if racks == cabinets, but that would mean 896 kilowatts total if so; power is a rate, so kilowatts, not "kilowatts per hour")
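    Spelling out the arithmetic behind those numbers (assuming racks == cabinets, as the parenthetical says):

    ```python
    # Sanity-check of the listed stats.
    racks = 28
    cores_per_rack = 500
    kw_per_rack = 32

    total_cores = racks * cores_per_rack   # core count across all racks
    total_kw = racks * kw_per_rack         # instantaneous draw, kilowatts
    kwh_per_day = total_kw * 24            # energy per day, kilowatt-hours

    print(total_cores, total_kw, kwh_per_day)  # 14000 896 21504
    ```

    So the draw is 896 kW (a rate), and the machine consumes about 21,500 kWh of energy per day of full operation.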

    Normally, when someone requests time on a supercomputer, they put forward a funding bid and get some grant money, which pays for a fixed amount of time on a fixed number of cores. The system's administrators then book the time and schedule it alongside the other running tasks. If there are just a few regular customers and each has a fixed amount of funding, then it's going to be cheaper for each of them to own their own portion of the system.
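    To make that concrete, here's a toy cost-per-core-hour calculation for owning one rack's worth of the machine. Every number is assumed (electricity price, utilization), and it counts power only, not hardware, space, or staff:

    ```python
    # Toy ownership cost for one rack, power only.
    # Electricity price and utilization are assumptions for illustration.
    rack_cores = 500
    rack_kw = 32.0
    power_price = 0.10      # $/kWh, assumed
    utilization = 0.80      # fraction of the year the rack is busy, assumed

    hours = 24 * 365
    core_hours = rack_cores * hours * utilization
    power_cost = rack_kw * hours * power_price
    cost_per_core_hour = power_cost / core_hours

    print(f"{core_hours:,.0f} core-hours/year, "
          f"${cost_per_core_hour:.4f}/core-hour (power only)")
    ```

    Under these assumptions a busy rack delivers millions of core-hours a year at a fraction of a cent each in electricity, which is why a group with steady, funded demand comes out ahead owning its slice.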

    I'd imagine Intel and SGI thought they could work together to build this system, house it somewhere locally, lease it out to whoever needed it, gain experience with parallel processing, and make a healthy profit while slowly growing the customer base. Prospective customers probably balked at the cost of doing their processing on an external system that wasn't under their control, versus running on desktop PCs with Kepler/CUDA/OpenCL setups.

  • by scheme ( 19778 ) on Thursday January 03, 2013 @10:35PM (#42470805)

    There are not many problems these days that cannot be parallelized and split up to run on a large amount of off-the-shelf hardware. It is much easier to grow a Beowulf cluster to add performance than to redesign everything to eke out every bit of capability from top-of-the-line hardware. It is also much easier to redesign your problem so that it can take advantage of parallelism. I agree that this was probably a boondoggle by a politician wanting to get some publicity for himself.

    You're mistaken. There's a large class of problems that are pleasantly parallel and can be split up like you say (e.g. einstein@home or seti@home type problems). However, any problem that requires a lot of internode communication, such as computational fluid dynamics, gravity simulations, weather or climate simulations/forecasting, combustion/flame problems (e.g. modeling engines), or molecular dynamics, will require a system like this. A Beowulf cluster using ethernet to connect nodes will leave most of the CPUs waiting for information from neighboring nodes before they can start the next iteration. A lot of the cost in a system like this comes from a very low-latency, high-speed network. Ideally, you'd want every CPU connected to every other CPU, but that is infeasible, so you end up trying to maximize the number of connections and the bandwidth, while minimizing collisions with other CPU-to-CPU communications, for a given amount of money. It's not cheap by any means.
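    A toy latency model makes the point. The latency and compute numbers below are assumed round figures, not measurements; the model just charges each iteration one message to each neighbor on top of the local compute, and ignores bandwidth entirely.

    ```python
    # Per-iteration time of a halo-exchange style code under an assumed
    # latency-only model: local compute plus one message per neighbor.

    def time_per_iteration(compute_us, neighbors, latency_us):
        """Local work plus one latency-bound exchange per neighbor."""
        return compute_us + neighbors * latency_us

    compute_us = 100.0   # assumed local work per iteration, microseconds
    neighbors = 6        # 3-D domain decomposition: 6 face neighbors

    ethernet = time_per_iteration(compute_us, neighbors, 50.0)   # ~50 us latency
    infiniband = time_per_iteration(compute_us, neighbors, 2.0)  # ~2 us latency

    print(f"ethernet:   {ethernet:.0f} us/iter")
    print(f"infiniband: {infiniband:.0f} us/iter")
    ```

    With these assumed figures the ethernet cluster spends three quarters of each iteration waiting on the wire, which is exactly the "CPUs waiting for neighboring nodes" failure mode described above; adding more CPUs makes it worse, not better.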

  • Re:Damn (Score:4, Informative)

    by c0lo ( 1497653 ) on Thursday January 03, 2013 @11:22PM (#42471197)

    Probably, but it still doesn't make much sense.

    C'mon, it's /. , what do you expect?

    Nerds need to be exact, not necessarily to make sense... (otherwise they wouldn't be different enough from non-nerds to justify a distinctive term)

  • by Macman408 ( 1308925 ) on Friday January 04, 2013 @01:39AM (#42472267)

    I think another problem is that there's probably not much reason for a business to be physically located close to a supercomputer. It would be just as easy to use it from out of the state, with the added benefit that your business can be located somewhere with a larger talent pool. Without that draw, there's not much reason for the state to sponsor such a project, since there's not likely to be a net positive gain for the taxpayers. For a country, it makes more sense to invest in a supercomputer, as there are higher barriers to people and data moving across international boundaries than across state borders. Of course, countries also generally have more use for supercomputers themselves.

    Also, from looking at the stats, it's not a terribly efficient machine. It's currently at #185 on the Top500 list (not bad, for being fairly old), but it burns 861 kW. Only 286 of the Top500 systems list their power, and of those it comes in at #271 in terms of efficiency, or #241 in total power. So it's in the 63rd percentile speed-wise, but the 5th percentile in terms of efficiency. This is largely related to its age; the top 84 of 286 systems were all built in 2011 or 2012. I could imagine that having such a low efficiency makes it quite a bit harder to turn a profit. Especially when the most efficient machines on the list (including the fastest machine in the world) use 14-16 times less power for the same performance level.
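    For reference, the efficiency figure works out like this, using the 172-teraflop number quoted upthread and the 861 kW from the Top500 listing:

    ```python
    # Rough efficiency of Encanto in MFLOPS per watt.
    encanto_tflops = 172.0   # performance figure quoted upthread
    encanto_kw = 861.0       # power from the Top500 listing

    # 1 TFLOPS = 1e6 MFLOPS; 1 kW = 1e3 W
    mflops_per_watt = (encanto_tflops * 1e6) / (encanto_kw * 1e3)

    print(f"Encanto: {mflops_per_watt:.0f} MFLOPS/W")
    ```

    That's roughly 200 MFLOPS/W, so a machine that is 14-16x more efficient would be around 3,000 MFLOPS/W for the same workload.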
