Supercomputing Technology

Supercomputer Repossessed By State, May Be Sold In Pieces 123

1sockchuck writes "A supercomputer that was the third-fastest machine in the world in 2008 has been repossessed by the state of New Mexico and will likely be sold in pieces to three universities in the state. The state has been unable to find a buyer for the Encanto supercomputer, which was built and maintained with $20 million in state funding. The supercomputer had the enthusiastic backing of Gov. Bill Richardson, who saw the project as an economic development tool for New Mexico. But the commercial projects did not materialize, and Richardson's successor, Susana Martinez, says the supercomputer is a 'symbol of excess.'"
  • Imagine... (Score:5, Funny)

    by PaulBu ( 473180 ) on Thursday January 03, 2013 @07:07PM (#42469415) Homepage

    A Beowulf cluster of these! :)

    Paul B.

    • by Taco Cowboy ( 5327 )

      A supercomputer is a tool.

      Like any other kind of tool, if used correctly, a supercomputer can be very beneficial, and can generate a lot of profit and/or prestige for its owner.

      But of course, like any other kind of tool, if a supercomputer is ***NOT*** used correctly, it'll become a burden, a waste of money, an eyesore.

      • by TheGratefulNet ( 143330 ) on Thursday January 03, 2013 @07:46PM (#42469751)

        And supercomputers often require recoding of the 'app' so that it runs better and makes better use of the hardware.

        When I was at SGI (and Cray was still part of them) I got some time on a Cray machine to run code that I usually ran on Indys and Octanes. I expected a HUGE increase in speed but saw only about 2x. My app was not broken down to be Cray-friendly, so I never got any real speed out of it.

        Unless you go to lengths to use the SC in 'its preferred way', it's a wasted and expensive resource.

        • Re: (Score:3, Interesting)

          by frosty_tsm ( 933163 )
          In my experience it would often be better to provision a cluster of EC2 boxes to run the task than to build a purpose-built supercomputer (with some exceptions). One disadvantage of clustered machines is longer communication latency, so tasks that require lots of process-to-process communication will run more slowly. Many problems can be tweaked, with the search space sliced up, so that this latency is not a big deal.

          A governor who thinks that spending $20m on this will bring more businesses to his state in the world of the
          • by scheme ( 19778 ) on Thursday January 03, 2013 @09:12PM (#42470641)

            In my experience it would often be better to provision a cluster of EC2 boxes to run the task than to build a purpose-built supercomputer (with some exceptions). One disadvantage of clustered machines is longer communication latency, so tasks that require lots of process-to-process communication will run more slowly. Many problems can be tweaked, with the search space sliced up, so that this latency is not a big deal.

            There are huge classes of problems where you can't tweak things like this. Basically, any simulation where things interact over large distances or where there is a lot of communication can't really be shoved into a cluster. For example: computational fluid dynamics (i.e. anything looking at air or water moving over surfaces), weather simulations, molecular dynamics, simulating gravity, etc. All of these types of problems will run like crap if you try to use EC2 instances for them.
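
            As a rough illustration of why the interconnect matters for these problems, here is a back-of-envelope model of one timestep of a halo-exchange style simulation. Every number in it is an assumption picked for the sketch, not a measurement of EC2 or of any real machine:

            # Time for one iteration: local compute plus exchanging boundary
            # ("halo") data with each of the neighboring nodes.
            def step_time(compute_s, neighbors, halo_bytes, latency_s, bandwidth_bps):
                comm_s = neighbors * (latency_s + halo_bytes * 8 / bandwidth_bps)
                return compute_s + comm_s, comm_s

            for name, lat, bw in [("gigabit Ethernet", 100e-6, 1e9),
                                  ("low-latency interconnect", 2e-6, 16e9)]:
                total, comm = step_time(compute_s=2e-3, neighbors=6,
                                        halo_bytes=512 * 1024,
                                        latency_s=lat, bandwidth_bps=bw)
                print(f"{name}: {total * 1e3:.1f} ms/step, "
                      f"{100 * comm / total:.0f}% spent waiting on the network")

            With these made-up figures the Ethernet-connected nodes spend over 90% of every step waiting on the network, which is exactly the "run like crap" failure mode described above.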

            Also, have you really priced out what computation and data storage on EC2 cost? There are a few studies showing that an EC2 on-demand instance will cost you 2-3 times more than purchasing a comparable server, even with power, cooling, and maintenance/administration factored in. See this [indico.cern.ch] or this [google.com], for example. EC2 is great if you want to explore certain problems and need to temporarily scale up, or want the ability to scale up on demand, but if you have a base level of work that you'll be doing all the time, it's much more efficient to buy your own hardware. That is doubly true if your problems need any significant amount of storage space.
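
            For anyone who wants to redo that comparison, the break-even arithmetic is simple to sketch. Every price below is a placeholder (not AWS's actual rates and not the figures from the linked studies), so plug in real quotes before drawing conclusions:

            hours_per_year  = 24 * 365
            on_demand_rate  = 1.00    # $/hour for a comparable instance (placeholder)
            server_capex    = 6000    # purchase price of your own server (placeholder)
            server_lifetime = 3       # years of useful life (placeholder)
            power_cooling   = 1500    # $/year for power, cooling, rack space (placeholder)
            admin_share     = 2000    # $/year share of sysadmin time (placeholder)
            utilization     = 0.8     # fraction of the year the machine is actually busy

            cloud_per_year = on_demand_rate * hours_per_year * utilization
            owned_per_year = server_capex / server_lifetime + power_cooling + admin_share

            print(f"cloud: ${cloud_per_year:,.0f}/year   owned: ${owned_per_year:,.0f}/year")

            The higher the sustained utilization, the more the owned box wins; drop utilization toward zero and the on-demand instance wins, which is the "temporarily scale up" case above.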

            • You raise excellent points (both the classes of problems and the costs). If you keep things busy, it's cheaper to own the hardware yourself. The hard part is keeping machine utilization that high (not to say NOAA or others aren't doing that).
          • by bmo ( 77928 )

            Between you and the bitcoin guy getting the only 5 point posts in here, I have to say that yours is the dumber of the two responses.

            You obviously don't know what supercomputers do and what is "trivially parallel" (what you can do in ordinary clusters) and what you need an actual supercomputer for. And neither do you care, and that's the saddest part of all this.

            --
            BMO

          • by Macman408 ( 1308925 ) on Friday January 04, 2013 @12:39AM (#42472267)

            I think another problem is that there's probably not much reason for a business to be physically located close to a supercomputer. It would be just as easy to use it from out of the state, with the added benefit that your business can be located somewhere with a larger talent pool. Without that draw, there's not much reason for the state to sponsor such a project, since there's not likely to be a net positive gain for the taxpayers. For a country, it makes more sense to invest in a supercomputer, as there are higher barriers to people and data moving across international boundaries than across state borders. Of course, countries also generally have more use for supercomputers themselves.

            Also, from looking at the stats, it's not a terribly efficient machine. It's currently at #185 on the Top500 list (not bad, for being fairly old), but it burns 861 kW. Only 286 of the Top500 systems list their power, and of those it comes in at #271 in terms of efficiency, or #241 in total power. So it's in the 63rd percentile speed-wise, but the 5th percentile in terms of efficiency. This is largely related to its age; the top 84 of 286 systems were all built in 2011 or 2012. I could imagine that having such a low efficiency makes it quite a bit harder to turn a profit. Especially when the most efficient machines on the list (including the fastest machine in the world) use 14-16 times less power for the same performance level.
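
            To make that efficiency gap concrete, here is the arithmetic using the 861 kW figure above together with the roughly 172 trillion calculations per second quoted further down the thread (peak numbers, so treat the result as an estimate rather than a benchmark):

            perf_flops = 172e12   # ~172 trillion calculations/second (peak, from the thread)
            power_w    = 861e3    # 861 kW, as listed on the Top500

            mflops_per_watt = perf_flops / power_w / 1e6
            print(f"Encanto: ~{mflops_per_watt:.0f} MFLOPS/W")    # about 200

            # "14-16 times less power for the same performance" implies roughly:
            print(f"leaders: ~{14 * mflops_per_watt:.0f}-{16 * mflops_per_watt:.0f} MFLOPS/W")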

    • Imagine...... (Score:1, Offtopic)

      ...how quickly iTunes would load on this sucker!!

      errp...

      waiting...

      still waiting...
  • by jtownatpunk.net ( 245670 ) on Thursday January 03, 2013 @07:12PM (#42469461)

    2008 technology. Seems more like three universities are getting stuck with it than anything else. The parts will be 5 years old by the time everything is divided up and distributed. That's fine if you're redistributing old desktops to set up a lab for kids to type up term papers or something, but supercomputers are supposed to be cutting edge. Maybe they can use it for a computer history class. "This is how we built supercomputers back in the day."

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I suspect that you can still upgrade those SGI Altix boxes with the newest Itanium CPUs, so I think one can still squeeze respectable performance out of them. Additionally, they are not clusters but large single-image systems, i.e. only one instance of Linux runs across 1024 or 2048 CPUs, so the resulting system may be more suitable for some tasks than a cluster of "normal" PCs.

    • Re:Oh, boy! (Score:4, Insightful)

      by Sir_Sri ( 199544 ) on Thursday January 03, 2013 @07:30PM (#42469599)

      Useful for educational purposes. You give people a chance to execute code on an actual distributed cluster setup without taking away CPU time from actual projects, and it's still going to be a lot more powerful than most people have access to.

      • by cnettel ( 836611 )
        You still need fairly serious power and cooling facilities to make any use of them, as well as some sysadmin staff to set up and maintain the queuing system. If an outdated cluster were kept intact, you could possibly benefit from having already solved the utilities and system setup issues, but if they are dividing it into pieces, those benefits are lost as well.
      • by Lumpy ( 12016 )

        Not really. There is already a project to make a 1024-processor Raspberry Pi "supercomputer" that will give people the ability to do "supercomputing" at an affordable level.

        This thing is a waste of money now for anything but grinding it up for its metals. It's already 5 years out of date, and its power draw per unit of processing is so high that it's useless to most universities.

        • Clustering Raspberry Pis?

          Why? They have NO I/O that is worth a damn, and it's all about I/O, since you NEED fast, low-latency interconnects to make a cluster really worth it.

          "when all you have is a Pi, everything looks like a slashdot article"

          • Re:Oh, boy! (Score:5, Insightful)

            by Immerman ( 2627577 ) on Thursday January 03, 2013 @10:00PM (#42471009)

            Actually, for learning how to do good supercomputer programming it might be quite viable. After all most beginner code is horribly inefficient, and most beginner projects are quite small. On anything resembling a "real" supercomputer even the most inefficient code will still finish within seconds - whereas on slow hardware with poor I/O a poorly coded implementation may take many minutes or even hours versus the seconds needed for a well-written program to do the same task. Technically speaking the difference between .1 seconds and 10 seconds is just as informative as the difference between 10 seconds and 17 minutes, but the latter carries far more psychological weight.

            Besides which - how many entry-level tasks can you think of that could actually make use of even a few dozen clustered "real" systems, much less a thousand? Hands-on experience in how to effectively partition a task between numerous nodes shouldn't be underestimated, and it's a rare university that's going to want to turn beginning programmers loose on its big iron; other departments want to use it for real research. A $30-50k cluster, on the other hand, might be just what the CS department ordered.
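
            A minimal sketch of the kind of partitioning exercise such a teaching cluster is good for, using Python's multiprocessing module to stand in for real nodes (the problem, chunking scheme, and node count are all made up for illustration):

            from multiprocessing import Pool

            def partial_sum(bounds):
                lo, hi = bounds
                return sum(i * i for i in range(lo, hi))

            def make_chunks(n, nodes):
                step = n // nodes
                return [(k * step, n if k == nodes - 1 else (k + 1) * step)
                        for k in range(nodes)]

            if __name__ == "__main__":
                n, nodes = 10_000_000, 8
                with Pool(nodes) as pool:
                    total = sum(pool.map(partial_sum, make_chunks(n, nodes)))
                print(total)   # matches the serial answer, computed in 8 pieces

            The instructive parts (chunking, load balance, combining partial results) show up just as well on cheap hardware; only the timings change.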

          • by Lumpy ( 12016 )

            No, it's not. I/O means absolutely nothing at all in this context.

            If you can't have a $2,500 "supercomputer" to learn on because you are all I/O snobby, then you will never learn how to code for a supercomputer.

            I would rather have a lab with 50 of these Raspberry Pi supercomputers than one out-of-date machine that quadruples the university's power bill.

    • Useful for calculations? No... useful for learning how to use and program supercomputers? Definitely.

      • by Gilmoure ( 18428 )

        New Mexico also has a yearly supercomputer challenge for kids in elementary school through high school. Having more systems available for the kids will make things easier.

      • Actually, even for doing calculations it may be useful. Keep in mind that NM is the poorest state in the union - only the wealthiest universities have any sort of big iron at all. I used to manage the "supercomputer" for a mid-sized NM university, among other duties - I was the only professional IT guy for the CS department, and the cluster consisted of half a rack of dual-core servers. Even at that, the chemistry department was the only one to ever tax it for any length of time, and their simulations we

    • by c0lo ( 1497653 )

      That's fine if you're redistributing old desktops to set up a lab for kids to type up term papers or something but supercomputers are supposed to be cutting edge.

      And... why don't you trust them to break the supercomputer in such a way that the pieces will all have cutting edges?

    • Alright, so this thing won't place on the Top 500 list, but that's not the point. It's a real supercomputer and an ideal learning environment for distributed computing. No, a room full of desktops and gigabit Ethernet is not the same thing.

      • by jd2112 ( 1535857 )

        Alright, so this thing won't place on the Top 500 list, but that's not the point. It's a real supercomputer and an ideal learning environment for distributed computing. No, a room full of desktops and gigabit Ethernet is not the same thing.

        On the other hand that's pretty much how Google got started.

    • by Buzh ( 74397 )

      Actually, it is sort of useful to get an old supercomputer. If you have the space, cooling, and power for it, it's a great development and teaching platform, and it lets you do trials or smaller runs without taxing your allocation on the bigger national/international HPC systems.

      I recently had the pleasure of giving away about a third of our recently replaced HPC cluster to the physics department of the university I work for, and they were very, very happy indeed. [uniforum.uio.no] (in Norwegian)

  • by Anonymous Coward

    As a citizen of New Mexico, I can say Susana Martinez is probably the dumbest and most shortsighted politician I've ever seen in office. She makes George W. Bush look like Albert W. Einstein.

    I know that's off-topic, but Goddamn is that woman stupid. I just had to say something.

    • Comment removed based on user account deletion
      • by Anonymous Coward

        Oh, no, I wasn't talking about this issue specifically.

        I was just pointing out that she's a horrible governor and dumber than a bag of sand. I have nothing against cutting off a program that's way over budget and doesn't have anything to show for it.

      • by Anonymous Coward
        I'll play along and say that there's a difference between a failed project and calling a $20 million project that had a clear goal and failed to meet it a "sign of excess". Million-dollar bonuses are signs of excess, not projects that have real potential uses.
      • by ae1294 ( 1547521 )

        I'll play along and say you are right for the sake of argument. But if a $20 million project approaches or goes over budget with little to show for it along the way, why keep throwing money at it when there are plenty of other supercomputers to purchase a time slice from?!

        All they needed in one word.... BITCOIN MINING... Ok two words.. All they needed in two words...

    • by mikael ( 484 ) on Thursday January 03, 2013 @09:23PM (#42470717)

      It's not going to be entirely broken up and sold as scrap. Since the system is built from self-contained racks, the universities and the mining institute want to split it into three blocks: UNM wants 10 racks, New Mexico State University wants 4 racks, and the New Mexico Institute of Mining and Technology would take 2 racks. They each have their own physical campus space and energy consumption budgets, so none of them could afford to run the entire system.

      Look at the statistics of the system:

      Type of system: SGI Altix ICE 8200 cluster
      Number of racks: 28
      Number of processor cores per rack: 500
      Total number of cores: 14,000
      Processing power: 172 trillion calculations per second (roughly 172 teraflops)
      Power consumption: 32 kilowatts per cabinet (not sure if racks == cabinets, but that would mean 896 kilowatts of total draw if so)
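
      The totals follow directly from the per-rack figures (a quick sanity check, assuming racks and cabinets are the same thing):

      racks          = 28
      cores_per_rack = 500
      kw_per_cabinet = 32

      print(racks * cores_per_rack)   # 14000 cores
      print(racks * kw_per_cabinet)   # 896 kW of total draw (a rate, not "per hour")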

      Normally, when someone requests time on a supercomputer, they put forward a funding bid and get some grant money, which pays for a fixed amount of time and number of cores. The administrators of the system then book the time and schedule it alongside the other tasks running. If there are just a few regular customers and they each have a fixed amount of funding, then it's going to be cheaper for each of them to have their own portion of the system.

      I'd imagine Intel and SGI thought they could work together to build this system, house it somewhere locally, and lease it out to whoever needed it, gaining experience with parallel processing as well as making a healthy profit while slowly building up a customer base. Prospective customers probably freaked out at the cost of doing their processing on an external system that wasn't under their control versus running on desktop PCs with Kepler/CUDA/OpenCL setups.

      • Don't forget as well that research-oriented simulations are likely to be at least brushing up against real cutting-edge science and/or technology, and the researchers will be loath to run their simulations on hardware outside their control. After all, if they come up with something big, what's to stop some IBM lackey from making a copy of their results and selling it to the highest bidder? At least their grad students have a little skin in the game. Is it mostly an ego trip? Probably. But ego trips compris

      • by ThorGod ( 456163 )

        The "New Mexico Institute of Mining and Technology" should not be called a "mining institute". It's really New Mexico Tech: a small college with a strong emphasis on the STEM fields.

        • by wwphx ( 225607 )
          NM Mining is also a home away from home for Mythbusters. They used their rocket rail at least twice, and used their explosives range for trying to make diamonds with explosives and also to test the RPG vs. revolver myth from the movie Red. Mythbusters also used Apache Point Observatory as part of their lunar landing myth episode; my wife operates APO's Apollo lunar laser ranging system.
    • Isn't she also the one who was bought by Texas corporate interests? Or am I thinking of someone else?

      (long-time NM citizen. Wasn't interested in politics then, even less interested now, but I still hear some of the most egregious stories.)

  • by Anonymous Coward

    Susana uses an iPhone.

  • Fools! (Score:5, Funny)

    by kurt555gs ( 309278 ) <kurt555gs@nOsPaM.ovi.com> on Thursday January 03, 2013 @07:21PM (#42469531) Homepage

    Think of how many Bitcoins this thing could make. Someone should tell New Mexico.

  • by Anonymous Coward

    Not a symbol of excess. It's a symbol of how the government cannot and should not try to identify (and fund) particular technologies (see: Solyndra). Let the market determine the market. Central planning hasn't worked for anyone. Jeeeez.

    • You're absolutely right. The integrated circuit (NASA), the internet (DARPA), interstate highways, public utilities, etc. have never contributed to economic growth or social development.

      Admittedly "planned economies" don't have a great track record, but then again I can't offhand think of any examples not controlled by short-sighted despotic governments, so that's not necessarily much of an attack against the concept. Targeted investment and development on the other hand has no shortage of success stories t

    • Well, except for that whole internet thing everyone's always talking about.
  • by virtigex ( 323685 ) on Thursday January 03, 2013 @08:34PM (#42470265)
    There are not many problems these days that cannot be parallelized and split up to run on a large amount of off-the-shelf hardware. It is much easier to grow a Beowulf cluster to add performance than to redesign code to eke out every bit of capability of top-of-the-line hardware. Much easier, also, to redesign your problem so that it can take advantage of parallelism. I agree that this was probably a boondoggle by a politician wanting to get some publicity for himself.
    • That's essentially what modern supercomputers are, including this one.

    • by scheme ( 19778 ) on Thursday January 03, 2013 @09:35PM (#42470805)

      There are not many problems these days that cannot be parallelized and split up to run on a large amount of off-the-shelf hardware. It is much easier to grow a Beowulf cluster to add performance than to redesign code to eke out every bit of capability of top-of-the-line hardware. Much easier, also, to redesign your problem so that it can take advantage of parallelism. I agree that this was probably a boondoggle by a politician wanting to get some publicity for himself.

      You're mistaken. There's a large class of problems that are pleasantly parallel and can be split up like you say (e.g. einstein@home or seti@home type problems). However, any problem that requires a lot of internode communication, such as computational fluid dynamics, gravity simulations, weather or climate simulations/forecasting, combustion/flame problems (e.g. modeling engines), or molecular dynamics, will require a system like this. A Beowulf cluster using Ethernet to connect nodes together will result in most of the CPUs waiting for information from neighboring nodes before they can go through an iteration. A lot of the cost in a system like this comes from having very low-latency, high-speed network connections. Ideally, you'd want to have every CPU connected to every other CPU, but that is impossible, so you end up trying to maximize the number of connections and bandwidth while minimizing collisions with other CPU-to-CPU communications for a given amount of money. It's not cheap by any means.
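
      The dependency structure is easy to see even in a toy example. In a stencil-style update (the kernel at the heart of CFD, weather, and diffusion codes), every cell needs its neighbors' current values on every iteration, so nodes owning adjacent sub-domains have to trade boundary values every single step; a pleasantly parallel work unit has no such per-step dependency. A minimal, single-node illustration (assumed, illustrative code, not from any real solver):

      def stencil_step(u, dt=0.1):
          """One explicit 1-D diffusion step; cell i needs u[i-1] and u[i+1]."""
          return [u[i] + dt * (u[i - 1] - 2 * u[i] + u[i + 1])
                  for i in range(1, len(u) - 1)]

      # If the grid were split across two nodes at index k, the cells next to
      # the split would need u[k-1] and u[k] from the other node before every
      # single call -- that per-iteration exchange is what the low-latency
      # interconnect pays for.
      u = [0.0] * 50
      u[25] = 1.0
      for _ in range(100):
          u = [u[0]] + stencil_step(u) + [u[-1]]   # keep the boundary values fixed
      print(max(u))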

  • I think they should set up a not-for-profit foundation to maintain and administer the box, and open it up to public access via ssh, like SDF.
    • by rtb61 ( 674572 )

      Face it, you've got a bunch of dopey Republicans who would rather take a political shot at the previous Democrats than do anything useful with the supercomputer. Anyone with half a brain would simply rent out access at negotiated rates to those three universities rather than put it out of commission. Of course the whole scam will be to sell it as cheaply as possible, spend as much as possible on breaking it up, and then blame everything on the Democrats in the next election cycle.

      • Or dopey Democrats who will find a way to spend as much as possible to operate it, then blame the Republicans....

        Your partisan shill weasel words annoy me. Get out.
    • by Gilmoure ( 18428 ) on Thursday January 03, 2013 @10:14PM (#42471125) Journal

      It was a non-profit organization that was running this, and they owed money to SGI for maintenance.

  • by fufufang ( 2603203 ) on Thursday January 03, 2013 @09:17PM (#42470681)

    Where science and engineering are considered excess, but litigation and lawsuits are considered normal.

  • by Anonymous Coward

    This is a very old machine. It was a piece of crap the day it was turned on and never got better. It isn't worth the electricity and cooling even when broken up. For what it will take to dismantle, move, reinstall, power, and cool the individual racks, you could get something smaller, less power-hungry, brand new, and under support for half the money.

    The whole thing needs to get scrapped. What the state has actually done here is find a way to avoid paying to have it scrapped by "gifting" it to the universi

  • Well, get some priest admins, then. Exorcise the demons!!
  • That would be a testament to government. One cent on the dollar, or less.
