Supercomputing Technology

Building the World's 4th Fastest Supercomputer

ngkabra writes "In November 2007, a previously unheard-of supercomputer called EKA, built by CRL (Computational Research Laboratories), India, came out of nowhere to become the 4th fastest supercomputer in the world. It is also the only supercomputer in the top 10 that hasn't taken any government funding — which means it has no strings attached against commercial exploitation. That is one of the reasons why Yahoo! chose EKA for the cloud computing research that they announced at the Hadoop Summit earlier this week. Yesterday, I attended a presentation by the team that built EKA, and they touched upon a lot of the technical details of EKA and the challenges faced in designing and building it, which makes for interesting reading."
This discussion has been archived. No new comments can be posted.

  • Well, interesting enough for me to read to the end of the article. The article, however, does seem to lack substance and details. It seemed to spend a lot of time apologising for inadequacies, and seemed to be aimed at a less technical audience. That is fine; writing to meet the target audience's needs or wants is good (I guess I just wasn't in the target audience). The article raised more questions for me than it answered, though (which, again, might be good--it will at least make me google for more information).
    • You would think that the target audience would be more tech-savvy people, though. It is an article about a supercomputer, which is something I don't think Joe Schmoe really cares about too much. I really want to know more about this interconnect architecture. We're about to replace our monster SGI system, which uses a star-pattern interconnect, with an IBM one that I have yet to go to school for.
      • by ngkabra ( 245586 )
        > I really want to know more about this interconnect architecture.

        Actually I had the same question myself. The talk itself did not go into the details of the interconnect architecture (other than to point out its advantages). I have sent an e-mail to the architect of Eka asking for these details. Should hopefully get some references from him. (In which case I'll post it as an update to TFA.)

        In any case, the presentation was targeted towards a general tech audience (i.e. people who don't really have a background…
      • You are absolutely correct. And you have alerted me to the question: what audience was the article targeted at? Reading it (TFA) again, I am not sure it is targeted at any audience, but was written by a person without a clue. I can't for the life of me imagine why someone would allude to facts and then skim over them saying "I don't understand". Maybe it was to meet a deadline; I don't know.
        • Well, I am currently working on a project with CRL; our aim is to increase the speed of certain operations as much as possible, and believe me, they are going to be pushing this in a big way in a couple of months. I think they are going to be making some announcement in July, because we have been given a June deadline for our implementations, and believe me, the amount of money they are spending on this... scary, man. I can't go into details (mostly because I signed an NDA), but they can very easily increase the…
      • by LilGuy ( 150110 ) on Saturday March 29, 2008 @11:51AM (#22905572)
        I concur. If you asked Joe Schmoe if he knew what a super computer was he'd probably tell you Windows Vista, which would be close, except he'd leave out the words "anything that can run".
    • Please, could someone reference some information that isn't written by a 6-yr-old?

      I'm genuinely interested in this 'perfect difference' network and would like to understand why it makes a decent interconnect. (BTW, I don't think Cray has ever built a hypercube machine.)
  • by bogaboga ( 793279 ) on Saturday March 29, 2008 @12:52AM (#22902934)
    India is now on track to earning the respect it deserves. Good for them! But I wonder what countries like Russia have hidden from the public. I am sure they have respectable hardware. Those very deadly [nuclear] weapons and missile systems must have had their design done on some pretty cool hardware.
    • Lazy Americans. Who needs a supercomputer when you have pens, paper, and an endless supply of cheap labor?
      • by Yetihehe ( 971185 ) on Saturday March 29, 2008 @05:29AM (#22903652)
        Actually, people make rather poor supercomputer nodes. They think slowly, make way too many errors, and have horrible I/O speed, not to mention that the best have about 7 abstract bytes of working memory (they can hold about 7 concepts at a time). It would take thousands of years and millions of people to make the calculations this supercomputer can manage in minutes.
        • by mkiwi ( 585287 )
          > It would take thousands of years and millions of people to make the calculations this supercomputer can manage in minutes.


          You will be assimilated.
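
          For the curious, a quick back-of-envelope check of that claim, as a Python sketch. Assumptions flagged: EKA's reported Linpack rate of roughly 118 Tflop/s (its Nov 2007 Top500 figure) and, generously, one hand calculation every five seconds per person.

            eka_flops = 118e12     # assumed: EKA's reported Linpack rate, ~118 Tflop/s
            people = 1_000_000     # "millions of people"
            human_rate = 1 / 5.0   # assumed: one hand calculation every 5 seconds

            work = eka_flops * 60                    # one minute of machine time
            seconds = work / (people * human_rate)   # human-army time needed
            print(f"{seconds / (3600 * 24 * 365):,.0f} years")  # ~1,122 years

          One minute of machine time costs a million human calculators on the order of a thousand years, so a few minutes really does land in the "thousands of years" range.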

    • by jd ( 1658 ) <imipak@[ ]oo.com ['yah' in gap]> on Saturday March 29, 2008 @01:54AM (#22903120) Homepage Journal
      What does it take to make a supercomputer? Well, very fast switches, very low-latency, high-bandwidth interconnects, and decent CPUs. (You don't need mega-fast CPUs, you just need lots of them.)

      Let's start with the switches. You can parallelize network traffic - fragment the packets, stick full headers on each fragment, switch the fragments in parallel, reassemble, have the queue re-order inbound traffic by sequence number. (That last step isn't too hard, you have some fraction of the sequence number and the full fragment number map to a unique address in a permanently allocated ring buffer. Copy the payload to that address and the packets are in sequence order, not delivery order.) So, instead of having individual switches that are fast, bank the switches up and have the combined virtual switch run very fast. You can do that on exportable commodity hardware.
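
      A minimal sketch of that ring-buffer reordering in Python (names and fragment sizes invented for illustration): because the low bits of the sequence number select the slot, copying each payload into its slot puts the stream in sequence order regardless of delivery order.

        import random

        RING_SLOTS = 8  # must cover the switch bank's maximum reordering window

        def reassemble(fragments):
            """Copy each payload to the slot addressed by its sequence number."""
            ring = [None] * RING_SLOTS
            for seq, payload in fragments:
                ring[seq % RING_SLOTS] = payload
            return ring

        # Six fragments of one packet, delivered out of order by parallel switches.
        frags = [(0, "he"), (1, "ll"), (2, "o "), (3, "wo"), (4, "rl"), (5, "d!")]
        random.shuffle(frags)
        print("".join(reassemble(frags)[:6]))  # -> "hello world!"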

      Or, you could sneak through Homeland Security (who have much more interest in nipple rings than dangerous weapons) a bunch of 24-channel 5Gb/channel InfiniBand switches. You wouldn't need many of those to get a decent Quake LAN party, and not many more before you could run weapons design software. It is unclear how many are required to run Vista, once service pack 1 is installed.

      Interconnects. Obviously, InfiniBand is hellishly fast. So is SCI. 10Gb Ethernet, ideally with iWARP extensions, would be much slower but still perfectly good for a commodity cluster. If you scrap the idea of having machine-to-machine communication and do memory-to-memory communication, you could actually use PCI-e 2.0 as one gigantic interconnect. Ideally, you'd have the memory appear as two separate devices - slave and master - so that direct memory-to-memory RDMA could be initiated. A lot of very similar work has already been done by US supercomputer giants, and given how many have either been bought, gone bankrupt, or otherwise vanished, it's reasonable to suppose large quantities of such RAM could have "migrated" overseas.

      CPUs - well, there are some respectable 16-bit pile-of-PCs clusters. One was reviewed on Slashdot some time back. Even a cluster of Cell processors, if large enough and well-enough programmed, could be very effective. A hostile nation wouldn't need high-end 4x4 multi-threaded multi-core SMP systems, although again, given how juvenile airport security is, I can't imagine it would be hard for someone to export, say, a couple of hundred motherboards at that spec.

      OK, what about an OS? Who needs one? Anyone with a copy of OSKit or something similar can work at almost bare-metal levels as if they had a full OS. If they did want a full OS, then NetBSD or MOSIX would be quite sufficient. Or they could take an OS project like Exopc and add high-performance networking to it.

      Software? If you've a decent copy of BLAS, LAPACK and some solvers, tightly optimized for the platform, you're set. Those core maths functions are critical. Since the functions and API are fixed, it would not be impossible for someone wanting raw power to have put them into an FPGA, SoG or ASIC. Collective operations are also nasty, but they too can be done entirely in hardware, giving you orders of magnitude speedup over conventional software solutions. Synchronizing is the third killer, but there are meta schedulers to handle that and you could again place those on dedicated hardware.
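
      To see why those tightly optimized maths functions are make-or-break, compare a BLAS-backed matrix multiply with the same arithmetic in interpreted code. A sketch: numpy dispatches the product to whatever BLAS it was built against (OpenBLAS, MKL, ...), while the naive loop below computes only a single row of the result, yet typically still loses by orders of magnitude.

        import time
        import numpy as np

        n = 1024
        A = np.random.rand(n, n)
        B = np.random.rand(n, n)

        t0 = time.perf_counter()
        C = A @ B                      # BLAS dgemm: all n rows
        blas = time.perf_counter() - t0

        t0 = time.perf_counter()       # pure Python: one row of the product
        row = [sum(A[0, k] * B[k, j] for k in range(n)) for j in range(n)]
        naive = time.perf_counter() - t0

        print(f"BLAS, {n} rows: {blas:.3f}s; naive, 1 row: {naive:.3f}s")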

      In short, although I couldn't afford to build a top 500 machine, it is only the affording of it that is a problem, and foreign countries are quite well aware of that. Especially after China built its first (publicly-announced) Government-funded Beowulf. Supercomputing is easy, it's the price tag that isn't.

    • I believe Russia had some very credible VAX clones during that era, reverse-engineered and built in bulk. Also a rather eclectic array of 2nd-gen stuff, a couple of PC (including Apple ][) clones, and the odd mainframe. See http://en.wikipedia.org/wiki/History_of_computer_hardware_in_Soviet_Bloc_countries/ [wikipedia.org]
      • by RCL ( 891376 )
        Yes, I remember that they still had a VAX in my university (I'm Russian) even back in the 1990s. We even had a few lessons on it (but nothing VMS- or architecture-specific). From what I know, that VAX had been smuggled in the 1980s with the help of some African or Scandinavian proxy (COCOM made it impossible to buy from US firms directly), so it wasn't a clone (although clones did exist).

        Russia had two major branches in its hardware industry: one based on reverse-engineered clones and a "domestic" one. Neither…
    • If by "pretty cool hardware" you mean "pen and paper" then yes.
    • There is a distinct difference in philosophy between American/Western physicists and Russian physicists, especially during the Cold War era. Namely, Russians are famous for working out closed-form solutions to problems, or integrations doable by hand, whereas Americans were typically more apt to "run" to the computer. I would hazard a guess that the Cold War era weapons built by Russians were developed with a lot of sweat rather than ultra-advanced computing resources. Besides, you have to notice that a majo…
  • by longacre ( 1090157 ) * on Saturday March 29, 2008 @01:07AM (#22902966) Homepage
    Will tech support for this Indian computer be outsourced to Dell reps in Texas?

    • Where's the "imagine a Beowulf cluster of these" post?

      I think there would only be need for about six of these in the entire world.

        rd
    • Re: (Score:3, Interesting)

      by tuomoks ( 246421 )
      Maybe funny, but look at the investments of some large Indian companies (no names). Some are heavily investing, or at least interested in investing, in support centers in the US. Computing is getting more and more global but, I have to say, Indians are fast learning where they can find not-so-expensive support. Good and bad, depending on how you look at it.
  • ...to save you reading the article: it actually IS the world's fastest supercomputer, but it's running Vista, so now it only comes in 4th.

  • by edwardpickman ( 965122 ) on Saturday March 29, 2008 @01:16AM (#22903012)
    A Vista-spec machine in non-governmental hands!
  • This is old news (Score:4, Informative)

    by Apoorv Khatreja ( 1263418 ) on Saturday March 29, 2008 @01:28AM (#22903046) Homepage
    I have known about EKA for years now. Being an Indian, I try to be aware of all the developments in IT in and around India, so I knew about this project when it was at its earliest stages.

    What most people don't know is that this project is funded by the Tata Group, and is directly influenced by TCS (Tata Consultancy Services), to which I am strongly connected. The initial goal of this project was to surpass IBM's Blue Gene in the field of supercomputing, but then things went wrong (as is usually the case with Indian projects), and it landed in 4th position, which I still feel is quite a big feat for a privately funded project.
    • Re: (Score:2, Interesting)

      by ngkabra ( 245586 )
      Another thing that is impressive about Eka is that it was built for just $30 million. This is, according to the Eka architects, significantly lower than the costs of the other computers that are in this league.

    • by Sheen ( 1180801 )
      I did not read the article, but if the supercomputer cost less per THz/GHz/MHz than the others at the top, I would still say it's quite impressive for a country like India to do something like this! PS: I'm North European.
    • > What most people don't know is that this project is funded by the Tata Group

      Nice. We like big tatas.

      We like nano [slashdot.org] tatas too.
    • "What most people don't know is that this project is funded by the Tata Group"

      You mean this tata group? http://www.savethetatas.com/ [savethetatas.com]

      • Tata is a big player here in India. They control almost everything - communications, IT, motors, steel, infrastructure, power, beverages, satellites and research. This new supercomputer only strengthens their monopoly.
  • I didn't see in the article if it is capable of running Vista in Aero mode :)

    Probably means no...

  • Why didn't this hit Slashdot sooner? Anyway, it seems to be very "green" as well. It looks as though only one computer more powerful than it beats it on the complementary Green500 list:

    http://www.green500.org/lists/2008/02/ranks1-100.php [green500.org]
  • > It is also the only supercomputer in the top 10 that hasn't taken any government funding
    So this implies that IBM has taken government funding for their fastest server? That's news to me...
    • by Anakron ( 899671 ) on Saturday March 29, 2008 @02:32AM (#22903238)

      > Blue Gene has government funding?
      Err, yes? Blue Gene/L is situated *inside* Lawrence Livermore National Laboratory. I'm pretty sure the Department of Energy funds the entire project. IBM is the largest, but not the only, technology partner in it.
      • by Tiosman ( 614633 )
        There is a large BG/L at Livermore because Livermore bought it through a competitive procurement process, just like other national labs and private companies. There was definitely no public funding used in the development of the BlueGene product line. However, IBM does use government money for research-oriented development, like X10 through HPCS funding.

        If you believe Tata bought this cluster with a business plan to recover its cost in X years, you are mistaken. This machine is (advertisement + ego pumping +…
  • ... oh, never mind.....


    I'll just have to be content with a Beowulf cluster of these things imagining me in Russian.
  • by HockeyPuck ( 141947 ) on Saturday March 29, 2008 @02:45AM (#22903270)
    # 1800 blades,
    # 80TB storage. Simple SATA disks. 5.2Gbps throughput.

    So 80TB * 1024GB/TB / 1800 blades gives us about 45GB/blade. If they're using "simple SATA disks", this would imply internal disks, and 1800 internal disks would have an aggregate throughput much higher than 5.2Gb/s (5.2Gb/s / 1800 blades ≈ 2.95Mb/s per disk). Now, typically you'd boot the nodes from the network (so you can change the identity of a node easily by booting it from a different image) from some sort of FC array accessed via an IB-to-FC gateway. However, 5.2Gbps is an odd number to get to, since FC comes in 2 and 4Gb formats (1Gb Fibre Channel is outdated and 2Gb is on the way out).
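
    The arithmetic above, spelled out (using the 1024-based conversions those figures imply):

      storage_gb = 80 * 1024      # 80TB of storage, in GB
      blades = 1800
      link_mbps = 5.2 * 1024      # 5.2Gb/s aggregate, in Mb/s

      print(f"{storage_gb / blades:.1f} GB per blade")      # ~45.5 GB
      print(f"{link_mbps / blades:.2f} Mb/s per disk")      # ~2.96 Mb/s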

    While I always see all the CPU details in these articles, I really wish they'd publish more about the storage requirements and methods rather than just stating "we have x TB of disk...". How do they back up all the finished datasets? Tape? MAID? VTL?

  • The posting title reminds me of the Simpsons gag of Homer imagining what it would be like to be a moderately rich man. "Tee hee! I could rent anything I want!"

  • I couldn't help but think of the episode where the computer on Red Dwarf became dramatically smarter at the expense of lifespan. This computer is tremendously fast, but it has to be backed up and rebuilt all the time. It seems to be the reliance on thousands of off-the-shelf parts that creates this problem. I wonder if this is always the case with supercomputers, or if the problem can be overcome with improvements like flash storage?
    • Re: (Score:3, Interesting)

      by Gorobei ( 127755 )
      It's really not that much of a problem. Supercomputers have very different hardware vs support cost trade-offs compared to consumer devices.

      True story: a few months ago, my program was running slowly, and I needed a lot more CPUs. Yelled to the guy a few desks away, and he gave me the names of a few idle compute farms. Job was 50% complete after 10 minutes, I thanked him, and laughingly said that I like to measure my compute power in megawatts rather than MIPS.

      Now, a couple of megawatts of compute don't…
  • My understanding is that the list of Fastest Supercomputers [top500.org] is for supercomputers that are publicly funded - which is why we don't see any mention of Google or Microsoft clusters. Yet the summary claims that this one has not taken any government funding.

    So then how can this cluster claim to be fourth fastest, when they really are not competing with the publicly funded clusters of the main list?

    Obviously, we cannot expect the big corporate supercomputers to spill their beans to contest this, but it…
    • You think very wrong.
      Funding has nothing to do with it.
      It's the usage: as a supercomputer.
      20k 1U servers running as shared hosts at an ISP would be a supercomputer if they had the interconnect and the software background. But they aren't, so they don't. Same for Google, etc.

      You will notice a lot of company computers (at airlines, automotive companies, etc.) in the Top500.
    • Dude, the site you provided ... was very useful ... I just had to look up the list for November 2007 and the details of the fourth supercomputer to check if the original post got it right.

      And guess what? It's right there ... don't believe me? Well, look up the top 10 [top500.org] list for Nov '07.

      And to quote from that site you pointed to:

      The TOP500 table shows the 500 most powerful *commercially available* computer systems known to us.

      Bold by me.

      Try as I might, I could find no information that this list is only for supercomputers that are publicly funded. Maybe I didn't try hard enough ... could you…

    • There are lots of corporate computers in the Top 500 list. The difference is these are specifically computers tested using the Linpack benchmark. This is typically going to be traditional HPC clusters and not single purpose machines optimized for another need.

      You WILL find Microsoft's Rainier cluster running Windows on 2048 processors as the 116th fastest computer in the world on the Top500 list.
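
      For reference, "tested using the Linpack benchmark" means timing the solution of a large dense system Ax = b and converting to flop/s. A toy single-node sketch (the real HPL is a distributed-memory harness; 2/3*n^3 + 2*n^2 is its standard operation count):

        import time
        import numpy as np

        n = 4000
        rng = np.random.default_rng(0)
        A = rng.standard_normal((n, n))
        b = rng.standard_normal(n)

        t0 = time.perf_counter()
        x = np.linalg.solve(A, b)          # LU factorization + triangular solves
        elapsed = time.perf_counter() - t0

        flops = (2 / 3) * n**3 + 2 * n**2  # HPL's nominal operation count
        print(f"{flops / elapsed / 1e9:.1f} Gflop/s, "
              f"residual {np.linalg.norm(A @ x - b):.2e}")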
  • These words make no sense. This machine uses a Clos topology, without source routing, and using rather small 24-port crossbars, I might add. Nothing new there; it has been done for 20 years. It's full bisection on paper, but head-of-line (HoL) blocking statistically reduces it to ~60% efficiency. And without adaptive routing, there is no way around it.
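
    That ~60% figure is easy to reproduce with a toy simulation. A sketch assuming uniform random traffic and plain FIFO input queues (no virtual output queueing); the classic analytical limit for large switches is 2 - sqrt(2) ≈ 0.586, and a 24-port crossbar sits just above it:

      import random

      def hol_throughput(ports=24, slots=50_000, seed=1):
          """Saturation throughput of an input-queued crossbar whose inputs
          are always backlogged: only the head cell of each input queue can
          contend, so a losing cell blocks everything queued behind it."""
          random.seed(seed)
          head = [random.randrange(ports) for _ in range(ports)]  # HoL destinations
          delivered = 0
          for _ in range(slots):
              contenders = {}
              for inp, dst in enumerate(head):
                  contenders.setdefault(dst, []).append(inp)
              for inps in contenders.values():
                  winner = random.choice(inps)            # each output serves one cell
                  head[winner] = random.randrange(ports)  # winner reveals its next cell
                  delivered += 1
              # losing inputs keep the same blocked head cell
          return delivered / (ports * slots)

      print(f"{hol_throughput():.3f} of full bisection")  # ~0.6 for 24 ports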

    This machine has no new concepts or new hardware, boring.
  • by PPH ( 736903 )

    Build the 4th fastest supercomputer:

    1. Build the world's fastest supercomputer
    2. Load Vista

    Want to drop off the end of the Top 500 [top500.org] list? Load SP1.
