TeraGrid v. Distributed Computing 124

Nevyan writes "After three years of development and nearly a hundred million dollars, the TeraGrid has been running at or above most people's expectations for such a daunting project. On January 23, 2004 the system came online and provided 4.5 teraflops of computing power to scientists across the country. However, the waiting list for TeraGrid is long, including a bidding process through the National Science Foundation's (NSF's) Partnerships for Advanced Computational Infrastructure (PACI), and many scientists with little funding but bright ideas are being left behind. While the list of supercomputer sites and their peak power is growing, how is the world of Distributed Computing faring?"
  • by Iesus_Christus ( 798052 ) on Sunday July 18, 2004 @05:25PM (#9733693)
    The problem with using distributed computing for everything is that the number of people willing to let others use processing power on their computer is not infinite. It is a very large number, but eventually everyone who wants to/knows how to help out their favorite cause will have something already installed. In addition, the more useful endeavors that use distributed computing, the fewer users you will get for each, and only the 'interesting' projects will get many users. Who wants to use their computing power to analyze some boring old physics experiment when you could be finding aliens or curing cancer?

    Distributed computing has its uses, but remember: the public will only be willing to help you as long as they feel like they're contributing to something worthwhile.
    • by The_Mystic_For_Real ( 766020 ) on Sunday July 18, 2004 @05:32PM (#9733718)
      This could possibly be beneficial to science, whose purpose is ultimately to serve humanity. This creates something of a democracy in science. Now the public can choose what problems it wants solved and play a direct role in helping while they sleep.

      I think that this is a Good Thing (TM). Distributed computing has the potential not only to further the cause of science, but to bridge the gap between the public and the scientists.

      • beneficial to science, whose purpose is ultimately to serve humanity


        Only if they are investigating cannibalism. The purpose of science is the advancement of knowledge. Service to humanity, if it happens, is incidental.
          • You need to remember, humans invented mathematics and science. Numbers, and the relations and functions used in science and mathematics, are human creations to describe the universe, much like language, only with more stringent rules.
          • by hunterx11 ( 778171 ) <hunterx11@NOSpAm.gmail.com> on Sunday July 18, 2004 @06:24PM (#9734042) Homepage Journal
            Mathematics and science are neither arbitrary inventions nor entirely self-evident discoveries. They are our attempts to understand and categorize the universe in the most objective manner possible. I would even argue that language is a much more abstract type of categorization. Despite mutually unintelligible differences in languages, all languages are used to describe the same reality.

            What I'm trying to say is that the semantics of how we describe the universe may be arbitrary, but the universe is objectively describable.

        • by Anonymous Coward
          The purpose of science is the advancement of knowledge.

          Science doesn't have a purpose. It's a study of phenomena. At least, that's the scientific method:

          The principles and empirical processes of discovery and demonstration considered characteristic of or necessary for scientific investigation, generally involving the observation of phenomena, the formulation of a hypothesis concerning the phenomena, experimentation to demonstrate the truth or falseness of the hypothesis, and a conclusion that validates

        • I wish schools would do their service to humanity and teach people proper grammar and punctuation.

          • Speaking as a person who loves the diversity and complexity possible in the English language: we've got one Fsck'd up Language!

            To list only a few rants:
            • vastly inconsistent spelling;
            • multiple phonemes attached to one character (I think this is the way to properly note that one letter has lots of different sounds);
            • a complete lack of a distinction between singular and plural 'you' (a very common situation, solved in the Southern U.S. with "y'all");
            • many homonyms; no representation (alphabetic characters) fo
            • > solved in the Southern U.S. with "y'all"

              Who would have guessed that ignorant hicks could improve the English language?

              I suppose it makes sense, though, that someone with less education could come up with simple & obvious solutions (although I'd prefer using a word that didn't make you sound stupid).
      • by samael ( 12612 ) <Andrew@Ducker.org.uk> on Sunday July 18, 2004 @06:05PM (#9733921) Homepage
        Now the public can choose what problems it wants solved

        Jesus, there's a horrible thought. I've met the public (and seen its choice in TV). I'd rather have monkeys choose.


          • Now the public can choose what problems it wants solved

          Jesus, there's a horrible thought. I've met the public (and seen its choice in TV). I'd rather have monkeys choose.


          Well, the idea is that you, as a member of said public, take responsibility more seriously instead of just dissing it because others do.

        • Your Gorilla agent (Score:1, Interesting)

          by rvw ( 755107 )
          Jesus, there's a horrible thought. I've met the public (and seen its choice in TV). I'd rather have monkeys choose.

          You might be right about those monkeys. In Holland, we have the Beursgorilla (http://www.beursgorilla.nl/ [beursgorilla.nl]). This gorilla decides what stock to buy or sell based on the bananas presented to him. He proves to be better at "advising" than most of the other "real" and expensive advisors.

          For me, the DistributedComputingGorilla might decide what project will run on my computer.
      • You assume they have good judgement as to which causes will end up benefitting them the most.
      • "I think that this is a Good Thing (TM). Distributed computing has the postential to not only further the cause of science, but to bridge the gap between the public and the scientists."

        I do not see how this is in any way bridging the gap between the public and scientists. How is donating your free computer cycles any different than donating a few bucks? You are just donating a resource, cash or cycles, to a project.
        I mean it is not like most people running Seti at home know what an FFT is.
        Not a bad thing min
      • The only things which serve humanity are aliens. To say that science has such a purpose places scientists, who are the best and brightest, in the position of slaves. Worse, it enslaves them to the worst and most ignorant - the degree of their evil or ignorance dictates how much moral claim they can place on the scientists. It reminds me of a quote "You have the privilege of strength, but I have the right of weakness!"

        Disgusting. Your Nickname says it all. My only hope is that after I'm long dead, you
    • This is exactly why there's BOINC, distributed.net, Grid.org, etc. that have multiple projects served to one user-installed program: Instead of projects having to compete for real-time resources, they run work units one after the other, and users can pick and choose which projects are run. This should prevent said burnout for most users.
    • by Alkonaut ( 604183 ) on Monday July 19, 2004 @02:57AM (#9736163)
      Why aren't websites sponsored by applets doing distributed tasks? I'm thinking mainly of websites with huge numbers of visitors, where visitors tend to stay long enough to do any meaningful work (like GMail, for example). Most people don't use more than 5% of their computing power when surfing the web, and an applet is safe and easily distributed.

      Personally, I'd much rather have an applet using 10% of my CPU power instead of an annoying flash banner (which probably itself uses 10% of the CPU...).

      Obviously someone has to pay for internet content, and that to me would be the least intrusive way. Popup blockers will be ineffective by the end of the year. Ads will be inside the site content. Or worse still, the popup window is the main window, while the actual content is spawned as a "pop-under", meaning that if you have a popup stopper, all you get is the ad window...

      CPU cycles are the perfect internet currency. Everyone who visits a website has them.
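
      Just to make the throttling idea concrete, here is a minimal Python sketch (my own illustration, not anything from the article or any real toolbar/applet) of a client that keeps itself to roughly a chosen CPU fraction while crunching made-up work units; the function names and numbers are hypothetical:

      import time

      def crunch_chunk(chunk, iterations=100_000):
          """Stand-in for a real work unit: just burn some CPU on the chunk."""
          acc = 0
          for i in range(iterations):
              acc = (acc + chunk * i) % 1_000_003
          return acc

      def throttled_worker(chunks, cpu_fraction=0.10):
          """Process work units, sleeping between them so that roughly
          cpu_fraction of wall-clock time is spent computing."""
          results = []
          for chunk in chunks:
              start = time.monotonic()
              results.append(crunch_chunk(chunk))
              busy = time.monotonic() - start
              # Sleep so that busy / (busy + idle) is about cpu_fraction.
              time.sleep(busy * (1.0 - cpu_fraction) / cpu_fraction)
          return results

      if __name__ == "__main__":
          print(throttled_worker(range(5), cpu_fraction=0.10))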

    • You're right on here. And limiting the potential even further is the effect that the *cause* might have on the altruism of the user. I for one would think differently about what I do with my spare clock cycles depending on who the end user is. Consider a pure research, potentially common-good project versus a *pure research* project funded by AN Other Megacorp. It's still an interesting idea though - BOINC would seem to me to be the next step in allowing different actions (programmed to a certain exte
    • The problem with using distributed computing for everything is that the number of people willing to let others use processing power on their computer is not infinite.

      Off-topic. Teragrid is a dedicated distributed computing system. Various research centers are purchasing dedicated clusters to participate. For example, instead of three universities each purchasing a large cluster which will sometimes be idle; each will purchase a slightly smaller cluster and use each other's resources when available. In

  • My Personal Vision (Score:3, Interesting)

    by Ignignot ( 782335 ) on Sunday July 18, 2004 @05:31PM (#9733714) Journal
    Is for a different kind of distributed computing client, one that allows you to sign up for different kinds of research programs. For example, you could say "donate half my spare time to AIDS research, 1/4 to math research, and 1/4 to SETI research". Also integrate a method of possible payment for work units completed (and a checking process to remove cheaters) and I think you will have an increase in efficiency in the entire way that we treat computers. Maybe instead of everyone shelling out thousands for top-of-the-line computers whose peak output they only need 5% of the time, they shell out a lot less for a networked computer that buys time from other people's machines. Clearly this wouldn't work in all applications (particularly those requiring low latency) but with improving network connections I think this is a possible future.
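
    As a toy illustration of that kind of split (my own sketch, not BOINC's or any real client's scheduler; the project names and weights are hypothetical), a lottery-style picker in Python that keeps long-run time close to the user's chosen shares could look like this:

    import random

    # Hypothetical user preferences: project name -> share of donated time.
    SHARES = {"aids_research": 0.50, "math_research": 0.25, "seti": 0.25}

    def pick_next_project(time_spent):
        """Lottery-style pick that favours whichever project is furthest
        below its target share of the time donated so far."""
        total = sum(time_spent.values()) or 1.0
        deficits = {p: SHARES[p] - time_spent[p] / total for p in SHARES}
        weights = [max(d, 0.01) for d in deficits.values()]  # keep every project drawable
        return random.choices(list(deficits), weights=weights, k=1)[0]

    if __name__ == "__main__":
        spent = {p: 0.0 for p in SHARES}
        for _ in range(1000):
            spent[pick_next_project(spent)] += 1.0  # pretend each work unit takes one time unit
        total = sum(spent.values())
        print({p: round(t / total, 2) for p, t in spent.items()})  # should land near 0.5/0.25/0.25
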
    • by jpr1nd ( 678149 ) on Sunday July 18, 2004 @05:38PM (#9733777)
      The BOINC [berkeley.edu] platform (that seti@home is switching [slashdot.org] over to) has the ability to divide work between projects as you suggest. Though I'm not really sure that there are very many other projects running on it.
      • by Corporal Dan ( 103359 ) on Sunday July 18, 2004 @09:30PM (#9734852)
        From ClimatePrediction.net:
        Hi, we are still rolling along with BOINC, hoping for an alpha test by the end of the month, beta in July, and hopefully a release in August when David Anderson from SETI/BOINC will be visiting us for a few weeks.


        We threw together a simple sign-up page to be contacted (just once or twice when we're ready for beta testers), so if you want to try out the Windows, Linux, or Mac versions of CPDN please signup here!

        http://climateprediction.net/misc/beta.php
      • The BOINC presentation from Madrid claims folding@home is using BOINC. Unless they've recoded the clients without telling anyone, this is incorrect. Folding@home uses Cosm as its communications framework. There is no reference to BOINC in any of the documentation I've seen from f@h.
      • Though I'm not really sure that there are very many other projects running on it.

        Currently available projects are...

        SETI@home [berkeley.edu]
        Predictor [scripps.edu]-Protein structure prediction

        Coming soon....

        climateprediction.net [climateprediction.net]
        Folding@home [stanford.edu]

        Farther in the future (i.e. pending funding)...
        Einstein@home [physics2005.org] -- a search for gravitational waves.

        In the conceptual stage, since sometime last week...
        neuralnet.net -- studies of the nature of intelligence using neural nets and genetic algorithms

    • by Iesus_Christus ( 798052 ) on Sunday July 18, 2004 @05:39PM (#9733781)
      The idea of payment for work units is interesting. While it would certainly provide incentive for participating in distributed computing projects, I can see two problems with it already:

      1) Getting the money to pay people. One advantage of distributed computing is that you don't have to pay for time on an expensive cluster. That advantage disappears when you pay distributed computing users. Of course, it may still turn out to be cheaper, and there may be users willing to participate for free.

      2) Botnets and profit. We all know of spammers using zombies to peddle goods, and of script kiddies using them to DDoS. What if some enterprising but immoral person decided to use the computing power of his zombies to profit off of the distributed computing payments? With enough zombies, he could easily make a good amount of money off of other people's computers.
      • by Caseylite ( 692375 ) on Sunday July 18, 2004 @06:13PM (#9733980) Homepage
        Another way to pay people would be to offer incentives such as allowing me to write off your process time (wear and tear on my system) as a charitable donation to your non-profit group. ~Casey
      • Simple fix: Provide an authentication method which would require providing real information before you can accept the money. Then track down where all the bots' money is going. Bam, you've got your crook.
      • by Anonymous Coward
        1) Somebody does pay for time on an expensive cluster. They are built and maintained with your (and my) tax money.
        2) Yes, security is a big issue in Grid computing. And it ain't there yet.
      • 2) Botnets and profit. We all know of spammers using zombies to peddle goods, and of script kiddies using them to DDoS. What if some enterprising but immoral person decided to use the computing power of his zombies to profit off of the distributed computing payments? With enough zombies, he could easily make a good amount of money off of other people's computers.

        So? I'd rather the spammers (or "clustered computer users" in this case) used their zombie machines for that. Then the spam would stop.

        • ObAOL: This was my first thought as well. If other distributed projects paid as well or better than spamming, then fewer virus writers would waste time on spam. This would also shift the burden off the system (distributed computing is CPU intensive rather than bandwidth intensive) and leave it purely on the individuals with insecure PCs.

          The only bad possibility is that it might increase the number of zombies. However, this is not necessarily so. It is not evident to me that it would be simple to incre
    • by billstr78 ( 535271 ) on Sunday July 18, 2004 @05:44PM (#9733808) Homepage
      IBM is already making that vision a reality.

      They are in the beta stages of a massive computation-cycles-for-hire program that will allow organizations without the funding for an entire cluster to purchase cycles provided by a large IBM Power cluster.

      It will allow a computation cycle market to eventually arise, much like the wheat, corn or gold markets. Companies will compete to provide cheaper cycles, and small-time scientists around the world will be able to have their computation-intensive problems solved at a fraction of the cost possible today.
      • I noticed you wrote "companies". I would have thought the largest resource is personal computers rather than corporate ones... That aside, if the EFF and similar provided clients and benefitted from the proceeds, it would be a low-hassle way to donate to their projects.
        • While I think that in the beginning you might have a large number of small-scale clients, eventually it will evolve in the same way that agricultural commodities have evolved; that is, we'll have massive networks whose sole purpose is to contribute cycles to a for-pay distributed project.
        • If only you could win lawsuits with distributed computing power. [/wishful]
          I'll bet there are tons of people that would donate processing time to helping defend open source.
    • I suspect that the money given per work unit would be too small.

      While I wouldn't mind making a little cash to eventually help pay for the computer, maintenance, and energy (and heat removal in the summer), I doubt it would happen. I'd love to see someone prove me wrong.
  • Looks good to me (Score:4, Interesting)

    by DruidBob ( 711965 ) on Sunday July 18, 2004 @05:32PM (#9733719)
    There have been big projects like SETI@home, Great Internet Mersenne Prime Search, RC5-64 and many others.

    There are some like the Casino-21 http://www.climate-dynamics.rl.ac.uk/ [rl.ac.uk] and Evolution-at-Home http://www.evolutionary-research.org/ [evolutiona...search.org] too.

    It's becoming easier to create the required code for distributed projects, and it most certainly has become easier to actually get them distributed.
  • Just in time for Doom 3!

    But will it run Longhorn?

  • by Anonymous Coward on Sunday July 18, 2004 @05:32PM (#9733723)
    Important to remember that the Grid is a _kind_ of distributed computing. But the main thing about The Grid (like The Internet, The Grid is basically TeraGrid in the US + the European Data Grid) is that it is suitable for handing off parallel jobs with high intercommunication needs (i.e. MPI jobs). Not necessarily because these jobs can run across different nodes of the grid (though they can with MPI/Nexus or whatever it's called), but because each "node" in the Grid network is a HUGE MOFO LINUX CLUSTER or similar. The grid gives lots of physicists access to computing resources for parallel processing jobs that would otherwise be sitting idle.

    What /.ers generally mean by distributed computing is a bit different - most apps there are "embarrassingly parallel" ones you can just farm out. They don't need to chatter to each other, just process some data and send it back to Central.
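
    For what it's worth, here is a tiny sketch of that "farm it out, no chatter" pattern (my own illustration, using Python's multiprocessing locally as a stand-in for shipping work units to volunteer machines; all names are made up):

    from multiprocessing import Pool

    def process_chunk(chunk):
        """Stand-in for an embarrassingly parallel work unit: no chunk needs
        to talk to any other; it just returns a result to 'Central'."""
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        # Farm out independent chunks and gather the answers. Nothing here would
        # benefit from MPI-style chatter between workers, which is what separates
        # this style of distributed computing from tightly coupled Grid/MPI jobs.
        chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
        with Pool() as pool:
            results = pool.map(process_chunk, chunks)
        print(sum(results))
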
    • by Anonymous Coward
      There isn't a single Grid. Grid is a concept, a way of working, not an actual physical infrastructure. In fact Grids can be ephemeral and dynamic, based on the related concept of Virtual Organisations (VOs).

      There are various collections of machines which have been designed to facilitate Grid computing (instances of Grids), TeraGrid being one of them. Some systems or Grids are suitable for some types of jobs, some for others. As you rightly note, for the likes of MPI you need relatively closely coupled nodes.
      • by Anonymous Coward
        As other people have said, whether there is "The Grid" or Grids is like "The Internet" vs. multiple IP-protocol networks, including the private ones.

        However, for practical purposes there is one "The Grid" which will probably evolve into The Grid without the quotes, and that is the worldwide LHC Computing Grid, currently spread across North America, Europe and the northwest Pacific Rim.

        Through EGEE (in Europe) and Open Science Grid (in the US) LCG technology will spread out into the wider scientific and research commun
  • by BoneThugND ( 766436 ) on Sunday July 18, 2004 @05:33PM (#9733731) Homepage Journal

    Google's distributed OS has been discussed a lot on Slashdot, but it is more than just a search algorithm on their own servers:

    Google Compute is a feature of the Google Toolbar that enables your computer to help solve challenging scientific problems when it would otherwise be idle. When you enable Google Compute, your computer will download a small piece of a large research project and perform calculations on it that will then be included with the calculations performed by thousands of other computers doing the same thing. This process is known as distributed computing.

    The first beneficiary of this effort is Folding@home, a non-profit academic research project at Stanford University that is trying to understand the structure of proteins so they can develop better treatments for a number of illnesses. In the future Google Compute may allow you to also donate your computing time to other carefully selected worthwhile endeavors, including projects to improve Google and its services.

    - The Google Compute Project [google.com]

    • Just tell me, will you allow someone to use your computing power for a really _unknown_ purpose?
      • by zogger ( 617870 )
        " Just tell me, will you allow someone to use your computing power for a really _unknown_ purpose?"

        Just think, for once redmond got it right! Windows has had that feature for years!
    • by billstr78 ( 535271 ) on Sunday July 18, 2004 @05:38PM (#9733771) Homepage
      Hmmm. I think you are confusing the distributed OS additions they've made to Linux for their own clusters with the idle process harvesting of their Google Toolbar.

      The distributed OS and filesystem in their own clusters are far more advanced than a SETI@Home parallel work distribution algorithm. This OS/FS and projects like it are where the grid's heritage lies. There are many problems unique to the grid, but none of it could exist without the distributed-system problems first solved in local area clusters.
  • by StateOfTheUnion ( 762194 ) on Sunday July 18, 2004 @05:38PM (#9733776) Homepage
    Distributed computing has its uses, but remember: the public will only be willing to help you as long as they feel like they're contributing to something worthwhile.

    Uh, I'm not sure what this has to do with the TeraGrid . . . The TeraGrid is a distributed computing system . . . but it does not use the "public's" computers. It uses university and computing center machines across the USA (e.g. NCSA, Argonne National Labs, Purdue, etc.).
    • Ah, but many other "true Grid" projects do. There really is not any distinction between Distributed Computing and the classic definition and implementation of a Grid: un-federated wide-area computers collaborating to work on many different tasks, scheduled to run efficiently and effectively.
      >> Distributed computing has its uses, but remember: the public will only be willing to help you as long as they feel like they're contributing to something worthwhile.

      > Uh, I'm not sure what this has to do with the TeraGrid . . . The TeraGrid is a distributed computing system . . . but it does not use the "public's" computers.

      Oh, the header was "TeraGrid v. Distributed Computing" and the entry ended in a phrase "While the list of supercomputer sites and peak power is growing how is the world o
    • They are public in the sense that they are publicly funded. They are not owned by a private corporation or restricted to classified use.

      • "Public" implies that usage isn't restricted to people who work there
        • Correct. Usage is not restricted to the people who work there. In fact, 99.999% of the work done on the NSF supercomputing resources is done by people who do not work at the supercomputing centers. Researchers from U.S. institutions apply for time on the supercomputers. The centers are a computing resource for the whole country.

        • Well, that's an interesting definition of "public" that doesn't appear to be anyone else's.

          Despite that fact, you are correct... most of the work that's run on those things is done by people that aren't part of the supercomputer centers, or the ANL. (There are a few "chief scientists" that DO run their work there, so I wouldn't say it's the 99.99-whatever% that another poster did).

          It's NOT available for the general public's use though. Even if you work at those places, that doesn't give you ANY certaint
  • by Anonymous Coward
    ...Anonymous Cowards!
  • by bersl2 ( 689221 ) on Sunday July 18, 2004 @05:43PM (#9733807) Journal
    If you can divide your problem into very many independent subproblems, clustering or distributed computing will work well. If not, your best bet is a true supercomputer.

    So: SETI@Home splits up its scans into sections, none of which depends on any other; therefore, a distributed solution is efficient. However, the Earth Simulator deals with chaotic systems (or so I would assume), which do not independently parallelize; this is where having hundreds of processors and terabytes of RAM and using something like NUMA is greatly more efficient.

    In short: use the right tool for the job.
    • by billstr78 ( 535271 ) on Sunday July 18, 2004 @05:47PM (#9733825) Homepage
      As noted in earlier comments, the TeraGrid's individual nodes _are_ NUMA clusters. This allows large, non-parallel computations to be run without individual service level agreements, login coordination and scheduling issues gumming up the process. The TeraGrid is an effort to remove the administrative nightmares keeping most clusters from being fully utilized and most small-time scientists' work from being completed.
    • However, the Earth Simulator deals with chaotic systems (or so I would assume), which do not independently parallelize; this is where having hundreds of processors and terabytes of RAM and using something like NUMA is greatly more efficient.

      AFAU Earth Simulator solves mostly nothing more than big Finite Element Method problems. Speed-up of such problems depends much on the connection time, as normally the FEM solvers exchange borders every several iterations or so, while the amount of data is not so much

  • Access and Denial (Score:4, Insightful)

    by nevyan ( 87070 ) on Sunday July 18, 2004 @05:46PM (#9733819)
    The problem with large projects like TeraGrid, EarthSimulator and other supercomputer sites is that the underfunded _brilliant_ ideas are left behind by those who can afford to pay for or build these centers and sites.

    While TeraGrid is a powerful tool, it is one that thousands of scientists and laboratories are standing in line to use. Meanwhile Distributed Computing is available, cheap and relatively quick.

    While it may look good on your project to say you used an IBM BlueGene or DeepComp 6800, is it really worth the extra cost and the wait in line for your chance to use it?

    True Distributed Computing is the way to go and shows positive results. Now we just need to tinker with it some more!
    • There are many other Grid projects that don't have the long waiting line of TeraGrid. Much of what is built with the Globus toolkit allows anybody to contribute CPU power and just about anyone to run jobs. It's less powerful and is more useful for highly parallel computations, but it is more true to the definition of what a Grid really is.
    • Re:Access and Denial (Score:5, Interesting)

      by Seanasy ( 21730 ) on Sunday July 18, 2004 @06:43PM (#9734157)
      The problem with large projects like TeraGrid, EarthSimulator and other supercomputer sites is that the underfunded _brilliant_ ideas are left behind by those who can afford to pay for or build these centers and sites.

      What are you talking about? These are publicly funded resources. You apply to the NSF for time on these machines. If you're at a U.S. institution and you have a real need for supercomputing you can get time on these machines.

      While TeraGrid is a powerful tool, it is one that thousands of scientists and laboratories are standing in line to use. Meanwhile Distributed Computing is available, cheap and relatively quick.

      And Distributed Computing can't even begin to solve some of the problems that supercomputers are designed to address.

      While it may look good on your project to say you used an IBM BlueGene or DeepComp 6800, is it really worth the extra cost and the wait in line for your chance to use it?

      Yes. When you want to simulate every molecule of a protein in a water solution (~17,000 atoms' worth) you need a supercomputer. DC can't do it.

      True Distributed Computing is the way to go and shows positive results. Now we just need to tinker with it some more!

      DC is neither a religion nor a panacea.

      • by aminorex ( 141494 )
        > When you want to simulate every molecule of a protein in a water solution (~17,000 atoms' worth) you need a supercomputer.

        But if you want to simulate a billion molecules, DC is the way to go: then it's not a tightly coupled system.
        • True, if you're not simulating how those molecules interact. Don't get me wrong, DC is great for a lot of things. My point is that for certain problems DC just isn't up to the task.

            • Maybe I'm confused about just what issue is at hand, but I thought the TeraGrid was a distributed computing machine itself, located at nine facilities and growing. In fact, the grid part pretty much implies a distributed and omnipresent system. Are you for DC, for TeraGrid, or against both?
              TeraGrid is distributed across several sites; however, most jobs will run at only one site and consume 'x' CPUs. The job scheduler running at the head of the cluster will simply schedule each job to run on the best resource available.

              Most applications will face a performance plateau at

              Most scientific applications are network intensive, and 2 CPUs with a local, high-bandwidth/low-latency connection (GigE, IB, Quadrics) could very well outperform 100 CPUs distributed across the internet.

              Only for very, ver
            • The TeraGrid is distributed computing, but not like what the poster is calling DC. The poster is talking about things like SETI and encryption cracking. They are massively parallel -- i.e. problems that can be broken up into discrete units and worked on by unconnected machines.

              Anyway, I'm for both and against misinformation about what both are. The poster confuses the issue terribly.

      • The problem with large projects like TeraGrid, EarthSimulator and other supercomputer sites is that the underfunded _brilliant_ ideas are left behind by those who can afford to pay for or build these centers and sites.

        What are you talking about? These are publicly funded resources. You apply to the NSF for time on these machines. If you're at a U.S. institution and you have a real need for supercomputing you can get time on these machines.

        Suppose you're not at a U.S. institution and/or have such

        • There is no conspiracy. Supercomputers solve problems that DC cannot begin to address. Your example DC simulation wouldn't work because of the time it would take to communicate all that information. The latency issue alone would make it unusable.

          Do you really think only prestigious scientists get to use supercomputers? That the little guy is being intentionally kept down by the supercomputing man? The idea is ludicrous. If someone has a truly 'brilliant idea' they will get on the supercomputer.

          There are

        • By your logic, I'm famous.

          But I'm not, so your logic is wrong.

          I use supercomputers all the time.
    • I call BS (Score:5, Informative)

      by Prof. Pi ( 199260 ) on Sunday July 18, 2004 @09:34PM (#9734879)
      True Distributed Computing is the way to go and shows positive results. Now we just need to tinker with it some more!

      It's too bad that whoever modded this Insightful doesn't know much about parallel applications.

      DC is fine and very cost-effective for its niche of applications, which is those that are "embarrassingly parallel." This is (somewhat circularly) defined as being very easy to parallelize on a DC machine. What characterizes these apps is very low communication between different tasks, which works for DC because the high network latency doesn't get in the way.

      I'd love to see you try to put Conjugate Gradient (CG) on a distributed system. It involves large matrix-vector multiplies that inherently require lots of vector fragments passing between the processors. CG is one of the 8 NAS Parallel Benchmarks, and if you look at Beowulf papers that use NAS, you'll see that they often leave out CG because performance is so bad. If performance is low on a Beowulf, where the network is presumed to be local and dedicated, it will totally suck on anything with a typical high-latency/low-bandwidth network.
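
      To make that communication pattern concrete, here is a plain textbook CG in Python/NumPy (my own sketch, not the NAS benchmark code); the comments mark the steps that would need inter-node traffic on every iteration if the matrix and vectors were partitioned across a distributed machine:

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          """Textbook CG for a symmetric positive-definite A."""
          x = np.zeros_like(b)
          r = b - A @ x          # matrix-vector product: nodes must exchange vector fragments
          p = r.copy()
          rs_old = r @ r         # global dot product: an all-reduce across nodes
          for _ in range(max_iter):
              Ap = A @ p                    # another communication-heavy mat-vec, every iteration
              alpha = rs_old / (p @ Ap)     # p @ Ap is again a global reduction
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r                # and another all-reduce
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          M = rng.standard_normal((200, 200))
          A = M @ M.T + 200 * np.eye(200)   # symmetric positive-definite test matrix
          b = rng.standard_normal(200)
          x = conjugate_gradient(A, b)
          print(np.allclose(A @ x, b, atol=1e-6))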

    • The problem you describe is not a problem of large projects - that is a problem of small scientists.
      They should go elsewhere. If they're really scientists they probably have a full-time job at an institution with some kind of cluster, or are affiliated with an institution that can provide access to resources commensurate with the scientist's ability.

      If there's a guy with no previous record who claims he can create AI if they let him use TeraGrid exclusively for 6 months, should he be given access to TeraGrid
  • Why the versus? (Score:5, Insightful)

    by Anonymous Coward on Sunday July 18, 2004 @05:54PM (#9733864)
    I don't understand why we are asking how a hammer is doing compared to a screwdriver. Both are different computational models, and the titles are at best architectural descriptions: TeraGrid v. Distributed Computing. They have specific application domains and are used to solve different types of problems: one deals with non-discrete data and experimental calculations (TeraGrid); the other focuses on discrete chunks of data being filtered or rendered, neither time nor message dependent (Distributed Computing, as defined by Nevyan's reference). You have two tools in your tool chest. What makes one better than the other? They tackle completely different jobs. They both will be successful. They need not be in competition.
    • by Anonymous Coward
      I don't understand why we are asking how a hammer is doing compared to a screwdriver

      We're not, we're asking how TeraGrid is doing compared to Distributed Computing. Read the Fine Title!

      What? Oh? Nevermind...

  • by rumblin'rabbit ( 711865 ) on Sunday July 18, 2004 @05:58PM (#9733884) Journal
    4.5 teraflops for $100 million? Surely not. That much compute power can be had for 1/20th the price. What am I missing?
  • Mac OS X users! (Score:2, Informative)

    by arc.light ( 125142 )
    Charles Parnot of Stanford University is looking for your spare CPU cycles for his distributed XGrid@Stanford [stanford.edu] project.
  • Misinformed (Score:4, Informative)

    by Seanasy ( 21730 ) on Sunday July 18, 2004 @06:17PM (#9734000)
    ...and many scientists with little funding but bright ideas are being left behind.

    Care to cite a source?

    When you apply to the PACI program you get a grant of Service Units -- i.e. time on the computers. You don't need huge amounts of funding. The requirements [paci.org] state that you need to be a researcher at a U.S. institution. It also helps if you can show that you actually need and can use that kind of computing power.

    And, please, distributed computing and supercomputing are not synonymous in terms of what problems they address. Distributed computing cannot replace supercomputers in every case. DC is good for a limited set of problems.

    Lastly, an example of Teragrid research: Ketchup on the Grid with Joysticks [psc.edu].

  • Wolfgrid (Score:5, Informative)

    by admiralfrijole ( 712311 ) on Sunday July 18, 2004 @06:34PM (#9734101) Homepage
    Wolfgrid [ncsu.edu], the NCSU Community Supercomputer, is coming along nicely.

    It is based on Apple's XGrid, and uses volunteers from the Mac community here at NCSU, as well as some of the lab macs, and soon we will hopefully have official Linux and Windows clients, maybe even Solaris, to run on more of the computers around campus.

    There is even a really nice web interface [ncsu.edu] that shows the active nodes and their status, as well as the aggregate power of the two clusters.

    It's really nice; anyone who is part of the grid can just fire up the controller and submit a job. I am part of the lower-power grid since my TiBook is only a 667, but I was able to connect up and do the Mandelbrot Set thing that comes with XGrid at a level equal to around 7 or 8 GHz.

    There are some screenshots here [ncsu.edu]

  • I think the Seti distributed thing is great and all, but I'd like to donate my free cycles to a project with more immediate (and likely) benefits to mankind. Any suggestions?

    -Brien
    • by antispam_ben ( 591349 ) on Sunday July 18, 2004 @08:55PM (#9734696) Journal
      There's this one, it's probably what you want:

      http://www.stanford.edu/group/pandegroup/folding/ [stanford.edu]

      But I'm quite selfish (and actually interested in primes, and at least know more about them than I do about proteins), and there are entities offering big prize money for big primes, and if one of my machines finds one, I'll get big bucks:

      http://mersenne.org [mersenne.org]
      • by the gnat ( 153162 ) on Monday July 19, 2004 @03:29AM (#9736248)
        Actually, as an experimental and theoretical biophysicist-in-training who knows about proteins, I'd say the folding project is only marginally more useful than the prime number search. Most biology research projects, especially computational ones, have to be sold on the basis of potential benefits to human medicine. Such advertising does not actually mean that medical benefits exist.

        While there's much to learn from studies of protein folding, there's very little medical importance to purely theoretical simulations. Since the delusion that we'll be able to replace laboratory research with really big computers is attractive to people who know nothing about biology, the impact of this type of research gets vastly overstated.

        On the other hand, Folding@Home has already yielded far more interesting results (if not exactly "useful" outside of the world of biophysics) than SETI@Home probably ever will, so go for it.
        • On the other hand, Folding@Home has already yielded far more interesting results (if not exactly "useful" outside of the world of biophysics) than SETI@Home probably ever will, so go for it.
          IMHO seti@home's benefits lie in the spin-offs: advancements in distributed computing and digital signal processing. I doubt they would find an alien email anytime soon, but they are very likely to come up with better ways to analyse signals. (read: better cell phones and digital cameras)
        • On the other hand, Folding@Home has already yielded far more interesting results (if not exactly "useful" outside of the world of biophysics) than SETI@Home probably ever will, so go for it.

          The key word there is "probably." One positive, one single positive, from SETI will be arguably the greatest discovery in human history (certainly in the top 5).

    • Yes, I like the folding@home project. But you may find something interesting from the projects listed here: http://www.aspenleaf.com/distributed/distrib-projects.html

      yes I know my sig is coming up
    • Since someone has already posted the Aspenleaf list of projects, I'd like to point out my personal favorite, Find-a-Drug [find-a-drug.com]. It has actually returned positive anti-cancer and anti-AIDS results that have been lab tested and verified by the National Institute of Health. If that doesn't have immediate benefit to mankind written all over it, I don't know what does.
  • by hal2814 ( 725639 ) on Monday July 19, 2004 @08:23AM (#9737205)
    "The "Grid" portion of the TeraGrid reflects the idea of harnessing and using distributed computers, data storage systems, networks, and other resources as if they were a single massive system." (from the TeraGrid FAQ)

    It looks like TeraGrid is latching onto a catchword in order to boost awareness of their system. What they are describing here is not Grid computing at all. Grid computing was designed to take advantage of all the dead cycles that computers typically have. The idea is that someone might have a large group of computers that do not take full advantage of their computational cycles (like a large lab for reading e-mails and browsing the Internet). With Grid computing you would take these computers (not some Itanium cluster like TeraGrid is doing) and distribute work across these nodes that can be performed during otherwise dead cycles. (I have no sources immediately available, but check out Grid computing through the ACM or something and you'll see plenty of info on what Grid computing really is.)

    This is what Seti@home does. It takes underutilized machines and runs computations on them. TeraGrid on the other hand, takes large clusters of otherwise unused machines and lays an abstraction over them that makes them look like one large supercomputer. This is nothing more than a distribution strategy. It looks like a nice distribution system that has the potential to scale well, but it's not Grid computing and it's nothing new.
    • by Anonymous Coward
      ""The "Grid" portion of the TeraGrid reflects the idea of harnessing and using distributed computers, data storage systems, networks, and other resources as if they were a single massive system." (from the TeraGrid FAQ)

      It looks like TeraGrid is latching onto a catchword in order to boost awareness of their system. What they are describing here is not Grid computing at all."


      No, they are right and you are wrong.

      Using spare cycles is one thing you can do with Grid technology, but it is not the essential qual
    • Whoa... you're so off-base on this it's not funny.

      The TeraGrid people (ANL, Ian Foster, etc) are the ones that coined the term "Grid" in the first place!

      You might not like their use of that term, but since they're the ones that came up with it in the first place, they're more right than you are.
  • by haruchai ( 17472 ) on Monday July 19, 2004 @09:08AM (#9737485)
    It's an all *nix environment presently totalling around 4,200 CPUs, of which 96 (in a single cluster) are on AIX 5.2, 3,128 (WOW!!) are on Tru64 (in 2 clusters), and the rest, distributed in 5 clusters, are some form of Linux.
    Two of the clusters have a second phase which together will add 316 CPUs on Linux.

    As of October 1 of this year, 5 clusters at 3 sites will be added, with the OS/CPU breakdown as follows:
    Linux: 1,800 CPUs in 3 clusters
    AIX 5.1: 320 in 1 cluster
    Solaris 9: 256 in 1 cluster

    That's an awful lot of Unix and a buttload of Tru64 and Linux.
