Visions Of The Future Of Grid Computing

CaptianGrid writes "Computing grids, or software engines that pool together and manage resources from isolated systems to form a new type of low-cost supercomputer, have finally come of age. BetaNews sat down with some of the world's leading grid gurus to discuss the significance of such distributed technologies and separate grid hype from grid reality."
  • hm (Score:2, Funny)

    by Anonymous Coward
    If I had a grid computer, maybe I would've been able to get first post.
    • Re:hm (Score:2, Funny)

      by Koiu Lpoi ( 632570 )
      Of course, a Beowulf cluster of grid computers would be much better, wouldn't you agree?
      • But wait, would a Beowulf cluster of grid computers still be a grid computer? And in that case, would you not make a Beowulf cluster of those? Mmmm... I need some Advil...
    • Re:hm (Score:2, Offtopic)

      by Eric Giguere ( 42863 )
      If only grid computers would make intelligent first posts possible -- then they'd really be worthwhile!
  • by cakefool ( 801210 ) on Monday February 21, 2005 @03:46PM (#11738841) Journal
    Then more recently we have seen Univa being created, which I am involved in as founder and advisor

    Univa
    Univac - a predecessor of Multivac, the largest computer in Asimov's world.

    Nerds - they get everywhere.
  • Only way (Score:2, Interesting)

    by Janitha ( 817744 )
    I guess now that the power of a single CPU (GHz and instructions per clock) is leveling off, this seems like the only way to increase computing power: hook lots of them together. Hopefully we will be able to find some answers for SETI or cure some nice cancer for the Folding projects. It would be nice if the commercial grids also helped out those projects by giving them their spare cycles. GRIDS CRUSH SINGLE CPU.
    • Not the only way. (Score:5, Interesting)

      by Eunuch ( 844280 ) * on Monday February 21, 2005 @03:53PM (#11738883)
      There will never be a substitute for a single box with a lot of CPUs in it. For tightly coupled datasets, the latency of a grid will be a limitation.
      • Re:Not the only way. (Score:4, Informative)

        by Rei ( 128717 ) on Monday February 21, 2005 @04:07PM (#11739006) Homepage
        True; however, when you think about the big picture, the vast majority of real-world situations have at least *one* parallelizable stage in them, and with even one, a grid usually makes sense.

        For example, let's say you're trying to determine the best FOO, and running a FOO is a highly serial process. Even though you can't split up running each FOO, you can pass the processing of each FOO test case that you want to run to a different machine.

        True - sometimes, you need the results of your previous run in order to plan your next run, and sometimes your *only* need is a single run of a very simple algorithm for which there can be absolutely no parallelization. However, more often than not, even algorithms that need tightly coupled data have at least *one* stage which can benefit from parallelization - and you really need only one to get the benefits.
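
        A minimal sketch of that pattern in Python (the FOO evaluation and the parameters are made-up stand-ins; on a real grid, the worker pool would be remote machines rather than local processes):

            from multiprocessing import Pool

            def run_foo(test_case):
                # A highly serial evaluation of one FOO candidate; nothing
                # inside this function is parallelized.
                score = sum(x * x for x in test_case)  # placeholder workload
                return (test_case, score)

            if __name__ == "__main__":
                # The independent test cases are the *one* parallelism here:
                # each whole case goes to a different worker.
                test_cases = [(a, b) for a in range(100) for b in range(100)]
                with Pool() as pool:
                    results = pool.map(run_foo, test_cases)
                print("best FOO:", max(results, key=lambda r: r[1]))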
        • Re:Not the only way. (Score:3, Interesting)

          by ergo98 ( 9391 )
          Don't forget the overhead of a grid infrastructure - packaging the task up, transmitting it to another machine through a glacially slow network, unpackaging the task, performing the work, packaging the results, transmitting them back, unpackaging the results, and coordinating the results in some sort of orchestrator.

          Unless your task is significantly computationally demanding, this overhead can significantly outweigh just doing the task directly, regardless of how parallel the task can be.
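
          To put rough numbers on that (a sketch; every figure below is an illustrative assumption, not a measurement):

              # Farming a task out only pays when compute time dwarfs overhead.
              payload_mb = 10        # task input + results, in megabytes
              bandwidth_mbps = 100   # network throughput, megabits/second
              packaging_s = 0.5      # serialize + deserialize on both ends
              transfer_s = payload_mb * 8 / bandwidth_mbps   # ~0.8 s each way
              overhead_s = packaging_s + 2 * transfer_s      # ~2.1 s round trip
              compute_s = 1.0        # the task itself
              # Local: 1.0 s. Remote: ~3.1 s. The grid loses badly here; only
              # when compute_s >> overhead_s does shipping the task win.
              print("ship it" if compute_s > 10 * overhead_s else "run it locally")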
          • You're adding processing latency to storage latency. But you won't find maximum theoretical latency quoted in their grid rentals.
          • by Rei ( 128717 )
            packaging the task up, transmitting it to another machine through a glacially slow network, unpackaging the task, performing the work ...

            This would help [slashdot.org]. :)

            Unless your task is significantly computationally demanding

            Isn't that what we're talking about here?

            Of course, one can't forget the main benefit of grid networking: it's *cheap*. You get a *lot* more CPU horsepower for your dollar, so you want to use it on "computationally demanding" tasks if you can.
          • Re:Not the only way. (Score:3, Informative)

            by nr ( 27070 )
            Yes, but you know that there are tightly coupled clusters with hundreds or thousands of nodes available that look and act like a single resource on the grid. There are also vector supercomputers like Cray's and NEC's available if one needs the capabilities these provide.

            Here is a link to a cool Java applet that shows all jobs running on the European research grid:

            LCG2 Real Time Grid Monitor [ic.ac.uk]
        • Yes, but consider that the problems they want to solve require the largest computers in existence... For example, it wasn't until ASCI Red (the ~10,000-processor Pentium Pro machine) that there was enough compute power to come *close* to simulating an atomic bomb blast. So you'd have to replicate machines of that order all over the place, each for a particular instance of simulation.
          • ASCI Red is a decade old :) According to one page I ran into, it had 50us latency communicating between processors. For comparison, you could build a small Linux cluster for each task (heck, one for the entire project) using InfiniPath and get 1.5us latency - i.e., making them far *better* at serial tasks than ASCI Red. Heck, even using bloody TCP/IP on your average modern network will give you 60-95us (Myrinet = 8-12, InfiniBand = 5-8). Message passing has really improved in the past decade :) ...
            • Hmm... I see how AI research (ANNs specifically) may benefit from access to grids... At the moment a computer can process information MUCH faster (in 2001, switching times were on the order of 10^-9 seconds, i.e. nanoseconds) than a human brain (about 10^-3 seconds per neuron, i.e. milliseconds - about a million times slower); however, a human brain can find many things much faster than a computer... because it's parallel (about 10^11 neurons acting as simple processors).
  • Imagine!
  • X-Grid (Score:5, Informative)

    by qwertphobia ( 825473 ) on Monday February 21, 2005 @03:50PM (#11738867)
    Check out Apple's Xgrid technology [apple.com]!

    It runs on any OSX system, 10.2.8 and up. Put your spare cycles to work.

    Xgrid: High Performance Computing for the Rest of Us
    • by Rhys ( 96510 ) on Monday February 21, 2005 @04:12PM (#11739048)
      If you've got a problem that's trivially parallelizable, then sure grid computing is great! RC5, seti@home, and similar projects can benefit from grid computing (really, that's what grid computing is -- someone else's code able to run on your machine when it's idle and do work).

      However, don't even begin to think you'll be solving anything that requires any sort of processor to processor communication. Rocket simulation (our local favorite example here at UIUC) for instance is heavily communication based.

      The Linpack benchmark that the Top500 uses also needs a low-latency interconnect to perform really well, so don't expect to see "the grid" sitting at the #1 supercomputer slot on top500.org anytime soon (or really, ever, unless someone develops FTL networking). Latency on the Internet in general (and specifically around the world, through all those switches and latest_slashdot_hot_chick_movie.torrent packets) is nowhere near what a supercomputer needs.

      Now, there are research groups looking at ways of making communication delays less of a problem, including the one I was in while I was in grad school. There are a number of ways to do it, but none that I've seen are going to take on worldwide network latency and survive with their performance intact.

      Even something as "simple" as chess wants to have a fast interconnect - every node that's gotten stranded working on low-priority (bad move) work is a wasted node you may as well not have.
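
      The "FTL networking" quip above is easy to make concrete (a sketch with round numbers):

          # Physics alone rules out a low-latency worldwide interconnect.
          c_fiber = 2.0e8        # light in fiber, m/s (roughly 2/3 of c)
          half_globe_m = 2.0e7   # ~20,000 km path to the far side of the world
          rtt_ms = 2 * half_globe_m / c_fiber * 1e3
          print(rtt_ms, "ms round trip, best case")   # ~200 ms
          # Cluster interconnects are measured in microseconds - four to
          # five orders of magnitude faster than the best-case Internet.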
      • by magefile ( 776388 ) on Monday February 21, 2005 @04:26PM (#11739162)
        Maybe it's less efficient (as in your chess example), but the amount of compute available may be able to compensate for it. If your grid is only 60% as efficient as your supercomputer, but you have three times the power (numbers pulled out of the air), then the grid is still beneficial.
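
        Spelling that arithmetic out (same numbers pulled out of the air):

            efficiency = 0.6   # grid runs at 60% of the supercomputer's efficiency
            power = 3.0        # but offers 3x the raw compute
            print(efficiency * power)   # 1.8 -> still an 80% net win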
      • by Anonymous Coward
        hm. The European DataGrid is a grid OF clusters and shared-memory supercomputers (as is the American "TeraGrid"). You don't necessarily use it for trivially parallelisable jobs - you use the grid infrastructure to farm out runs of potentially tightly coupled codes to different computers on the grid, not to run the jobs across the grid (even with a perfect continent-wide grid, lightspeed lag would kill your latency!). So on Cluster A, because there's a time slot at 17:00, I'm running my job with input parame...
      • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Monday February 21, 2005 @06:43PM (#11740186) Homepage Journal
        Personally, the way I'd design a high-performance computer is to set up a grid of clusters. Keep things local to a cluster that involve high I/O throughput, and farm out to another cluster anything where the I/O is less important.


        The reason for such an arrangement is that high-speed interconnects are expensive. Building a single cluster that is uniformly very high performance would be horrible for anyone other than a very rich organization to consider.


        On the other hand, grids alone are way too slow to handle the needs of time-critical communication, which is what you have a lot of the time in parallel computing.


        A hybrid, able to place components of a problem according to that component's needs, would seem to be the logical solution. It is also the scalable solution. Clusters often have an upper limit in size. By having grids of clusters, you have a virtually infinite capacity. True, there simply aren't any clusters that have reached the upper limit. Yet. But it's getting tough at the size they are at right now.
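
        A toy sketch of that placement policy in Python (the threshold and the cluster/component names are invented for illustration):

            # Chatty components stay on the fast local cluster; loosely
            # coupled ones go wherever the grid has spare capacity.
            def place(components, clusters):
                local = clusters[0]   # the high-speed-interconnect cluster
                placement = {}
                for comp in components:
                    if comp["msgs_per_sec"] > 1000:   # high I/O: keep it local
                        target = local
                    else:                             # latency-tolerant: farm out
                        target = min(clusters, key=lambda c: c["load"])
                    target["load"] += 1
                    placement[comp["name"]] = target["name"]
                return placement

            clusters = [{"name": "local", "load": 0}, {"name": "remote-a", "load": 0}]
            components = [{"name": "solver", "msgs_per_sec": 50000},
                          {"name": "postproc", "msgs_per_sec": 2}]
            print(place(components, clusters))  # solver -> local, postproc -> remote-a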

      • Trivially parallelisable codes will, by their very nature, be the easiest to parallelise on an arbitrary computing infrastructure. However, even then Grid (which is currently an aspiration and a collection of technologies rather than a single technology) can offer benefits with regard to resource discovery and the submission of work to a remote resource.

        Grid is not just for parallelisable codes, but also for the ability to find a resource to run some code on, collecting and virtualising resources, and creating...
      • I think you haven't really understood what grid computing is. What you are describing (someone else's code able to run on your machine when it's idle) is not grid computing; it is called cycle-scavenging or, when it uses donated computer power, Public Resource Computing.

        Grid computing is supposed to be useful for exactly the type of problems you say it is useless for - at least the two grids I have worked with/on (NorduGrid and LCG) are.

        The idea is interconnecting supercomputers, so that your specific program...
    • Re:X-Grid (Score:5, Informative)

      by bpbond ( 246836 ) on Monday February 21, 2005 @04:12PM (#11739050) Homepage
      Our research group (at UW-Madison) has been experimenting extensively with Xgrid, and right now (with the generally available version, the beta TP2) our conclusion is: not ready for prime time. Lots of promise, pairing Apple interface simplicity with powerful (and open source) underpinnings. But in its current form, there are too many bugs. You have to implement a lot of your own code and do a bunch of workarounds to really get a big, workable grid going.
      • Re:X-Grid (Score:4, Insightful)

        by Red Leader. ( 12916 ) on Monday February 21, 2005 @04:36PM (#11739260) Homepage
        What on earth are you bothering with X-grid for?

        You have the in-house-developed Condor [wisc.edu], which is amazing!
        • Re:X-Grid (Score:5, Informative)

          by bpbond ( 246836 ) on Monday February 21, 2005 @04:58PM (#11739430) Homepage
          It's a good question. We bothered because

          (i) Curiosity. Xgrid was new, and looked interesting.
          (ii) Potential. Xgrid (final version) is going to be bundled with the upcoming 10.4 release of the Mac OS. That's an awful lot of machines that will have Xgrid preinstalled, with the user basically just needing to click "start."

          B
        • Condor was made many years before grid computing; it is just a bad comparison used by people who don't know how to explain grid computing.
          • In a Grid context, people usually mean Condor-G [wisc.edu].
          • Please expand on this. Or, if anyone else can shed some light on why openglx said that Condor isn't "real grid technology," I'd be greatly appreciative. I've used Condor for a couple years and been very happy with it. How is Condor not one implementation of grid computing? Is it because it is not an API?

            I understand that Globus is the big API that everyone uses to do grid computing, but you can't just use it to run jobs on a grid - you'd have to write an application to do that. Why the bigotry? I'm s...
          • Grid is a form of distributed computing. Currently, what Grid actually means (Foster's book notwithstanding) isn't entirely clear. To me, Grid is an aspiration, and there are a host of technologies that can be used to create a Grid or Grids. Condor is one useful way to virtualise a set of machines based on a CPU-scavenging model (others include Entropia, Inferno, etc). Grids also do not need to be based on Globus (e.g. Inferno, Unicore), although standards make life a lot easier. The standards are still in a proc...
  • by PornMaster ( 749461 ) on Monday February 21, 2005 @03:50PM (#11738872) Homepage
    The article mentions the commoditization of grid computing through adherence to a set of standards, but past a certain point it makes little sense for IBM or Sun to make their tools interoperable... that would diminish the consulting value-add they offer on top of their grid resources.

    I think that for full standards compliance, you'll need to look to companies which don't offer their own computing resources -- platform-agnostic companies. But then who do you buy the compute resources from? Unless you're buying your own systems for use (which makes "utility computing" less viable), it's a bit of a catch-22.

    • The advantages of IBM and Sun are that they can just pull racks of servers out of their factories at cost and pay their own people to set them up. This will always give them a price advantage over the platform-agnostic competition who would be using the same Opteron, Xeon, SPARC, or POWER CPUs, anyway.
    • Failing to make the tools interoperable damages the market for Grid as a whole and does neither Sun nor IBM any good, given that most companies have mixed hardware. Standardisation has taken place over a whole series of networking layers: first TCP/IP, then services on top such as http, and Grid services will follow.
  • Plan 9 & Inferno (Score:5, Informative)

    by HyperChicken ( 794660 ) on Monday February 21, 2005 @03:52PM (#11738877)
    If you want examples of operating systems that help with gridding, check out Plan 9 from Bell Labs [bell-labs.com] and its sister project Inferno [vitanuova.com]. The nice thing about Inferno is that it runs on Linux, Windows, Mac OS X, Plan 9, and on native hardware.
  • by Anonymous Coward on Monday February 21, 2005 @03:53PM (#11738888)
    Computing grids, or software engines that pool together and manage resources?

    Pure Bolshevism, that's what!
  • When all the switching and installation are done from cheap-labor countries, a lot more techies will be out of work.
  • by Galuvian ( 755742 ) on Monday February 21, 2005 @03:59PM (#11738940)
    From TFA:
    What we provide is primarily an implementation of Web services standards to allow people to build services, and the primary goal is also for us to provide a set of pre-defined services that allow you to use Web services protocols to interact to request the allocation of compute resources, the creation of computational services and moving the data from one place to another and so forth.

    Does this sound like Carly Fiorina attempting to explain HP's strategy to anyone else?

  • Grid: loaded word (Score:5, Informative)

    by selectspec ( 74651 ) on Monday February 21, 2005 @03:59PM (#11738944)
    The new generation of marketeers use Grid, but they are rarely referring to what computer science engineers call grid clustering. I think the marketeers talk about Grid when they really mean virtual operating systems running on abstracted hardware platforms: either a mainframe or another kick-ass multi-way system that has been virtually partitioned, or something like VMware piecing together several x86-style servers.

    Frankly, I don't like the word Grid being applied in this way. However, the latter technology (the virtual OS) is fascinating and will come to dominate computing in the next few years.

    The basic idea is total abstraction of the application/service from hardware/location. The app gets the resources it needs, can be cloned/replicated to another location for disaster tolerance, and can scale and grow on demand simply by throwing more hardware modules at it. It's not limited to computing either; it applies to storage and networking as well.
    • Re:Grid: loaded word (Score:1, Interesting)

      by Anonymous Coward
      Because I'm feeling contrarian today, I'll call you on your prediction. While virtualization technology might seem new and hip to some, in computer terms it is an ancient technology - older, in fact, than the operating system itself. Early computers were developed using virtualization of hardware, and IBM ran all of their systems on top of a firmware which virtualized the environment that the operating system runs on. Higher up in the system, one of IBM's first modern operating systems was VM, which was...
      • I completely agree. Nothing is really new. It was all invented at DEC eons ago...

        Virtualization alone is just another layer to manage. There's no point in hiding a technology if you are replacing it with an equally complex and less efficient one.

        You are correct that virtualization adds expensive overhead, and it can be complex and inflexible.

        I believe the solution resides where the costs of downtime associated with direct associations with hardware outweigh the costs of virtualization. Virtual memory...
      • Nope, wrong. (Score:3, Informative)

        by Bozdune ( 68800 )
        Because I'm feeling contrarian too, I'll call you on your claims. Virtualization can be very cheap and very easy to administer. VM/370 was based on CP/CMS, which was developed using government money, so it was open source. In an early example of why open source is such a good idea, several big timesharing companies took CP/CMS and hacked CMS to get rid of the real I/O instructions (CCWs, or Channel Command Words) inside it. You see, CMS was a real single-user OS, so CMS could run on bare hardware, j...
      • There are two potential layers of virtualisation:

        The first is at the low level - containers in which to run an operating system. This allows the system to be provisioned with whatever OS the user requires, no matter what the hardware layer is. This gives the user more options for where their jobs might run, rather than scouring the world for the one server that has the right OS, is cheap enough, and can have the job done by next Tuesday.

        At the higher level there is the virtualisation of collections...

    • I don't like the word Grid being applied in this way

      Me neither, but for slightly different reasons.

      The main definition of a grid is a pattern of intersecting lines. While Sun or IBM may arrange their computers neatly in rows of vertical racks and build them in a grid pattern physically, nothing of this remains in the actual use or architecture of so-called grid computing. This leaves large swaths of parallel algorithms by the wayside. The only things you can efficiently compute on a grid are the "embarrassingly parallel" ones.

      • The use of the word "grid" here is in the sense of an electric power grid. The idea is that you should get computing power on demand, just like you get electric power on demand.
  • Oh boy... (Score:5, Interesting)

    by Duncan3 ( 10537 ) on Monday February 21, 2005 @04:03PM (#11738972) Homepage
    Look, the bottom line is there is nothing new here, just new sets of buzzwords. You have been able to submit massive compute jobs to IBM or Sun (with their insane $1/CPU-hour), or even at most college campuses (the U of Minnesota had such systems), for the last 35 years. MPI/PVM standardized and commoditized the clustering side of things long ago.

    Globus is now "web services" and not "GRID". GRID is so last century. It's far more cool now that it's in Java too. Anyone still working on GRIDs should search/replace immediately!!!

    And did they drop the name of every single business partner they have in that article, or am I the only one who noticed? ;)
  • Blackout 2003 (Score:5, Insightful)

    by Ced_Ex ( 789138 ) on Monday February 21, 2005 @04:03PM (#11738974)
    Hasn't the blackout taught us to move away from grid-type setups? If people had generated their own power, the blackout would have affected us less. Could this principle not be used for home computing? Rely on yourself and not on others?
    • Couldn't agree more
      Quote TFA:

      Sun Microsystems recently unveiled a new grid computing offering that promises to make purchasing computer time over a network as easy as buying electricity and water.

      That sounds very much to me like another attempt by Sun to warp the world back to the "classical" server/dumb-terminal era.

    • OK, so you want to rely on yourself for generating all your power? So you'd set up a single generator and refill the fuel regularly. How reliable do you think that would be compared to having the power delivered by your electricity company?

      Redundancy is key to obtaining reliability. Because of this, just relying on yourself is not going to improve things but make them worse. I'm not familiar with the actual numbers for power availability in your area of the U.S., but I suspect in the past 10 years you've had...
      • I'm not advocating going all "survivor style" and having to produce everything for yourself, but would you not be less reliant on the "GRID" if you were able to provide for yourself?

        For instance, if you had solar panels, or a small wind generator, you couldn't completely power your house, but you could keep essential things running like your fridge.

        In this situation, unlike with electricity and water, computers can run completely on their own without a grid, as evidenced by current computer setups. So why mov...
  • While RTFA, I couldn't help wondering what the overhead of a Web service-based grid solution might be, and how that overhead would be compounded by the frequent communication among the grid nodes.
  • I have started to look into having some of my cheaper machines grid together to be a nice cluster, though I haven't found a solution to something I thought would be necessary for this kind of environment... Thread Migration.

    Sure, it may be much harder than migrating a whole process, but too often spawning whole processes is simply not the answer to SMP programming.

    Thus far, I have looked at Mosix/OpenMosix and OpenSSI, and both fail here. Can anyone give me some insight, perhaps? Maybe I am missing something...
    • Um, why do you want thread migration?
    • Re:Threads (Score:1, Insightful)

      by Anonymous Coward
      Threads are a step backwards in the evolution of software. Basically you are turning your back on all the protection that the OSes give you for free. A single bad instruction in a thread will bring down the entire process. Because threads are very difficult to migrate across machines in a cluster you will see more software written to use heavier-weight but more grid-friendly ways to communicate like sockets and pipes.
    • You can't migrate single threads, by definition. You have to have a duplicate of the whole address space of the process running the thread (of course you can optimize with copy-on-write, but you can do that when forking processes as well), and since a thread plus an address space makes a new process, it isn't really a thread by any known definition but another process.
    • Moving threads is possible and desirable. Think of a single MySQL server spread over a cluster.

      I did some work on that at Uni years ago. You can move threads from one machine to another transparently. You page in memory over the network on demand, mark "dirty" pages, and send page diffs back. It's neat to see it working (two threads running on different machines), but network latency is a problem.

      Google for distributed shared memory for similar projects.
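
      A toy version of the dirty-page-diff trick described above (pure Python, no networking; the page size is an arbitrary choice):

          PAGE_SIZE = 4096

          def page_diff(old, new):
              # Ship only the bytes that changed, as (offset, value) pairs.
              return [(i, new[i]) for i in range(PAGE_SIZE) if old[i] != new[i]]

          def apply_diff(page, diff):
              buf = bytearray(page)
              for offset, value in diff:
                  buf[offset] = value
              return bytes(buf)

          original = bytes(PAGE_SIZE)    # a clean page, all zeros
          modified = bytearray(original)
          modified[42] = 7               # the remote thread writes one byte
          diff = page_diff(original, bytes(modified))
          print(len(diff), "byte(s) to send instead of", PAGE_SIZE)
          assert apply_diff(original, diff) == bytes(modified)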

  • by Anonymous Coward on Monday February 21, 2005 @04:05PM (#11738987)
    Just do all your computation in whatever hemisphere is in winter. They can use the heat.
  • Grids eh! (Score:3, Interesting)

    by igrigorik ( 818839 ) on Monday February 21, 2005 @04:06PM (#11738999) Homepage
    The only problem with this kind of setup is its limited ability to accomplish anything useful for a consumer or a medium-sized company. While it is of course an interesting field, and one that needs to be researched, technology like proximity computing (Sun) is what will dictate the technology of the future. It's hard enough to get decent multiprocessor scheduling without too much overhead on a single PC; the overhead incurred with grids would be enormous (I guess that's why the primary applications would be file storage, etc.). Proximity computing, on the other hand, is an innovative approach that doesn't try to solve the problem in place, but avoids it altogether.
  • What I want to know is, is there any way to sell my unused cycles on the open market? I love SETI and all, but making a $$$ would be super cool.
  • by argoff ( 142580 ) on Monday February 21, 2005 @04:09PM (#11739028)
    Is a combination of grid and virtualisation.

    Grid in the sense that if my datacenter needs more resources, I just plug in a blank PC with extra CPU/MEM/disk and not worry about it. Or if one goes bad, I just rip it out without worrying about what it will destroy.

    Virtualisation in the sense that if I need an email server, I just create a virtual one on this grid and let it go; if I need a DNS server, I just create one on this grid and let it go; a web server, same thing; an LDAP server, same thing. If a server comes under load, the grid will automatically devote more memory/space/CPU/bandwidth to it as reasonable.

    That is my idea of a true grid.
  • by G4from128k ( 686170 ) on Monday February 21, 2005 @04:11PM (#11739044)
    Grids are great for non-time-critical computation tasks. But what happens when everyone needs cycles now? My guess is that systems will evolve to give cycles to the highest bidder/highest priority. In such an environment, low-priority tasks will become effectively impossible on a grid - there will always be some higher-priority/higher-paying task that usurps the cycles.

    I wonder how long SETI@home will last if home PC users realize they can "sell" cycles to meet for-pay demand for computational power.
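
    A sketch of what that "cycles to the highest bidder" world could look like (the prices and job names are invented):

        import heapq

        queue = []   # max-heap by bid, via negated price

        def submit(bid_dollars, job):
            heapq.heappush(queue, (-bid_dollars, job))

        submit(0.001, "seti@home work unit")   # the charity job
        submit(2.50, "render farm frame")
        submit(5.00, "risk simulation")

        while queue:
            neg_bid, job = heapq.heappop(queue)
            print(f"${-neg_bid:.3f}/CPU-hour -> {job}")
        # The $0.001 job runs last - and on a busy grid, maybe never.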

    • Sun has been experimenting with eBay for quite a while now. It would be pretty neat if they could figure out a way to auction off chunks of their grid on some sort of how-much-and-how-soon basis, like you say. If a movie company or a fluid dynamics contractor needs the whole thing yesterday, they would be willing to pay a premium for not having to build a grid of their own and to get a few thousand CPUs _right_now_.
      • Now that I think of it, this sounds a lot like airline ticket pricing. Cheaper three weeks out, getting more expensive up to the day of the flight, but getting really really cheap just before takeoff (e.g., Priceline.com). The difference is that CPUs don't take off, so the price dip at the end wouldn't happen (Sun could just turn off the servers if they really want to).

  • Am I the only one who saw this and thought that GRiD laptops were coming back? I loved the old GRiD laptop I had; I swear you could drive your car over it and it would still work. I wonder what happened to that company?
    • If you love your laptop so much, why would you drive your car over it?
  • P2P? (Score:2, Interesting)

    by cyocum ( 793488 )

    The guy in TFA talks about P2P being another type of grid, and how a family could create a distributed environment for shared data. He also talked about trust.

    My idea is that by adding strong encryption you basically get a small private network that is almost impossible to crack. DVDs + CDs + encrypted P2P among a small group of people == old-skool sneakernet (aka borrowing your friend's stuff). You and your friends can share all the entertainment among yourselves as you like. All you need is a P2P-type...

  • Spammers and the seedy underbelly of programming have grid computing all figured out. Why is it taking the "legitimate" computing world so long? Look at how successful the spyware/malware networks are at building a "grid" of zombie PCs and harnessing their combined resources for spam, DDoS attacks, etc.

    Somebody please analyze what the malware world is doing, and share it with the grid computing gurus. The technology can't be THAT different, can it?

    • The problems are different.

      The problem of sending 10 billion identical emails is trivially parallelizable across 10 billion PCs: timing isn't important, and only small amounts of data must be transferred to and from a controlling central host...

      DDoS attacks require timing accuracy of a few seconds; better isn't necessary if you have a lot more capacity than needed (which is no problem if you don't pay for it).

      Most legal applications for clusters, grid computing, etc. are much more difficult, since...
  • by NZheretic ( 23872 ) on Monday February 21, 2005 @04:29PM (#11739190) Homepage Journal

    Take a pinch of Standard Linux [linuxbase.org]
    Wrap it up in Xen [cam.ac.uk]
    Add a touch of SELinux [nsa.gov]
    And a little bitty bit of Globus [globus.org]
    Oh like a Sandboxed Platform [blogspot.com]
    Oh Lordy, Lordy, mixed with Free and Open Source Code [freshmeat.net]
    You know you lump it all together
    And you got a recipe for a Multi Vendor Development scene [google.com]
    It is coming though, you know, you know.

    What we have is a great big melting pot
    Big enough enough enough to take every vendor and all IT's got
    And keep it stirring for a hundred years or more
    And turn out Application Service [google.com] and Content Providers [ostg.com] by the score.

    With apologies to Blue Mink [dustygroove.com] .

    • Take a pinch of Standard Linux [linuxbase.org] Wrap it up in Xen [cam.ac.uk] Add a touch of SELinux [nsa.gov] And a little bitty bit of Globus [globus.org] Oh like a Sandboxed Platform [blogspot.com] Oh Lordy, Lordy, mixed with Free and Open Source Code [freshmeat.net] You know you lump it all together And you get a lump of poorly integrated junk.
  • What it's all about (Score:2, Interesting)

    by Y2 ( 733949 )
    "Grid" is all about "You let me use your spare cycles, and I'll pretend I'm going to let you use my spare cycles in return."
  • Oh come on. (Score:4, Informative)

    by Moderation abuser ( 184013 ) on Monday February 21, 2005 @04:35PM (#11739250)
    "Grid" technology to do this stuff has been around for decades e.g. NQS, hell NASA gave away PBS in the 80s & 90s.

    The problem is that most of the CPUs out there run Windows, which is currently damned near useless for this kind of thing. It'll require a rewrite of the OS to take proper advantage of the potential of a network of Windows boxes for general-purpose computing. OTOH, a couple of shell scripts and SGE (http://gridengine.sunsource.net/) do the job on Linux and other Unix systems.

  • by zarathustra6625 ( 807247 ) on Monday February 21, 2005 @04:55PM (#11739400)
    What no one is mentioning is that the big clusters/grids that Sun/IBM are building to later sell over the network are dependent on the ratio between network speed and batch file sizes.

    EXAMPLE: IBM is currently offering a CPU/hour service in Houston to oil and gas companies. Sounds great till you realize the multi-terabyte files that feed such a massive compute service are too big to be readily sent over the network. Instead, customers use vans to haul tape and disk over to IBM, which then runs the processing on it.

    What is the bandwidth of a station wagon? Right now it's faster than the Internet on a 20-mile drive across Houston.

    But take it a step further and the ratio remains. What if I wanted to pay Sun by the hour for CPUs while I worked on a big Maya render of 200 gigs? By the time I've sent that over a cable modem, have I gained a ton in performance time?

    The problem I see is that we are making CPUs massively parallel, but not networks. So will it EVER make sense to send a massive file to a commercial grid over a single network connection?

    Someone should do the math.
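
    Actually, here's a rough pass at it (a sketch with round, circa-2005 assumptions):

        # A station wagon full of tapes vs. the network, 20 miles across Houston.
        data_tb = 5                    # a multi-terabyte batch job
        drive_hours = 1.0              # the 20-mile drive, loading included
        wagon_mbps = data_tb * 8e6 / (drive_hours * 3600)   # ~11,000 Mbit/s

        cable_mbps = 5                 # a generous cable-modem uplink
        render_gb = 200                # the Maya example above
        upload_hours = render_gb * 8e3 / cable_mbps / 3600  # ~89 hours
        print(f"wagon: ~{wagon_mbps:,.0f} Mbit/s; 200 GB upload: ~{upload_hours:.0f} h")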
    • 100 Gigabit is in the experimental phase and 10 Gigabit is available if you have the cash; LANs are starting to get the kind of throughput that makes the motherboard I/O system the bottleneck.

      It's only the Internet that's holding back Dispersed Interconnected Grids (DIGs). But that's why there's an Internet2, to focus on greater bandwidth and reduced latency. With such large data sets, the other side of the coin is data compression, or the ability to transform the data into something more manageable while...
    • That problem is being addressed while we type. In your example, and in many grid applications, you know in advance where your communication partner is in a networking sense. You don't need a full-blown packet-forwarding IP network with $1M core routers. 'Just' use dynamically switched lambdas (lightpaths) and send your grid traffic directly to the destination. The good thing about this? Well, for a given required capacity, layer 3 hardware (think IP) is 10 times more expensive than layer 2 hardware (think Ethernet)...
  • I will probably look back on this in 5 years and be like, "That was a moronic thing to say." But I think it would be an interesting concept if the whole network shared CPU cycles. Kind of a communist state of processing, especially if storage were distributed as well: everything running asynchronously in one giant mass of processing power, the whole Internet, hundreds of millions of CPUs running ASMP. Crypto would be dead... Sounds like fun to me.

    Like I said, though, I will probably look back and call myself a moron.
  • A Beowulf cluster of computer grids.
  • "Just-in-time-enterprise-resource-management! People, that's the paradigm"

    "Deployment of deliverables on-cycle and on-quota"

    I HATE management-ese. It's nothing but BS. "...deployment in the science space," "...focus on vertical particulars, financial services for example..."

    If you need to generate obtuse buzzwords to justify your job, I need to generate ways and means of deploying you at the unemployment line. This is worse, because the guy seems to come from academia, not industry.

  • http://www.worldcommunitygrid.org/

    Current Project:

    Human Proteome Folding Project: A layperson's Explanation
    Proteins are essential to living beings. Just about everything in the human body involves or is made out of proteins.

    What are proteins?
    Proteins are large molecules that are made of long chains of smaller molecules called amino acids. While there are only 20 different kinds of amino acids that make up all proteins, sometimes hundreds of them make up a single protein.

    Adding to the complexity, protein...
