Technology Hardware Science

World's Largest Working Computing Grid 110

fenimor writes "UK particle physicists claim that they will demonstrate the world's largest, working computing Grid with over 6,000 computers at 78 sites internationally. The Large Hadron Collider Computing Grid is built to deal with 15 Petabytes of data each year from the Large Hadron Collider (LHC), currently under construction at CERN in Geneva. 'This is a great achievement for particle physics and for e-Science,' says Professor Tony Doyle, leader of GridPP. 'Our next aim is to scale up the computing power available by a factor of ten'."
This discussion has been archived. No new comments can be posted.
  • imagine (Score:2, Funny)

    by denthijs ( 679358 )
    the optimization flags on one of those,....
    • Imagine a beowulf cluster of.... Oh.... nevermind. :-(
  • At last (Score:5, Funny)

    by modest apricot ( 785620 ) on Sunday September 05, 2004 @05:29PM (#10164665)
    Finally, something to run doom3 on. Though I may still have to turn shadows off...
  • by -ing AnonymousCoward ( 810651 ) on Sunday September 05, 2004 @05:29PM (#10164667)
    But let's talk about something serious: how many FPS in Doom III?



    ...



    Mmmmm... That might be worth the upgrade then...
  • by djfray ( 803421 ) on Sunday September 05, 2004 @05:30PM (#10164674) Homepage
    finally something to deal with those pesky environmentalists.... :-P
  • Computing power (Score:2, Interesting)

    by nemexi ( 786227 )
    Does anybody know facts about the computing power of the grid? How many teraflops will it be able to achieve?
  • Grid vs. LHC@Home? (Score:3, Interesting)

    by Anonymous Coward on Sunday September 05, 2004 @05:31PM (#10164679)
    What's the point of the Grid thingy if they've also set up this?

    http://lhcathome.cern.ch/

    • My guess is reliability and specialization. Or, they're just greedy for power.
    • by David McBride ( 183571 ) <david+slashdot AT dwm DOT me DOT uk> on Sunday September 05, 2004 @06:02PM (#10164843) Homepage
      The LCG resources have several different things that most home machines do not:

      1) A Linux install with the requisite libraries for the already-written experiment analysis programs to run on.
      2) Fast network interconnects, both to other LCG cluster nodes at the same site (using Myrinet, Infiniband, etc.) and large network connections to other participating sites (i.e. 100 Mbit+).
      3) Large amounts of reliable local storage, i.e. 1 TB+.

      SETI@Home-like distributed computing problems only work well for problems which do not require large amounts of communication between nodes before, during, and after an individual run. Many problems do not fall into this category.
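      The distinction drawn above is between embarrassingly parallel workloads and communication-heavy ones. A minimal Python sketch of the first kind (a hypothetical Monte Carlo estimate of pi, not any actual LCG job): each work unit runs with no communication, and only the small per-unit results travel back for a final merge.

      ```python
      import random

      def run_work_unit(n_samples, seed):
          """One independent work unit: count random points landing inside
          the unit quarter-circle. No communication with other units is
          needed until the final merge."""
          rng = random.Random(seed)
          return sum(1 for _ in range(n_samples)
                     if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

      # Each unit could run at a different site; only an integer comes back.
      units = [run_work_unit(100_000, seed) for seed in range(10)]
      pi_estimate = 4 * sum(units) / (10 * 100_000)
      print(round(pi_estimate, 2))  # close to 3.14
      ```

      A problem where each step needs the previous step's result from another node would, by contrast, pay wide-area latency on every iteration.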
    • LHC@Home is not for processing the data output, but for helping them to position the magnets as they *build* the LHC.
  • This week, UK particle physicists will demonstrate the world's largest, working computing Grid. With over 6,000 computers at 78 sites internationally, the Large Hadron Collider Computing Grid (LCG) is the first permanent, worldwide Grid for doing real science. The UK is a major part of LCG, providing more than 1,000 computers in 12 sites. At the 2004 UK e-Science All Hands Meeting in Nottingham, particle physicists representing a collaboration of 20 UK institutions will explain to biologists, chemists and c
  • imagine a beowulf cluster of Half-Life 2 preloads! ;-)
  • Fenimor can't make hyperlinks.... :)
  • The CERN link should look like this [www.cern.ch].
  • Images (Score:5, Funny)

    by Limburgher ( 523006 ) on Sunday September 05, 2004 @05:40PM (#10164737) Homepage Journal
    I found a picture of the system here [yimg.com]. You may have to zoom in a bit to see individual machines.
    • Re:Images (Score:5, Informative)

      by rokzy ( 687636 ) on Sunday September 05, 2004 @05:44PM (#10164763)
      your joke being funny notwithstanding, that's a map of America, probably the least relevant place to show for this particular project.

      CERN and the Grid are European, notably Switzerland, France and the UK.

      the USA has plenty of great particle physics of its own (excitable New Yorkers beware - there's a particle accelerator on your doorstep - think of the children!) but this is not one of them.
      • Yeah, I know. That didn't even occur to me until after I'd posted. I'm usually one of the least Yankeecentric (awk?) Americans I know. I just found this image first.

        And, since we're picking nits, that's a photograph, which, while you could technically consider it a form of map, is at the same time the most accurate and least useful type of map. :P

        CERN and the Grid are European, notably Switzerland, France and the UK.

        Actually the CERN member states are a lot more than Switzerland, UK and France. In fact there are lots of Germans and Italians at CERN as well as a whole host of other nationalities.

        Furthermore the Grid is a lot more than just Europe. Speaking as a European, here in Canada we have Grid resources that will be used for the LHC experiments. Even the US is taking part although I understand they are having trouble because the US government is not

        • I didn't say it was just Switzerland, the UK and France; I just mentioned them because CERN is IN Switzerland and France, and a lot of the Grid is in the UK.

          The E in CERN stands for European. Of course there will be other nationalities involved, since science is international, but it's still a European centre.
    • Re:Images (Score:1, Informative)

      by Anonymous Coward
      I think the joke works better as a reference to the Hitchhiker's Guide to the Galaxy.
  • Coordination (Score:5, Interesting)

    by erick99 ( 743982 ) <homerun@gmail.com> on Sunday September 05, 2004 @05:41PM (#10164747)
    I wonder how they are coordinating the use of all of those computers? The article doesn't say that they will be exclusively for this project and, if they are not, then that is some task to have them all online and not otherwise busy. They must have some damned serious storage vaults as well if they are generating 15 Petabytes a year of data, which doesn't include the output from processing. Still, it must be something to have all of the "horsepower" at your command.

    Cheers,

    Erick
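    The 15 petabytes a year mentioned above implies a substantial sustained data rate. A rough back-of-the-envelope check (ballpark only, using the decimal petabyte):

    ```python
    # Rough sustained-rate estimate for 15 petabytes per year of raw data.
    PETABYTE = 10 ** 15                    # bytes (decimal definition)
    SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

    bytes_per_second = 15 * PETABYTE / SECONDS_PER_YEAR
    print(f"{bytes_per_second / 10**6:.0f} MB/s sustained")  # ~475 MB/s
    ```

    Close to half a gigabyte every second, around the clock, before any processing output is counted.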

    • Re:Coordination (Score:1, Informative)

      by Anonymous Coward
      I wonder how they are coordinating the use of all of those computers? The article doesn't say that they will be exclusively for this project

      Well, GridPP is exclusively particle physics, although there are other grids in construction. Large numbers of people will do large numbers of analyses with the LHC data - it's not just a case of running one job on all the data, it's a case of many jobs and many subsets of the data.

      Globus and digital certificates are also part of your answer.
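      The many-jobs-over-many-subsets pattern can be sketched in a few lines of Python. Everything here is hypothetical (the dataset and site names, and the trivial site-selection rule); real grid middleware such as Globus handles authentication, matchmaking and data transfer. The key idea is that jobs move to sites that already hold their input data, rather than petabytes of data moving to the jobs.

      ```python
      # Hypothetical replica catalogue: which sites hold which datasets.
      datasets = {
          "run-a": ["site1"],
          "run-b": ["site2", "site3"],
          "run-c": ["site1", "site3"],
      }
      # Hypothetical analyses submitted by different physics groups,
      # each needing only a subset of the full dataset.
      jobs = [("higgs-search", "run-a"),
              ("b-physics", "run-b"),
              ("calibration", "run-c")]

      def schedule(jobs, datasets):
          """Send each job to a site that already holds its input data."""
          plan = {}
          for name, dataset in jobs:
              sites = datasets[dataset]
              # Trivial deterministic site choice, just for the sketch.
              plan[name] = sites[len(name) % len(sites)]
          return plan

      plan = schedule(jobs, datasets)
      print(plan)
      ```

      A real scheduler would also weigh queue lengths, site availability and the submitter's certificate-based permissions.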
    • Re:Coordination (Score:1, Interesting)

      by Anonymous Coward
      I wonder how they are coordinating the use of all of those computers?

      Carefully. Well, with some very complex schedulers and batch systems and LDAP directories and SQL databases and bits from the Globus project and lots of other scripts and random crap. It's kind of a miracle it all works (when it works...), a bit like the Internet itself really.
    • How come that picture of the earth does not show stars? Were they purposely removed, or is it an artifact of the software for joining images from several satellites (if that is how it was done)?
      • Re:Coordination (Score:3, Informative)

        by MrNixon ( 28945 )
        Because the Earth is a LOT brighter than the stars (because the stars are far away), and to properly expose the Earth onto whatever medium is being used (film, CCD, whatever), less exposure is needed than would be necessary to pick up any stars (save the sun).

        Just like pictures from the moon - you'll not see any stars in pictures taken on the moon (by Neil Armstrong et al).

        Hope that helps
    • Re:Coordination (Score:3, Interesting)

      by steve_l ( 109732 )
      I have access to some of the machines; we donate idle systems to the project in exchange for low-cost (read: free) access to the SuperJANET network. When they aren't doing UK NeSC grid stuff I can bring up VMware images of whatever distro I feel like and run whatever stuff we need - in my case usually distributed testing of distributed software.

      That is how the grid works - it uses spare cycles on machines in the network. Unlike SETI@home, they are very fussy about bandwidth; you need a serious link to play. Most
  • they found themselves being upstaged by claria or some other spyware company with their legion of zombie computers.. :-P
  • Physics (Score:5, Funny)

    by penguinoid ( 724646 ) on Sunday September 05, 2004 @05:45PM (#10164767) Homepage Journal
    Getting the physics right has been an important part of many of our favorite 3D games lately...
  • Largest? (Score:5, Interesting)

    by anethema ( 99553 ) on Sunday September 05, 2004 @05:53PM (#10164808) Homepage
    I am not sure how they define largest...

    Are these 6000 super computers? Or just other computers?

    Distributed.net had around 330 thousand participants on the latest completed RC5 key. They had 15 thousand active on the last day of the challenge.

    I would say this is much larger in computer numbers, but since the article gives hardly any useful information, I'm not sure whether d.net has more computing power.

    However, there is this line: "By 2007, this Grid will have the equivalent of 100,000 of today's fastest computers working together to produce a 'virtual supercomputer', which can be expanded and developed as needed."

    So right now it isn't even 100 thousand computers, maybe not even close, so the computing power might be similar (assuming 15 thousand active computers on d.net).

    Either way, right now I highly doubt it's the largest ;)

    • Re:Largest? (Score:2, Informative)

      by Anonymous Coward

      So right now it isnt even 100 thousand computers, maybe not even close, so the computing power might be similar. (assuming 15 thousand active computers on d.net)


      The point is not so much assembling all that computing power now (the LHC won't come online till 2007 or so anyway, and you don't really need the Grid to run Monte Carlo) so much as assembling the infrastructure so that when 100 universities go out and buy new analysis farms in 2007, they can get tied together and used efficiently.
  • 10? (Score:5, Funny)

    by real_smiff ( 611054 ) on Sunday September 05, 2004 @05:54PM (#10164810)
    why a factor of 10? why not take it to.. eleven.

    for when your particle collider needs that little push over the cliff..

  • Copycat writeup (Score:5, Informative)

    by David McBride ( 183571 ) <david+slashdot AT dwm DOT me DOT uk> on Sunday September 05, 2004 @05:57PM (#10164826) Homepage
    That writeup looks a lot like the one at The Register [theregister.co.uk] -- which came out a good two days early, the same day the results were actually announced at the AHM conference.
  • by Hitmen ( 780437 ) on Sunday September 05, 2004 @06:05PM (#10164861)
    Erm, I think I read that wrong.
  • But... (Score:2, Funny)

    by rune2 ( 547599 )
    Can it run Longhorn?
  • by Louis Savain ( 65843 ) on Sunday September 05, 2004 @06:22PM (#10164924) Homepage
    Grid computing has been a target for IT developers and scientists for more than five years. It allows scientists to access computer power and data from around the world seamlessly, without needing to know where the computers are.

    The key word here is "seamlessly." The problem with a world grid is the latency introduced by communication between nodes. If a computation depends on results from another computation happening halfway around the world, I cannot see how a world grid can compete with a Linux cluster. Besides, unless there is provision for redundancy (sorry, I did not read the entire article), a critical node may be down due to a power outage or something as mundane as the cleaning people turning off the computer. This would bring everything to a halt.
    • by tkittel ( 619119 ) on Sunday September 05, 2004 @09:11PM (#10165667)
      True, and this makes it difficult for people who want to calculate protein folding or predict next week's weather. But for particle physics computations we hardly need any communication between nodes at all. Rather, we need something simulated a huge number of times (as in, "simulate this proton-proton collision 10 billion times") or "apply this fancy pattern recognition algorithm to each of these billions of events we took this week". Particle physics computations are to a large extent parallel in nature from the beginning.

      The grid related problems faced in particle physics are of another nature, such as ensuring that the data is copied around the various grid facilities as needed and of ensuring that even if a given node fails to execute its job for some reason it is rerun elsewhere automatically - that sort of thing.
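      The rerun-elsewhere behaviour described above can be sketched as a simple failover loop over candidate sites. All names here are hypothetical, and the simulated failure stands in for anything from a crashed batch node to a power cut:

      ```python
      import random

      def run_job(job, site, rng):
          """Pretend to run a job at a site; the site may fail."""
          if rng.random() < 0.3:  # 30% simulated failure chance per attempt
              raise RuntimeError(f"{site} failed while running {job}")
          return f"{job} completed at {site}"

      def submit_with_failover(job, sites, rng):
          """Try each site holding a replica of the input until one succeeds."""
          for site in sites:
              try:
                  return run_job(job, site, rng)
              except RuntimeError:
                  continue  # real middleware would log and reschedule here
          raise RuntimeError(f"all sites failed for {job}")

      rng = random.Random(42)
      result = submit_with_failover("reco-pass-1", ["ral", "cern", "in2p3"], rng)
      print(result)
      ```

      The data-replication half of the problem is the harder part in practice: failover like this only works if more than one site already holds the job's input.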
      • The grid related problems faced in particle physics are of another nature, such as ensuring that the data is copied around the various grid facilities as needed and of ensuring that even if a given node fails to execute its job for some reason it is rerun elsewhere automatically - that sort of thing.

        I appreciate your input on this matter. I tend to look at things from an AI/neural network perspective. So I thought, there is no way a brain could be simulated on a world grid because timing is crucial to the
  • Why is there a link to some crackpots theory of physics at the bottom of this article? Is this not a reputable source?
  • One tiny problem ... (Score:2, Interesting)

    by Anonymous Coward
    One of my buddies was an early numerical modeller. If I learned one thing from him it was that all the computer power in the world was no use if your model was even slightly defective. The models tended to 'blow up'. Imagine a hundred foot wall of water moving majestically down the estuary.

    Typical of stories about these giant computers, they don't really describe the problems they intend to solve. In a way, that is the more interesting story. Mind you, that story is much harder to tell if you want you
  • Imagine if they had a super big worldwide "grid" of computers all connected via some common protocol! It would be amazing!
  • by jimmysays ( 811031 ) on Sunday September 05, 2004 @08:17PM (#10165381)
    I was under the impression that the world's largest working grid was the United Devices grid.org project. They have over 2.5 million registered users and average over 300,000 work units returned every day. Check out www.grid.org - they are also doing real science.
    • Seeing as all the participants in all the public distributed computing projects only ever communicate with the central server, it isn't very accurate to call them grids. More like a many-spoked wheel without a rim. I assume the computers in this project will communicate with each other in a P2P fashion.
  • Is anyone else thinking here comes the Forbin Project!?
  • til we can get this as a laptop? And can you do something about the battery life?
  • by tod_miller ( 792541 ) on Monday September 06, 2004 @03:40AM (#10167174) Journal
    640 petabytes of memory should be enough for anyone, ever.

    Although, if 640kb sounded anything like 640 petabytes does now, I'll have to rake Moore's law over a barrel and say I doubt we will ever have computers with 640 petabytes of RAM as standard.

    Of course, I say that in jest; I would love for future people to read this post and laugh their tits off (some futuristic velcro tits no doubt)
