A Look At CERN's LHC Grid-Computing Architecture 53
blair1q writes "Using a four-tiered architecture (from CERN's central computer at Tier 0 to individual scientists' desk/lap/palmtops at Tier 3), CERN is distributing LHC data and computations across resources worldwide to achieve aggregate computational power unprecedented in high-energy physics research. As an example, 'researchers can sit at their laptops, write small programs or macros, submit the programs through the AliEn system, find the necessary ALICE data on AliEn servers, then run their jobs' on upper-tier systems. The full grid comprises small computers, supercomputers, computer clusters, and mass-storage data centers. This system allows 1,000 researchers at 130 organizations in 34 countries to crunch the data, which are disgorged at a rate of 1.25 GB per second from the LHC's detectors."
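As a rough sanity check on the quoted 1.25 GB/s figure, here is a naive back-of-envelope in Python (it assumes continuous running, which the LHC does not actually do, and ignores the trigger-level data reduction discussed further down the thread):

# Back-of-envelope volume implied by the quoted 1.25 GB/s detector output.
# Assumes continuous running, which the LHC does not actually do.
rate_gb_per_s = 1.25

per_hour_tb = rate_gb_per_s * 3600 / 1000        # ~4.5 TB/hour
per_day_tb = rate_gb_per_s * 86400 / 1000        # ~108 TB/day
per_year_pb = rate_gb_per_s * 86400 * 365 / 1e6  # ~39 PB/year if it never stopped

print(f"{per_hour_tb:.1f} TB/hour, {per_day_tb:.0f} TB/day, {per_year_pb:.1f} PB/year")

That hypothetical worst case is far above the ~15 PB/year actually collected (see the LISA presentation mentioned below), which is the whole point of the trigger and tiering architecture.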
I wonder when.. (Score:1)
I wonder when we will have the equivalent computing power at home? :)
Re: (Score:1, Funny)
According to Moore's Law.. about 4 Hitlers away
Damn, that's Godwin's.. Moore's says about 12 years.
Re: (Score:1)
I wonder when we will have the equivalent computing power at home? :)
When you create a black hole at home. Simple!!!
I have a dream... (Score:1, Funny)
Your post makes me wonder about a future where I have a home computer powerful enough to run an algorithm which downloads as many tracks off of iTunes as it needs and then can compute by extrapolation the future hits of RIAA, before they are released.
One wonders whether the courts would find that such a program is a circumvention of DRM for the purposes of the DMCA. Unfortunately, the computer, which can answer that question, will be destroyed by the construction of a .....
(Ouch. I should go get some sleep.)
Re: (Score:1)
And someone started such a simulator: http://everything2.com/title/Monkey+Shakespeare+Simulator [everything2.com] That's what you use this for. And Pr0n.
Speaking as an LHC physicist... (Score:2)
Re: (Score:2)
That's 3.7 Libraries of Congress per hour for those of us on the other side of the pond.
Re:1.25GB/sec not that much. (Score:4, Insightful)
A single 10gb ethernet connection can handle that quite easily.
Eh. A 10 Gb Ethernet connection can't handle 1.25 GB/s at all, not to mention doing it reliably. Theoretically, 10 Gb/s is exactly 1.25 GB/s, but then you need to account for protocol overhead, packet loss, and so on.
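A quick framing-overhead estimate supports this point (assuming standard TCP/IPv4 over Ethernet; the exact numbers shift with other encapsulations such as the AoE/FCoE options the replies mention):

# Payload throughput of a 10 Gb/s Ethernet link carrying TCP/IPv4,
# assuming standard 1500-byte and 9000-byte MTUs (an assumption; other
# encapsulations change the per-frame overhead).
line_rate_bytes = 10e9 / 8                    # 1.25e9 B/s on the wire

def payload_rate(mtu):
    per_frame_overhead = 7 + 1 + 14 + 4 + 12  # preamble+SFD, Ethernet header, FCS, inter-frame gap
    ip_tcp_headers = 20 + 20                  # IPv4 + TCP, no options
    wire_bytes = mtu + per_frame_overhead
    payload_bytes = mtu - ip_tcp_headers
    return line_rate_bytes * payload_bytes / wire_bytes

print(payload_rate(1500) / 1e9)  # ~1.19 GB/s -- already short of 1.25 GB/s
print(payload_rate(9000) / 1e9)  # ~1.24 GB/s with jumbo frames -- still no headroom

Even with jumbo frames the usable payload rate falls just short of the 1.25 GB/s the detectors produce, before counting any retransmissions.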
Re: (Score:3, Interesting)
You don't have to use TCP/IP over ethernet, you know. AoE & FCoE come to mind.
There are very few ways you might lose packets in a well-built local data-link network. Collisions/congestion are a thing of the past. Modern networking gear is fully capable of forwarding packets at full speed. Packets don't just go flying out of CAT-6A, never to be seen again.
And yes, bonding two 10Gbps ethe
Re: (Score:2)
You don't have to use TCP/IP over ethernet, you know. AoE & FCoE come to mind.
Mechanisms for ensuring reliable transfer of data aren't exclusive to TCP/IP. There is also some overhead in the packet headers (yes, it could be made very small if you use non-standard Ethernet frames).
There are very few ways you might lose packets in a well-built local data-link network.
Well, yeah, depending on how you define local. I don't know what distances they need to transfer the data at CERN, and I imagine there could be all sorts of nasty EM fields around a 14 TeV particle accelerator.
Anyway, to say you could easily handle this data stream on a single 10 Gb/s Ethernet connection is
truly amazing (Score:5, Funny)
I was having lunch with some CERN guys a couple weeks ago, and was asking them about the speed of their analogue-to-digital converters. I don't remember what the number was, but it seemed low to me, something like 200kHz. So, of course, I had to point out that *my* cheapo converters ran faster than theirs by more than an order of magnitude. They responded with "well, each of our converters does 200kHz on all of our 4000 channels at the same time, so we're really recording at..."
They won.
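For what it's worth, the aggregate rate that anecdote implies works out as follows (the sample width is my assumption; the original numbers were admittedly half-remembered over lunch):

# Aggregate sampling rate implied by the lunch anecdote:
# ~200 kHz per channel across ~4000 channels.  Bytes per sample is a
# guess (2 bytes would be a typical 12- to 16-bit ADC word).
channels = 4000
samples_per_s = 200e3
bytes_per_sample = 2                          # assumption

aggregate_samples = channels * samples_per_s  # 8e8 samples/s
aggregate_bytes = aggregate_samples * bytes_per_sample
print(aggregate_samples / 1e6, "Msamples/s")  # 800 Msamples/s
print(aggregate_bytes / 1e9, "GB/s")          # ~1.6 GB/s before any triggering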
Re: (Score:2)
This, coming from someone who is too chicken to post their troll material under their real name and take the karma hit like a man.
200kHz x 400 channels is nothing (Score:2)
Re: (Score:2)
You know, I don't really remember the exact numbers we were talking about at lunch. I'm sorry if I got some of the numbers wrong, particularly if they ended up being far too small; that would annoy me.
I just remember being very impressed at the insane amount of total data coming in. I'm definitely more used to the setup of a single GHz ADC, switching between a handful of channels.
There's a big difference in scale between a condensed matter experiment, where I get to do absolutely everything myself, and som
In other words: The LHC Grid (Score:1, Funny)
is a botnet!
Thanks in advance.
Yours In Akademgorodok,
K. Trout
Good boffin! Who's a smart boffin? Yes you are! (Score:1)
I want more! (Score:1)
I was quite saddened to find that this 'Look at CERN's LHC GRID' ....didn't include any pictures. :-(
Re:I want more! (Score:5, Interesting)
LHC@Home (Score:2)
I just wish they would send some more work units down the LHC@Home pipe. None of my computers have done any work for that project in ages.
-l
Re: (Score:2)
Hrm, maybe they think my computers are too slow or something. It's been well over a year since I did any LHC work.
-l
Re: (Score:2)
Not likely; an Athlon XP 1700+ with 768 MiB of RAM that I keep around here got maybe a dozen workunits in the last month.
It seems the project is now gearing up specifically for calculations for a later stage of the LHC, when it moves to operating at greater power.
Re: (Score:2)
I have another theory. I don't leave any computers on at night so perhaps the jobs are going out during the European daytime when mine are offline and there is simply not enough work to go around, yet.
-l
Re: (Score:1)
One actual example (specifics removed to protect the innocent):
"So you have any proof of such IT incompetence by them, it's not just jealousy right ?"
"Oh dear no! I am a story teller guy, I dont deal with proof! Proofs are a wikipedia problem..."
and so on...
Correction (Score:3, Informative)
Sorry for being pedantic, but the article says there are three tiers between the central computer and the scientists' machines (which are Tier 4, not 3).
Re: (Score:2)
Tier 1
Tier 2
Tier 3
Re: (Score:2)
And finally Tier 4, which comprises, according to TFA, the scientists' desk/lap/palmtops.
Re: (Score:2)
Correct. Tier 3 is local batch farm facilities, etc., which aren't really part of the project as such.
Re: (Score:2)
Interesting. I don't remember writing it that way. Did someone edit the submission before it was posted? Or has it just been a long weekend?
Data to crunch (Score:5, Interesting)
As someone who worked on the processing of HEP experimental data for a while, let me say that there is a ton of work to do. You have particles entering the detector every ~40 ns and hundreds of different instruments making measurements, which leads to a ton of data very quickly. You then have to reconstruct the path of each particle based on the detector information, but it's not straightforward. The detector can have gaps in coverage; neutrinos (which are undetectable) can be created, carrying away momentum; particles from the previous event can still be in the detector; et cetera.
And all of the data crunching you do must be done in 40 ns, so that you're ready for the next set. (Of course, you can do some processing offline, but if you don't maintain a 40 ns average, then your data will start piling up.)
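To put the quoted spacing in perspective, here is a trivial conversion (the ~40 ns figure is this comment's; the reply below points out the design bunch spacing is 25 ns):

# Bunch-crossing rate implied by the quoted spacing.  The parent says ~40 ns;
# the correction further down uses the LHC design spacing of 25 ns.
def crossing_rate_mhz(spacing_ns):
    return 1e3 / spacing_ns      # 1 / (spacing in ns), expressed in MHz

print(crossing_rate_mhz(40))     # 25 MHz for the ~40 ns quoted here
print(crossing_rate_mhz(25))     # 40 MHz for the 25 ns design spacing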
Re: (Score:2)
Holy crap! I had no idea that the relativistic speeds involved would cause the mass to increase that much!
Re:Data to crunch (Score:5, Informative)
You have particles entering the detector every ~40ns and hundreds of different instruments making measurements, which leads to a ton of data very quickly.
Not exactly true. It's running at 40 MHz, so that's 25 ns bunch spacing. Further, you don't exactly have to 'crunch' the data as it comes in; there are multiple triggers that throw lots of data away, based on momentum cuts and other criteria, before it ever makes it out of the detectors.
In ATLAS, for example, there are ~10^9 interactions/sec. The Level 1 trigger consists of fast, custom electronics programmed in terms of adjustable parameters that control the filtering algorithms. Input comes from summing electronics in the EM and hadron calorimeters, and from signals in the fast muon trigger chambers. The info is rather coarse at this point (transverse momentum cuts, narrow jet criteria, etc.), and at Level 1 the rate is decreased in about ~2 us (including communication time) from 40 MHz to about 75 kHz. Level 2 then takes a closer look, taking more time and focusing on specific regions of interest (RoIs). This process takes about 10 ms, and the rate is reduced to about 1 kHz for sending to the event filter. There, the event filter uses the full granularity of the detector ('detector' meaning all the bits: the inner detectors (pixels, strips, transition radiation tracker), the calorimeters, and the muon tubes at the outside radius) and runs whatever selection algorithms are in use. This takes a few seconds, and the output is reduced to about 100 Hz and written to disk for a gazillion grad students (like myself) to analyze endlessly and get our PhDs.
There is much more to it of course, but you can find info about it online if you really are interested in the details. Have a look at the ATLAS Technical Design Report: http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/TDR/TDR.html [web.cern.ch]
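A toy model of the rate reduction described above; the rates are the ones given in the parent comment, while the per-event size written to disk is my assumption (roughly the ~1.5 MB ballpark often quoted for ATLAS). See the Technical Design Report linked above for the real figures.

# Toy model of the ATLAS trigger cascade rates given in the parent comment.
# The per-event size written to disk is an assumption.
stages = [
    ("bunch crossings", 40e6),  # Hz, collision rate seen by the detector
    ("Level 1",         75e3),  # Hz, after the custom-electronics trigger (~2 us)
    ("Level 2",          1e3),  # Hz, after region-of-interest processing (~10 ms)
    ("event filter",     100),  # Hz, full-granularity selection, written to disk
]

event_size_mb = 1.5             # assumption
for name, rate_hz in stages:
    print(f"{name:>15}: {rate_hz:>10.0f} Hz")

to_disk = stages[-1][1] * event_size_mb
print(f"written to disk: ~{to_disk:.0f} MB/s at {event_size_mb} MB/event")

So the 1.25 GB/s coming off the detectors is whittled down by several orders of magnitude before the grid ever sees it.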
USENIX LISA presentation (Score:2, Interesting)
There was a good presentation at LISA '07 on this entitled "The LHC Computing Challenge":
http://www.usenix.org/event/lisa07/tech/tech.html#cass
It was given by Tony Cass, who is/was "responsible for the provision of CPU and data storage services to CERN's physics community". They're planning on collecting 15PB/year.
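For scale, that 15 PB/year figure as a sustained average rate (straight division; in practice the collection is bursty rather than uniform):

# 15 PB/year expressed as a sustained average rate, assuming uniform
# collection over a calendar year (it isn't, in practice).
pb_per_year = 15
avg_gb_per_s = pb_per_year * 1e6 / (365 * 86400)
print(f"~{avg_gb_per_s:.2f} GB/s averaged over the year")  # ~0.48 GB/s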
Re: (Score:2)
Re:lhc infrastructure history? (Score:4, Funny)
I mean: who could have guessed the processor speed and disk space we have now?
Gordon Moore?
As a Tier-3 manager... (Score:3, Interesting)
I can say that the article doesn't explain it very well. Since CERN has been calling the sites "Tier", this terminology has become a buzzword, and everything is a Tier (the managers call their services "Tiered" just to make them sound important).
Tier0 and Tier1 are well described by the article. Tier2 sites are mostly computing clusters, with of course big storage, but they're mainly for analysis. Tier3 sites are like Tier2 but not quite: they are "uncertified" Tier2s in the sense that they do not strictly adhere to the Tier2 standards in terms of middleware, configuration, and policies.
Tier4... never heard of that; I think the Tier buzzword backfired and now they're calling their desktops Tiers. When I started managing the Tier3 we did not even call it that... it was just a cluster.
Imagine (Score:1)
Imagine a Beowulf cluster of these! \o/