A Simple Grid Computing Synchronization Solution
atari_kid writes "NewScientist.com is running an article about a simple solution to the synchronization problems involved in distributed computing. Gyorgy Korniss and his colleagues at the Rensselaer Polytechnic Institute propose that each computer in a grid synchronize by occasionally checking with a randomly chosen computer in the network, rather than centralizing the grid under a global supervisor."
Similar ideas to P2P (Score:4, Insightful)
The gist is: use a mathematical ploy to ensure that the amount by which the system can degrade over time is compensated for by the simplest system possible.
This idea could perhaps be taken further...
Re:News Flash! (Score:2)
Also in a grid the machines are presumably more reliable - they'll stay up for a much longer time than typical P2P hosts.
So I think this is a pretty cool - and novel - idea.
Since it might help grids scale better, and many grids run Linux, it's another step in the world domination campaign.
Gridshare? (Score:1)
But... (Score:3, Informative)
Not as I read it... (Score:3, Interesting)
Each individual computer makes occasional checks with randomly-chosen others, to ensure it is properly synchronised.
"This means each individual processor only has to communicate with a finite number of others," says Korniss.
To me this would imply processors being 'grouped', with different groups checking with one another randomly. E.g. if you have 2 groups of 2 processors (to keep the example simple), group A gets its timings by checking with a random processor in group B and vice versa. This way a whole group cannot go out of sync, because its timings are determined by a different group.
Of course if I'm talking crap I'm absolutely certain someone will tell me.
Re:But... (Score:5, Insightful)
It is more like the way that an entire auditorium full of people can clap in unison without a leader.
Each node just queries some other random node, and if it is behind that node, it advances a little, (say 10% of the difference,) and if it is ahead of the other node, it backs up a little. This way, by repeatedly seeing how the others are doing, each node tracks onto the average of the group. The goal isn't to be right, it is just to agree.
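A minimal sketch of that update rule in Python (the 10% gain and the example numbers are my own illustrative choices, not from the article):

    import random

    GAIN = 0.1  # fraction of the difference to correct on each check

    def sync_step(my_time, peer_time, gain=GAIN):
        # Move a fraction of the way toward the peer. The sign of the
        # difference handles both cases: advance if behind, back up if ahead.
        return my_time + gain * (peer_time - my_time)

    # Example: a node at t=105 checks one randomly chosen peer.
    times = [100.0, 101.5, 99.0, 102.0]
    my_time = sync_step(105.0, random.choice(times))

Repeated often enough, every node's clock gets pulled toward the group average, which is all the agreement the scheme needs.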
But... (Score:1)
...there is a leader. People clapping in unison only do so in the presence of a rhythmic audio source, namely music.
This is the equivalent of a central timing server. When people start to go by their neighbours (i.e. people looking around with excited looks on their faces ["aren't we cool, clapping to this music!"]) and stop listening to the source of the music, one half of the auditorium gets waaay off beat.
My apologies if you were referring to people beginning to clap after a performance, because you have a point there.
My $0.02 CDN
Re:But... (Score:2)
Uh, no. People clapping in an auditorium can hear the combined audio output of everybody else clapping. I'm not just listening to one random person in the audience.
Without thinking too hard about it, it seems that what's needed is not just a random (as in unpredictable) number, but a well distributed random number, so that you avoid the formation of subgroups that are just polling each other.
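A hedged sketch of what that selection might look like (node IDs standing in for real addresses): as long as the peer is drawn uniformly from the whole node list, no node can get stuck polling a fixed clique.

    import random

    def pick_peer(my_id, node_ids):
        # Uniform choice over everyone but myself: over time each node
        # samples the entire network, so closed subgroups can't form
        # by accident of the selection scheme itself.
        peer = random.choice(node_ids)
        while peer == my_id:
            peer = random.choice(node_ids)
        return peer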
Re:But... (Score:2, Interesting)
If there is a subset of computers that only consult each other, and never any of the other computers, and none of the other computers consult these, then there is a much greater probability of drift for that set of computers.
Just my interpretation, I could be wrong.
-Rusty
Re:But... (Score:2)
Law of averages is such a wonderful thing.
The computer equivalent of... (Score:1, Funny)
However (Score:1)
The real answer is a smaller-scale supercomputer controlling the distributed computing.
Re:However (Score:1)
In addition, there are massive problems with scaling if you have a central controller. You simply cannot add more nodes to the system past a certain point without heavily loading the controller.
Re:However (Score:3, Interesting)
So let's say a computer named Louise is trying to stay in sync with the group as a whole. At some point it checks with another computer, Virginia, to see how far ahead it is (it's a very fast computer, it knows it'll be ahead). It finds that it's not very far ahead at all, so it corrects just a small amount. Next time it wants to check itself, it checks with Algernon. Algernon is a 7 year old Macintosh Performa. Louise finds itself to be way, way ahead, and holds itself back a lot.
The point is that the size of Louise's average correction depends directly on how far ahead it tends to be, so while it'll always be a bit out of sync, it'll keep itself from ever getting too far off. It's a matter of statistics and keeping errors within acceptable ranges, rather than achieving perfection.
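A toy simulation of that statistical argument (node count, clock rates and the 10% gain are all invented for illustration): each node runs at its own speed, so the clocks drift apart every step, but the random corrections keep the overall spread bounded instead of growing forever.

    import random

    N, STEPS, GAIN = 20, 1000, 0.1
    rates = [random.uniform(0.9, 1.1) for _ in range(N)]  # Louises and Algernons
    clocks = [0.0] * N

    for _ in range(STEPS):
        for i in range(N):
            clocks[i] += rates[i]              # every node runs at its own speed
        for i in range(N):
            j = random.randrange(N)            # check one random peer...
            if j != i:
                clocks[i] += GAIN * (clocks[j] - clocks[i])  # ...and partially correct

    print("spread after %d steps: %.2f" % (STEPS, max(clocks) - min(clocks)))

Without the correction step the spread would grow linearly with time; with it, the spread settles around a fixed value set by the rate differences and the gain.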
the details are scant, but this sounds like.... (Score:4, Interesting)
"Greedy algorithms work in phases. In each phase, a decision is made that appears to be good, without regard for future consequences. Generally, this means that some local optimum is chosen. This 'take what you can get now' strategy is the source of the name for this class of algorithms. When the algorithm terminates, we hope that the local optimum is equal to the global optimum. If this is the case, then the algorithm is correct; otherwise, the algorithm has produced a suboptimal solution. If the best answer is not required, then simple greedy algorithms are sometimes used to generate approximate answers, rather than using the more complicated algorithms generally required to generate an exact answer."
http://www.cs.man.ac.uk/~graham/cs2022/
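Not from the linked notes, just the standard classroom illustration: greedy coin-change takes the largest coin that fits at each step, which happens to be optimal for some denomination sets and suboptimal for others.

    def greedy_change(amount, denominations):
        # Always grab the largest coin that still fits.
        coins = []
        for coin in sorted(denominations, reverse=True):
            while amount >= coin:
                amount -= coin
                coins.append(coin)
        return coins

    print(greedy_change(6, [1, 3, 4]))  # [4, 1, 1]: three coins, but [3, 3] is optimal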
Re:the details are scant, but this sounds like.... (Score:2, Funny)
Yup, good old mom. How right she was. Nowadays, we've got travesties like the Selfish Gene [amazon.com] kicking around the gay gene [theonion.com] (the second one down.)
Ah, and it all comes down to... (Score:1)
Let's keep these ideas, since they give something new not only to technology but to the whole philosophy Western civilization is based on.
We are used to masters and slaves; what about all equal?
All responsible.
All powerful.
And this is somehow news? (Score:2, Informative)
Why is this news?
Distributed systems that do not rely on a centralised authority, be it for synchronising or resource distribution, are far from a new thing. To name a random example (and you can find a dozen others with five minutes of Googling), the Prospero Resource Manager [isi.edu] was a USC project started in the early 90s that relied on distributed authorities with no centralised command centre.
Furthermore, if the computers are self-controlling and not guarded by anything besides their internal mechanisms that rely on the checks on other computers, the potential danger lies in a computer in the grid having a seriously fscked-up internal state. In other words, can a malfunctioning computer be trusted to monitor itself correctly? I think not.
Re:Internet Connections (Score:1)
Clearly this idea will only work where synchronization was possible in some way to begin with; this is just a different way of doing it.
Mitigating factors (Score:3, Interesting)
The extent to which communication is a bottleneck in parallel processing depends strongly on the problem at hand and the algorithm used to tackle it. Some problems are amenable to batch processing (e.g. Seti@home), others require some level of boundary synchronisation (simple fluid codes), and others require synchronisation across all nodes (e.g. more complex plasma simulations).
For batch processing tasks, there isn't an issue. For the others, the loose synchronisation may be acceptable depending on the knock-on effect. Loosening the synchronisation obviously decreases the network and infrastructural burden on the job, allowing the algorithm to scale better, but the effect of this has to be carefully studied.
This is important to the application developer, but is not particularly relevant to grids per se. Grid activity, at the moment, is mainly towards developing code at a slightly lower level than application-dependent communication. It is already building up an infrastructure in which jobs can run, one which tries to remove any dependency on a central machine. This is because having a central server is a design that doesn't scale well (and also introduces a single point of failure). The Globus toolkit [globus.org] provides a basic distributed environment for batch parallel processing, including a PKI-based Grid security system: GSI.
On top of this, several projects [globus.org] are developing extra functionality. For example, the DataGrid project [edg.org] is adding many components, such as automatic target selection [server11.infn.it], fabric management [web.cern.ch] (site management, fault tolerance, ...), data management [cern.ch] (replica selection, [web.cern.ch] management [web.cern.ch] and optimisation [web.cern.ch], grid-based RDBMS [web.cern.ch]), network monitoring infrastructure [gridpp.ac.uk] and so on.
The basic model is currently batch-processing, but this will be extended soon to include sub-jobs (both in parallel and with a dependency tree) and an abstract information communication system which could be used for intra-job communication (R-GMA [rl.ac.uk]).
The applications will need to be coded carefully to fully exploit the grid, and reducing network overhead is an important part of this, but The Grid isn't quite at that stage yet. Still, we're close to having the software needed for people to just submit jobs to the grid without caring who provides the computing resource or where geographically they'll run.
NTP, anyone? (Score:2)
Re:NTP, anyone? (Score:2)
NTP is a hierarchy - tier 1, tier 2, etc.
This doesn't sound like one.
NTP is organized as a hierarchy because there is a "true" time which it attempts to track. Take away the atomic clocks, and NTP would collapse into a hierarchy-less system which nevertheless converges to agreement about some sort of "time".
The 80s retro work (Score:1, Insightful)
It was done at a research facility by a couple of the people involved in the development of Ethernet. They were synchronizing Xerox's phone/address list throughout the world by random contact and update. While they are certainly people with a hammer (random control) hitting anything looking vaguely like a nail, the experiment was a great success. They had a strong mathematical analysis developed in the medical community: communicable disease propagation. The system was far more reliable and lower cost (in communications) than any attempt to track the connections and run data propagation that "knows" an even slightly out-of-date view of available network connections.
If you think random cannot work in practice, don't use ethernet. For that matter, don't use semiconductor technology at all.
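A rough sketch of that random push-style propagation (the timestamp rule and data layout are my guesses at the simplest variant, not the actual Xerox protocol): each round, every node pushes its directory to one random peer, and newer entries win, so updates spread like an epidemic.

    import random

    def gossip_round(directories):
        # directories: list of dicts mapping name -> (timestamp, value).
        n = len(directories)
        for i in range(n):
            j = random.randrange(n)
            if j == i:
                continue
            for name, (ts, value) in directories[i].items():
                # A newer timestamp overwrites an older entry.
                if name not in directories[j] or directories[j][name][0] < ts:
                    directories[j][name] = (ts, value)

    # One node learns a new phone number; a few rounds spread it around.
    dirs = [dict() for _ in range(10)]
    dirs[0]["alice"] = (1, "555-0100")
    for _ in range(6):
        gossip_round(dirs)
    print(sum("alice" in d for d in dirs), "of 10 nodes have the update")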
Read the actual article (Score:2, Informative)