A Simple Grid Computing Synchronization Solution

atari_kid writes "NewScientist.com is running an article about a simple solution to the synchronization problems involved in distributed computing. Gyorgy Korniss and his colleagues at Rensselaer Polytechnic Institute proposed that each computer in a grid synchronize by occasionally checking with a randomly chosen computer in the network, instead of centralizing the grid under a global supervisor."
  • by Anonymous Coward on Saturday February 01, 2003 @10:57AM (#5203902)
    NewScientist also carried an article about how randomly moving search agents can speed up P2P technologies. The current idea, "Each individual computer makes occasional checks with randomly-chosen others, to ensure it is properly synchronised," is again very similar.

    The gist is: use a mathematical ploy to ensure that the amount by which the system can degrade over time is compensated for by the simplest system possible.

    This idea could perhaps be taken further...
  • But... (Score:3, Informative)

    by mrtorrent ( 598803 ) <mike@tGAUSShemikecam.com minus math_god> on Saturday February 01, 2003 @11:02AM (#5203929) Homepage
    If I understand this correctly, wouldn't it contain the potential for the computers to become very desynchronized? What I mean is that, since each computer may drift slightly from all the others on its own, and each computer synchronizes to another random computer in the group, couldn't some of the computers end up massively off?
    • Not as I read it... (Score:3, Interesting)

      by Some Bitch ( 645438 )
      From the original New Scientist article...
      Each individual computer makes occasional checks with randomly-chosen others, to ensure it is properly synchronised.
      "This means each individual processor only has to communicate with a finite number of others," says Korniss.


      To me this would imply processors being 'grouped', with different groups checking with one another randomly. E.g., if you have two groups of two processors (to keep the example simple), group A gets its timing by checking with a random processor in group B and vice versa. This way a whole group cannot go out of sync, because its timing is determined by a different group.

      Of course if I'm talking crap I'm absolutely certain someone will tell me.
    • Re:But... (Score:5, Insightful)

      by NortWind ( 575520 ) on Saturday February 01, 2003 @11:23AM (#5204055)

      It is more like the way that an entire auditorium full of people can clap in unison without a leader.

      Each node just queries some other random node, and if it is behind that node, it advances a little (say, 10% of the difference), and if it is ahead of the other node, it backs up a little. This way, by repeatedly seeing how the others are doing, each node tracks onto the average of the group. The goal isn't to be right, it is just to agree.
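
      As a rough illustration (my own sketch, not code from the paper; the ten-node setup and the 10% correction factor are just assumptions), that scheme might look like this in Python:

      import random

      def gossip_sync_step(clocks, correction=0.10):
          # One round: every node compares itself with one random peer
          # and moves a fraction of the way toward that peer's clock.
          n = len(clocks)
          for i in range(n):
              j = random.randrange(n)
              while j == i:                      # never compare with yourself
                  j = random.randrange(n)
              clocks[i] += correction * (clocks[j] - clocks[i])
          return clocks

      # Example: ten nodes that start badly out of step.
      clocks = [random.uniform(0.0, 5.0) for _ in range(10)]
      for _ in range(50):
          gossip_sync_step(clocks)
      print(max(clocks) - min(clocks))   # the spread shrinks toward zero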

      • ...there is a leader. People clapping in unison only do so in the presence of a rhythmic audio source, namely music.

        This is the equivalent of a central timing server. When people start to go by their neighbours (i.e. people looking around with excited looks on their faces ["aren't we cool, clapping to this music!"]) and stop listening to the source of the music, one half of the auditorium gets waaay off beat.

        My apologies if you were referring to people beginning to clap after a performance, because you have a point there.

        My $0.02 CDN
      • It is more like the way that an entire auditorium full of people can clap in unison without a leader.

        Uh, no. People clapping in an auditorium can hear the combined audio output of everybody else clapping. I'm not just listening to one random person in the audience.

        Without thinking too hard about it, it seems that what's needed is not just a random (as in unpredictable) number, but a well distributed random number, so that you avoid the formation of subgroups that are just polling each other.

      • That is a bad example. A large group of people never claps in unison. Some people are half a beat off or worse.
      • In my experience people only clap in unison if there is a leader. The difference between this idea and clapping is that with clapping you always follow the same person or few people, i.e. those near you. With this, you randomly choose other people anywhere in the audience, which spreads out the comparisons properly.
    • Re:But... (Score:2, Interesting)

      by rusty0101 ( 565565 )
      It would depend upon the quality of the randomness algorithm. If the algorithm is efficient, randomly selecting each of the computers before re-selecting any that have been hit before, then inaccuracies will go away (see the sketch below).

      If there is a subset of computers that only consult each other, and never any of the other computers, and none of the other computers consult these, then there is a much greater probability of drift for that set of computers.

      Just my interpretation, I could be wrong.

      -Rusty
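
      A minimal sketch of that "visit every peer once before repeating" selection (purely illustrative; the peer names are made up):

      import random

      def peer_cycle(peers):
          # Yield peers forever, reshuffling after each full pass, so every
          # node is consulted once before any node is consulted twice.
          while True:
              order = list(peers)
              random.shuffle(order)
              for p in order:
                  yield p

      # Usage: next(selector) gives the next peer to check against.
      selector = peer_cycle(["node-a", "node-b", "node-c", "node-d"])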
    • As long as the algorithm truly allows for a random distribution, drift should be kept absolutely to a minimum.

      Law of averages is such a wonderful thing. ;)
      • The problem is that you're betting on statistics, so every once in a while it will break down and you'll get a lemming effect where everyone jumps ahead. This is why every once in a while you need to sync up with an "official time"; this could be distributed and done much less frequently. With proper safeguards you could provide a solution with no defects.
    • But the primary node is still going to be what everyone is synchronizing off of. So if node B, say, gets unsynchronized, and node D randomly syncs with node B, then node D is out of sync as well as node B. This creates a whole mess of out-of-sync nodes. Then suppose node C comes in and syncs off the primary node; node C is in sync. Now B randomly selects C, so B gets synced and C stays synced, and eventually D will become synced too, since it won't randomly select itself. So everything eventually gets synced off of B, C, or the primary node.
  • Hey buddy, you got any change?

  • Who knows if the other computer is correct?

    The real answer is a smaller-scale supercomputer controlling the distributed computing.
      The real answer is a smaller-scale supercomputer controlling the distributed computing.
      The whole point of distributed computing is to get away from the reliance on central controllers. If the controller crashes, then the whole system dies, whereas in a distributed mechanism, several nodes may die and the entire system can keep running.

      In addition, there are massive problems with scaling if you have a central controller. You simply cannot add more nodes to the system past a certain point without heavily loading the controller.

    • Re:However (Score:3, Interesting)

      The other computer doesn't have to be correct--that's the beauty of it. Each computer isn't checking some particular other computer at regular intervals, they're choosing a different computer to check with each time.

      So let's say a computer named Louise is trying to stay in sync with the group as a whole. At some point it checks with another computer, Virginia, to see how far ahead it is (it's a very fast computer, it knows it'll be ahead). It finds that it's not very far ahead at all, so it corrects just a small amount. Next time it wants to check itself, it checks with Algernon. Algernon is a 7 year old Macintosh Performa. Louise finds itself to be way, way ahead, and holds itself back a lot.

      The point is that the average amount by which Louise finds itself to be ahead will depend directly on how far ahead of the group it actually is, so while it'll always be a bit out of sync, it'll keep itself from ever getting too far off. It's a matter of statistics and keeping errors within acceptable ranges, rather than achieving perfection.
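
      A back-of-the-envelope simulation of that argument (the node speeds and the 10% correction are assumptions, not figures from the article) shows the offsets staying bounded rather than growing:

      import random

      rates = [1.0, 1.2, 0.7, 1.05, 0.9]    # a fast "Louise", a slow "Algernon", etc.
      clocks = [0.0] * len(rates)

      for step in range(1000):
          # Every node advances at its own speed...
          for i, r in enumerate(rates):
              clocks[i] += r
          # ...then checks one random peer and corrects 10% of the gap.
          for i in range(len(clocks)):
              j = random.choice([k for k in range(len(clocks)) if k != i])
              clocks[i] -= 0.10 * (clocks[i] - clocks[j])

      mean = sum(clocks) / len(clocks)
      print([round(c - mean, 2) for c in clocks])   # bounded offsets, never perfect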
  • by smd4985 ( 203677 ) on Saturday February 01, 2003 @11:44AM (#5204154) Homepage
    A greedy algorithm: at every iteration, do the best that you can locally and hope it works out globally. Even if the solution/end-state is suboptimal, the huge resources needed for central coordination aren't required.

    "Greedy algorithms work in phases. In each phase, a decision is made that appears to be good, without regard for future consequences. Generally, this means that some local optimum is chosen. This 'take what you can get now' strategy is the source of the name for this class of algorithms. When the algorithm terminates, we hope that the local optimum is equal to the global optimum. If this is the case, then the algorithm is correct; otherwise, the algorithm has produced a suboptimal solution. If the best answer is not required, then simple greedy algorithms are sometimes used to generate approximate answers, rather than using the more complicated algorithms generally required to generate an exact answer."
    http://www.cs.man.ac.uk/~graham/cs2022/greedy/
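
    A classic toy example of the quoted idea (just an illustration, nothing to do with the grid-sync paper): make change by always taking the largest coin that still fits. Each step is locally optimal; for this coin set it happens to be globally optimal too, but for other coin sets it is only an approximation, which is exactly the trade-off described above.

    def greedy_change(amount, coins=(25, 10, 5, 1)):
        # At each step, grab the biggest coin that fits -- no lookahead.
        picked = []
        for coin in sorted(coins, reverse=True):
            while amount >= coin:
                amount -= coin
                picked.append(coin)
        return picked

    print(greedy_change(68))   # [25, 25, 10, 5, 1, 1, 1]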
    • My mother always said, "be careful, son, it's an algorithm-eat-algorithm world out there... Each node will always try to take whatever it can get... you must be sure to always maintain a fair share for yourself of the grid computing resources."

      Yup, good old mom. How right she was. Nowadays, we've got travesties like the Selfish Gene [amazon.com] kicking around the gay gene [theonion.com] (the second one down.)

      :) -- cruz

  • And it all comes down to the great idea of an international mesh of networked computers: the same philosophy that says tree structures are a bottleneck to a stable network. And indeed they are. A network that feeds from its members, not from the "great master-servers", can serve stability and performance well. In fact, its stability is guaranteed in the sense that it is impossible to cut a node off the net unless its whole neighborhood gets cut off.

    Let's keep these ideas, since they give something new not only to technology but to the whole philosophy Western civilization is based on.

    We are used to masters and slaves; what about all being equal?

    All responsible.

    All powerful.
  • Why is this news?

    Distributed systems that do not rely on a centralised authority, be it for synchronising or resource distribution, are far from a new thing. To name a random example (and you can find a dozen others with five minutes of Googling), the Prospero Resource Manager [isi.edu] was a USC project started in the early 90s that relied on distributed authorities with no centralised command centre.

    Furthermore, if the computers are self-controlling and not guarded by anything besides their internal mechanisms that rely on the checks on other computers, the potential danger lies in a computer in the grid having a seriously fscked-up internal state. In other words, can a malfunctioning computer be trusted to monitor itself correctly? I think not.

  • Mitigating factors (Score:3, Interesting)

    by CunningPike ( 112982 ) <paul@astro[ ]a.ac.uk ['.gl' in gap]> on Saturday February 01, 2003 @12:39PM (#5204426) Homepage
    It's always dangerous to comment about something without the full information available. The NewScientist article is quite vague and the Science paper that the article is based on is currently unavailable on-line, but I'll risk it ;)

    The extent to which communication is a bottleneck in parallel processing depends strongly on the problem at hand and the algorithm used to tackle it. Some problems are amenable to batch processing (e.g. Seti@home), others require some level of boundary synchronisation (simple fluid codes), and others require synchronisation across all nodes (e.g. more complex plasma simulations).

    For batch processing tasks, there isn't an issue. For the others, the looser synchronisation may be acceptable depending on the knock-on effects. Loosening the synchronisation obviously decreases the network and infrastructural burden on the job, allowing the algorithm to scale better, but the effect of this has to be carefully studied.

    This is important to the application developer, but is not particularly relevant to grids per se. Grid activity, at the moment, is mainly directed at developing code at a slightly lower level than application-dependent communication. It is already building up an infrastructure in which jobs can run that tries to remove any dependency on a central machine, because having a central server is a design that doesn't scale well (and also introduces a single point of failure). The Globus toolkit [globus.org] provides a basic distributed environment for batch parallel processing, including a PKI-based Grid security system: GSI.

    On top of this, several projects [globus.org] are developing extra functionality. For example, the DataGrid project [edg.org] is adding many components, such as automatic target selection [server11.infn.it], fabric management [web.cern.ch] (site management, fault tolerance, ...), data management [cern.ch] (replica selection, [web.cern.ch] management [web.cern.ch] and optimisation [web.cern.ch], grid-based RDBMS [web.cern.ch]), a network monitoring infrastructure [gridpp.ac.uk] and so on.

    The basic model is currently batch-processing, but this will be extended soon to include sub-jobs (both in parallel and with a dependency tree) and an abstract information communication system which could be used for intra-job communication (R-GMA [rl.ac.uk]).

    The applications will need to be coded carefully to fully exploit the grid, and reducing network overhead is an important part of this, but The Grid isn't quite at that stage yet. We're close, though, to having the software needed for people to just submit jobs to the grid without caring who provides the computing resource or the geographical location where their jobs will run.

  • Based on the (limited) details available, I'd say it sounds like they've just reinvented NTP -- except they've done it poorly, and without any security.
  • The 80s retro work (Score:1, Insightful)

    by Anonymous Coward
    In the early 80s I heard a talk at IBM's Almaden Research facility by a couple of the people involved in the ethernet development. They were synchronizing Xerox's phone/address list throughout the world by random contact and update. While they are certainly people with a hammer (random control) hitting anything looking vaguely like a nail, the experiment was a great success. They had a strong mathematical analysis developed in the medical community: communicable disease propagation. The system was far more reliable and lower cost (in communications) than any attempt to track the connections and run data propagation that "knows" an even slightly out-of-date view of available network connections.

    If you think random cannot work in practice, don't use ethernet. For that matter, don't use semiconductor technology at all.
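
    A hedged sketch of that "random contact and update" idea (often called epidemic or anti-entropy replication; the data model and merge rule below are my own assumptions, not Xerox's actual system):

    import random

    def gossip_round(replicas):
        # Each replica exchanges its directory with one random peer;
        # the entry with the higher version number wins on both sides.
        for a in replicas:
            b = random.choice([r for r in replicas if r is not a])
            for key in set(a) | set(b):
                ea = a.get(key, (0, None))
                eb = b.get(key, (0, None))
                newest = max(ea, eb)
                a[key] = b[key] = newest

    # Example: one site learns a new phone number; a few rounds spread it.
    replicas = [dict() for _ in range(8)]
    replicas[0]["alice"] = (1, "555-0100")
    for _ in range(5):
        gossip_round(replicas)
    print(sum("alice" in r for r in replicas))   # most or all replicas have it now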
  • by myd ( 85603 )
    The New Scientist summary is lame. Pick up a copy of Science and read the actual article if you can. It says, "Here, we show a way to construct fully scalable parallel simulations for systems with asynchronous dynamics and short-range interactions." This method, while interesting, does not generalize to a wide range of applications. For example, you could not apply this approach to molecular dynamics simulations, which involve primarily long-range interactions between atoms. Still, the authors of this article are clearly pretty clever.
