
'Shrinking Bull's-eye' Algorithm Speeds Up Complex Modeling From Days To Hours (mit.edu)

rtoz sends word of a new algorithm that dramatically reduces the computation time for modeling complex processes. Scientists from MIT say it conceptually resembles a shrinking bull's-eye, incrementally narrowing in on its target. "With this method, the researchers were able to arrive at the same answer as classic computational approaches, but 200 times faster." Their full academic paper is available on the arXiv. "The algorithm can be applied to any complex model to quickly determine the probability distribution, or the most likely values, for an unknown parameter. Like the MCMC analysis, the algorithm runs a given model with various inputs — though sparingly, as this process can be quite time-consuming. To speed the process up, the algorithm also uses relevant data to help narrow in on approximate values for unknown parameters."
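
For the code-minded, the general idea is to run an MCMC chain mostly against a cheap local approximation of the expensive model, spending true-model runs only occasionally to refresh that approximation. Below is a very rough Python sketch of that idea; it is not the authors' actual algorithm (which refines local polynomial or Gaussian-process surrogates adaptively, with error control), and the function names and constants here are made up for illustration.

    import numpy as np

    def expensive_log_post(theta):
        # Stand-in for a log-posterior whose evaluation would normally require
        # an expensive forward-model run (e.g. solving a large set of PDEs).
        return -0.5 * np.sum((theta - 1.0) ** 2)

    def fit_quadratic(points, values):
        # Least-squares fit of a separable quadratic a + b.x + c.x^2 to the
        # (theta, log-posterior) pairs gathered so far; returns a callable surrogate.
        points = np.asarray(points)
        d = points.shape[1]
        X = np.hstack([np.ones((len(points), 1)), points, points ** 2])
        coef, *_ = np.linalg.lstsq(X, np.asarray(values), rcond=None)
        return lambda t: coef[0] + coef[1:1 + d] @ t + coef[1 + d:] @ (t ** 2)

    rng = np.random.default_rng(0)
    theta = np.zeros(2)
    pts, vals = [theta.copy()], [expensive_log_post(theta)]
    surrogate = lambda t: vals[0]              # crude constant surrogate to start

    for step in range(2000):
        proposal = theta + 0.3 * rng.standard_normal(theta.size)
        if step % 50 == 0:                     # only occasionally pay for a true-model run
            pts.append(proposal.copy())
            vals.append(expensive_log_post(proposal))
            surrogate = fit_quadratic(pts, vals)
        # Metropolis accept/reject step, evaluated on the cheap surrogate.
        if np.log(rng.uniform()) < surrogate(proposal) - surrogate(theta):
            theta = proposal

    print("chain ended near the posterior mode (true value [1, 1]):", theta)

The real algorithm is much more careful about when to spend a true-model evaluation (it refines near the chain's current position based on estimates of the local approximation error, rather than on a fixed schedule), but the basic economy is the same: many surrogate evaluations per expensive model run.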


  • by goombah99 ( 560566 ) on Monday November 16, 2015 @11:55AM (#50940795)

    Uh, isn't this a toy version of the entire field of Estimation of Distribution Algorithms for parameter optimization? Anyone want to point out what's new here, since the MIT article sure didn't point it out.

    • by Anonymous Coward on Monday November 16, 2015 @12:22PM (#50941025)

      Indeed, it looks much like this. I happen to have a dozen EDA papers from the past couple of decades open on my desktop at this moment! The foremost of the three hard problems in estimation of distribution is how one describes a probability distribution over a high-dimensional space in a compact, low-dimensional way. An efficient, if naive, way to do this is to fit the distribution to a function like a Gaussian and then change the axes and width of the Gaussian to evolve it. One can then easily sample from and update this Gaussian to narrow down on the parameter guesses (a toy sketch of that loop appears at the end of this comment). This may be what the "bullseye" approach is doing, judging from the description. Another, more general approach I just happened to be re-reading today is Probability Collectives, which sort of divides the high-dimensional space into a low-dimensional product distribution. This can handle non-unimodal surfaces and, by adapting the basis for the product-space coordinates, adapt itself to the local topology. There are a lot of papers on this, including this one: http://www.worldscientific.com... [worldscientific.com] (perhaps not the best starting place, but I had the link handy!).

      The other two hard problems: first, how one optimally re-uses old data that was taken in what is now the low-probability region. One has to down-weight it relative to fresher samples, but how to do that is unclear. Second, there is the annealing schedule, which reflects the trade-off between exploration of the surface and exploitation of the available data. Anneal too quickly and you converge to a local minimum and miss the global one. Usually these choices are arbitrary, but Probability Collectives has some clever techniques to make them more principled. In general, setting the annealing rate by cross-validation is a good way to decide whether you should anneal faster or slower.
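
      A toy sketch of the naive fit-a-Gaussian-and-resample loop described above (made-up objective and constants, and none of the sample-reuse or annealing machinery that the hard problems are about):

        import numpy as np

        def objective(x):
            # Toy stand-in for an expensive black-box score; minimum at x = 3.
            return np.sum((x - 3.0) ** 2)

        rng = np.random.default_rng(1)
        dim, pop, elite = 5, 40, 10
        mean, cov = np.zeros(dim), 10.0 * np.eye(dim)

        for gen in range(30):
            samples = rng.multivariate_normal(mean, cov, size=pop)
            scores = np.array([objective(s) for s in samples])
            best = samples[np.argsort(scores)[:elite]]             # selection
            mean = best.mean(axis=0)                               # refit the Gaussian
            cov = np.cov(best, rowvar=False) + 1e-6 * np.eye(dim)  # ...to the elite samples

        print("estimated optimum (should be near all 3s):", mean)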

    • by notea42 ( 926633 )
      Yep - it looks like they rediscovered known methods for optimization using a series of cheap approximations of the black box. You find the optimal solution of the approximation and use that to choose one or more points to evaluate on the original black box. Then you update your approximation with the results and repeat. I was doing this in college back in 1999. I wrote code to generate pseudo-random functions to test on, and we used the same class of functions as approximations for the harder, more expensive
      • by notea42 ( 926633 )
        Found the other relevant paper - "Model-assisted pattern search methods for optimizing expensive computer simulations" by Seifert, Torczon, and Trosset at The College of William and Mary in VA
      • So, it's the computerized version of the back-of-the-envelope method for zeroing in on a correct answer?
    • by plopez ( 54068 )

      I'm curious about the claim "any complex model". Really? ANY complex model? Is this the silver bullet? The Holy Grail? After testing it on two "complex" (definition, please) models?

      So yeah, I was curious and it looks like they are doing sparse parameter estimation of sorts. I wonder how they deal with non-uniqueness and convergence on local maxima.

      What is just as important is knowing parameter sensitivity. If a parameter does not add much to the accuracy of the model, throw it out. This has its own pitfalls

  • Somewhat hyped (Score:5, Informative)

    by GlobalEcho ( 26240 ) on Monday November 16, 2015 @12:00PM (#50940853)
    It looks to me like the new algorithm [arxiv.org] offers no real speed advantage over Gaussian process optimization [wikipedia.org]. Rather, by making similar local approximations it gains some convergence proofs. Those are nice and all, but not often relevant to real-world applications.
    • It looks to me like the new algorithm offers no real speed advantage over Gaussian process optimization. Rather, by making similar local approximations it gains some convergence proofs. Those are nice and all, but not often relevant to real-world applications.

      Huh. You'd think they would have realized that more quickly.

      • Huh. You'd think they would have realized that more quickly.

        I'm sure the authors are well aware of it. It's the press hype that I'm pointing out.

    • by xiox ( 66483 )

      How is this relevant? The algorithm is for speeding up Markov Chain Monte Carlo (MCMC) analyses, which are very commonly used in science to examine posterior probability distributions for a particular model. I run MCMC codes most weeks, and they're becoming increasingly common in my field of science. Being able to speed up the analysis by a large factor by avoiding model likelihood evaluations would be very valuable.

      • How is this relevant? The algorithm is for speeding up Markov Chain Monte Carlo (MCMC) analyses

        The paper is phrasing it in terms of MCMC but it's more generally applicable if you think of it as an optimizer.

    • by durdur ( 252098 )

      As far as I know, the state of the art for nonlinear optimization when the objective is a very expensive process to run (or a simulation of a process) is Radial Basis Function modeling/estimation. Much of the work on this was done at Cornell University - see for example Rommel Regis's papers [sju.edu]. These algorithms do some random sampling, then build a model of the objective function, then based on that pick one or more new points to sample, and repeat (a toy sketch of this kind of loop appears below this thread).

      • I'm not familiar with it (and I'm a dilettante in the domain), but it sounds interesting. Can you recommend a starting point?
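
      For anyone who wants a feel for that sample/fit/optimize/repeat loop, here is a toy sketch (not taken from the Regis papers; the objective and all settings are invented): evaluate the expensive objective at a few random points, fit a radial-basis-function interpolant, take the interpolant's minimizer as the next expensive evaluation, and repeat.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import minimize

        def expensive_objective(x):
            # Toy stand-in for an expensive simulation-based objective.
            return np.sum((x - 2.0) ** 2) + np.sum(np.sin(5.0 * x))

        rng = np.random.default_rng(2)
        dim, lo, hi = 2, -5.0, 5.0
        X = rng.uniform(lo, hi, size=(8, dim))                 # initial random sample
        y = np.array([expensive_objective(x) for x in X])

        for it in range(15):
            surrogate = RBFInterpolator(X, y)                  # cheap model of the objective
            # Minimize the surrogate from several starts (it has local minima of its own).
            starts = rng.uniform(lo, hi, size=(10, dim))
            fits = [minimize(lambda x: surrogate(x[None])[0], s, bounds=[(lo, hi)] * dim)
                    for s in starts]
            x_next = min(fits, key=lambda r: r.fun).x
            # Spend one true evaluation at the surrogate's best point, then refit.
            X = np.vstack([X, x_next])
            y = np.append(y, expensive_objective(x_next))

        print("best point found:", X[np.argmin(y)], "value:", y.min())
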
    • It's hard to say without doing all the implementation work, but the paper does say that the algo is "...general enough to describe both local polynomial and Gaussian process approximations..." and there is a section called "Local Gaussian process surrogates". So, they do in fact incorporate this in the larger framework of their algo.

      In fact, they claim "...that the accuracy is nearly identical for all the cases, but the approximate chains use fewer evaluations of the true model, reducing costs by more than an order of magnitude for quadratic or Gaussian process approximations (Figure 10b)."

      • It's hard to say without doing all the implementation work, but the paper does say that the algo is "...general enough to describe both local polynomial and Gaussian process approximations..." and there is a section called "Local Gaussian process surrogates". So, they do in fact incorporate this in the larger framework of their algo.

        In fact, they claim "...that the accuracy is nearly identical for all the cases, but the approximate chains use fewer evaluations of the true model, reducing costs by more than an order of magnitude for quadratic or Gaussian process approximations (Figure 10b)."

        Indeed, though that quote is simply pointing out that the relative performance of their algorithm is at its best with its mode set to local Gaussian approximations.
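
      For the curious, a Gaussian process surrogate in its most bare-bones form is just GP regression on the (parameter, model output) pairs collected so far. A stripped-down illustration follows (squared-exponential kernel, fixed hyperparameters, no noise handling; the paper's local scheme additionally restricts the training points to a neighborhood of the chain's current position):

        import numpy as np

        def sq_exp_kernel(A, B, lengthscale=1.0):
            # Squared-exponential covariance between two sets of points.
            d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
            return np.exp(-0.5 * d2 / lengthscale ** 2)

        def gp_predict(X_train, y_train, X_new, jitter=1e-8):
            # GP posterior mean and variance at X_new, given training data.
            K = sq_exp_kernel(X_train, X_train) + jitter * np.eye(len(X_train))
            K_star = sq_exp_kernel(X_new, X_train)
            mean = K_star @ np.linalg.solve(K, y_train)
            var = 1.0 - np.sum(K_star * np.linalg.solve(K, K_star.T).T, axis=1)
            return mean, var

        # Made-up 1-D "expensive" function, sampled at six points.
        X_train = np.linspace(0.0, 4.0, 6)[:, None]
        y_train = np.sin(X_train).ravel()
        mu, var = gp_predict(X_train, y_train, np.array([[1.5], [3.5]]))
        print("predicted values:", mu, "predictive variances:", var)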

  • by leftover ( 210560 ) on Monday November 16, 2015 @12:01PM (#50940855) Homepage

    Is this not hill-climbing on an assumed-flat-ish response surface? There is a long history and a whole forest of approaches that use approximation when the "cost" function is expensive to evaluate. Solving a honking big set of PDEs certainly qualifies for that.

    A big part of that history is work to check whether the assumed degree of flatness actually conflicts with reality. That check can itself become computationally expensive.

    Fun stuff though!

  • IANACS (Score:2, Interesting)

    by Anonymous Coward

    So I'm a lay-idiot, but what's to stop this from getting stuck on a local maximum / minimum for a given parameter if it's not doing a comprehensive evaluation?

    • by Anonymous Coward

      Probably some amount of random retries...
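
      In its simplest form that's just multi-start local optimization: run a local optimizer from several random starting points and keep the best result, so one bad start doesn't leave you trapped. A toy illustration (made-up objective; MCMC-based methods like the one in the article address the problem differently, since the chain is meant to explore the whole distribution):

        import numpy as np
        from scipy.optimize import minimize

        def bumpy(x):
            # Toy objective with many local minima.
            return np.sum(x ** 2) + 3.0 * np.sum(np.sin(3.0 * x))

        rng = np.random.default_rng(3)
        starts = rng.uniform(-5.0, 5.0, size=(20, 2))          # 20 random retries
        results = [minimize(bumpy, s) for s in starts]
        best = min(results, key=lambda r: r.fun)
        print("best minimum found:", best.x, "value:", best.fun)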

  • by ai4px ( 1244212 ) on Monday November 16, 2015 @01:18PM (#50941577)
    Isn't this shrinking bullseye the same strategy that was used by Turing to throw out all the impossible solutions to the Enigma machine? This left the team with many fewer possible solutions to try by brute force.
  • by Anonymous Coward

    Algorithm Speeds Up Complex Modeling From Days To Hours . . .

    Kirk: Kirk to Enterprise.

    Spock: Spock here.

    Kirk: Captain Spock, damage report.

    Spock: Admiral, if we go "by the book," like Lieutenant Saavik, hours could seem like days.

    Kirk: I read you captain. Let's have it.

    Spock: The situation is grave, Admiral. We won't have main power for six "days". Auxiliary power has temporarily failed. Restoration may be possible, in two "days". By the book, Admiral.

    . . .

    Saavik: You lied!

    Spock: I exaggerated.

    Kirk:

  • "What do you want to eat?"
    "I dunno, what do you want to eat?"

    Sounds like a perfect fit!

  • by Anonymous Coward

    Nothing to see here
