Welcome to MilkyWay@home

What the new searches are doing

Message boards : Number crunching : What the new searches are doing
Travis
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Aug 07
Posts: 2046
Message 8369 - Posted: 15 Jan 2009, 17:05:27 UTC

You'll probably notice that there are workunits with new names available. All the different workunits are performing variations of Newton's method (http://en.wikipedia.org/wiki/Newton's_method_in_optimization) on the different stripes we're running our tests on. In general, what Newton's method does is take the first and second derivatives of a function (in our case the likelihood function) and use them to find its minimum. This tends to converge to the minimum VERY fast if it starts close to the minimum, and in the case of a quadratic function it only takes one step (our likelihood function is not quadratic, however).
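
To make the idea concrete, here's a minimal one-dimensional sketch of Newton's method on a made-up function (the function and starting point are just for illustration, not the project's likelihood):

```python
# Newton's method in one dimension, minimizing the toy function
# f(x) = x^4 - 3x^2 + x. Each step uses the first and second derivatives.

def f_prime(x):          # first derivative of f
    return 4 * x**3 - 6 * x + 1

def f_double_prime(x):   # second derivative of f
    return 12 * x**2 - 6

x = 2.0                  # starting point, chosen near a minimum
for _ in range(10):
    x = x - f_prime(x) / f_double_prime(x)   # the Newton step

print(x)                 # converges to a local minimum (f_prime(x) ~ 0)
```

If f were quadratic, the loop would land on the minimum after the very first step; for this quartic it takes a handful of steps, but they shrink rapidly once the iterate is close.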

What we're doing is taking all the points you're returning and performing a regression analysis on them to estimate the first derivative (the gradient) and the second derivative (the Hessian). If you remember any calculus, point - gradient/Hessian lands exactly on the stationary point when the function is quadratic. In our case, we use point - gradient/Hessian as a step to the center of the next area we'll sample for another regression. This process repeats until we can't optimize the function any further.
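
Here's a hedged one-dimensional sketch of one such step: sample a toy function around the current point, fit a quadratic to the samples by least squares, read the gradient and Hessian off the fit, and take point - gradient/Hessian. The function, sample count, and range below are all made up for illustration, not the project's settings:

```python
import numpy as np

def f(x):
    return (x - 3.0) ** 2 + 0.5 * (x - 3.0) ** 4  # toy function, minimum at x = 3

rng = np.random.default_rng(0)
x0, half_range = 1.0, 1.0                    # current point and sampling range

# Sample offsets around the current point and regress a quadratic on them.
offsets = rng.uniform(-half_range, half_range, 50)
a, b, c = np.polyfit(offsets, f(x0 + offsets), 2)

gradient = b          # first derivative of the fit at the current point
hessian = 2 * a       # second derivative of the fit

x1 = x0 - gradient / hessian                 # point - gradient/hessian
```

Because the fit is done in offset coordinates (centered on the current point), the gradient and Hessian fall straight out of the fitted coefficients.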

Now, since our function is a lot more complex than a quadratic, and since we're estimating the Hessian and gradient through regression, the calculated step isn't necessarily the best one we can take. This leads us to the different types of searches we're testing right now:

UR - The step taken is the same, but we update the range that parameters are generated in for the next regression on each step. The new range is simply the new point +/- the step just calculated. As we converge to an optimum this method is rather effective (and simple): the smaller the step, the smaller our range, and the more accurate our Hessian/gradient calculations.
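
A minimal sketch of what the UR update might look like in one dimension (hypothetical function and settings; a small floor on the range is my addition, for numerical stability once the steps get tiny):

```python
import numpy as np

def f(x):
    return (x - 3.0) ** 2 + 0.5 * (x - 3.0) ** 4  # toy function, minimum at x = 3

rng = np.random.default_rng(1)
x, half_range = 1.0, 1.0

for _ in range(15):
    # Regress a quadratic on samples drawn from the current range.
    offsets = rng.uniform(-half_range, half_range, 50)
    a, b, c = np.polyfit(offsets, f(x + offsets), 2)
    step = b / (2 * a)                     # gradient / hessian of the local fit
    x = x - step
    half_range = max(abs(step), 1e-3)      # UR: new range = new point +/- step
                                           # (floor added for stability; assumption)
```

As the search homes in on the minimum the steps shrink, so the sampling range shrinks with them and the local quadratic fit gets more accurate.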

ER - This version also takes the same step, but instead of updating the range based on the step, we calculate error values for each parameter in our function (based on the regression), and the next range used spans 1-2 standard deviations of those estimates. We're interested to see if this lets us converge quicker and get better Hessian/gradient estimates.
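
One way the ER idea could look in one dimension. This is a sketch under assumptions: the "error value" here is a delta-method standard error of the estimated step, computed from the regression's coefficient covariance; the project's actual error calculation may differ:

```python
import numpy as np

def f(x):
    return (x - 3.0) ** 2 + 0.5 * (x - 3.0) ** 4  # toy function, minimum at x = 3

rng = np.random.default_rng(2)
x0, half_range = 1.0, 1.0

offsets = rng.uniform(-half_range, half_range, 60)
# cov=True also returns the covariance matrix of the fitted coefficients.
(a, b, c), cov = np.polyfit(offsets, f(x0 + offsets), 2, cov=True)

step = b / (2 * a)               # gradient / hessian, as before
x1 = x0 - step

# Delta method: propagate coefficient uncertainty to the step u = b/(2a).
# du/da = -b/(2a^2), du/db = 1/(2a), du/dc = 0.
g = np.array([-b / (2 * a**2), 1.0 / (2 * a), 0.0])
sigma = np.sqrt(g @ cov @ g)     # standard deviation of the estimated step

half_range = 2.0 * sigma         # ER: next range spans ~2 standard deviations
```

The appeal is that the range now reflects how well-determined the fit actually is, rather than just how far the last step moved.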

URLS - After calculating the Hessian/gradient and using them to calculate the step for our search, instead of just taking the step we use it as a direction to search along. In this case, we generate points along a line from the current point in the direction of the calculated step. Using these we do another (simpler, one-dimensional) regression to find the minimum along this line. That minimum gives a more accurate version of the gradient/Hessian step. Following this, the range is updated as in UR.
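
A sketch of the URLS line search on a toy two-dimensional function (the step direction d is assumed to have come from a prior gradient/Hessian estimate; all names and values are illustrative):

```python
import numpy as np

def f(p):
    x, y = p
    return (x - 1.0) ** 2 + 2.0 * (y + 0.5) ** 2 + 0.1 * x * y  # toy function

p = np.array([3.0, 2.0])        # current point in the search
d = np.array([-1.0, -1.5])      # step from the gradient/Hessian estimate

ts = np.linspace(0.0, 2.0, 25)  # multiples of the step, sampled along the line
ys = np.array([f(p + t * d) for t in ts])

a, b, c = np.polyfit(ts, ys, 2) # the simpler, one-dimensional regression
t_best = -b / (2.0 * a)         # minimum of the fitted parabola
p_next = p + t_best * d         # refined step along the search direction
```

Rather than trusting the raw step (t = 1), the line search picks the multiple of the step that actually minimizes the fitted one-dimensional profile.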

ERLS - This performs a line search the same as URLS, except the range is updated as in ER.


©2024 Astroinformatics Group