Message boards : Number crunching : Hopefully...

cwhyl
Joined: 11 Nov 07
Posts: 41
Credit: 1,000,181
RAC: 0
Message 4131 - Posted: 15 Jul 2008, 7:38:02 UTC
Last modified: 15 Jul 2008, 8:01:44 UTC

Got a gs_3730382 completed and it took 2h 38min; the gs_3720282 took 5h.
Both fine with me.
AMD X2 2.8gig/Linux.
Westsail and *Pyxey*
Joined: 22 Mar 08
Posts: 65
Credit: 15,715,071
RAC: 0
Message 4135 - Posted: 15 Jul 2008, 8:09:21 UTC

Finally snagged a 382 series from the host I quoted above before it purged.
So just to sum up:
6,206 sec for a 382, 139.29 credits granted
11,658 sec for a 282, 260 credits granted
194 sec for a 182, 4.06 credits granted

These are all from the same host. It is an X2 5000+ at 3.2g running 6.2.11 under Linux 64.
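Just for fun, here's a quick back-of-the-envelope check (my own rough Python, using only the numbers above, nothing official) of what those work out to per hour:

# Runtime (seconds) and credit granted for one WU of each series,
# as reported above for this host.
results = {
    "182": (194, 4.06),
    "282": (11658, 260.0),
    "382": (6206, 139.29),
}

for series, (seconds, credits) in results.items():
    print(f"{series}: {credits / seconds * 3600:.1f} credits/hour")

# Prints roughly 75.3, 80.3 and 80.8 credits/hour, so the long WUs
# pay at about the same hourly rate as the short ones on this box.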
Peace and happy crunching!
Lord Tedric
Joined: 9 Nov 07
Posts: 151
Credit: 8,391,608
RAC: 0
Message 4136 - Posted: 15 Jul 2008, 10:55:59 UTC
Last modified: 15 Jul 2008, 11:01:25 UTC

wow

It took 4 hr 38 min 05 sec (16,685 sec) to reach the 50% mark
and just 4 hr 38 min 26 sec (16,706 sec) to complete:
a difference of only 21 sec for the second 50%.

AMD x2 Dual Core 6000+
Win XP SP3 32bit
2GB DDR2 667 MHz RAM
Van Fanel
Joined: 27 Mar 08
Posts: 5
Credit: 2,232,048
RAC: 0
Message 4137 - Posted: 15 Jul 2008, 11:15:49 UTC

Just 2 cents:
If the crunching time is at least 3 hours, then a 20-minute wait before fetching more WUs doesn't make much sense, does it?
I mean: there is a 20-WU limit, and if the machine asks for more, the server tells it to sit back and wait 20 minutes before asking again. If the machine already has 20 WUs, there is no way that after only 20 minutes (each WU takes over 3 hours...) it will be able to download anything new.
So, in order to cut down on pointless connections to the servers, why not increase that wait to, say... 3 hours? :D
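To put a rough number on how many of those connections are wasted (just a sketch using the figures mentioned in this thread, nothing measured):

# Assumptions from this thread: 20-WU limit, 20-minute deferral
# between work requests, WUs of at least 3 hours, a dual-core host.
wu_runtime_min = 3 * 60   # minutes per WU, at least
backoff_min = 20          # server-imposed wait between requests
cores = 2

# With a full cache, a slot only opens when a WU finishes,
# i.e. roughly every (runtime / cores) minutes on average.
slot_opens_every = wu_runtime_min / cores        # ~90 minutes

requests_per_slot = slot_opens_every / backoff_min
print(f"~{requests_per_slot:.1f} scheduler requests per freed slot")
# ~4.5 requests, of which only the last one can actually get work;
# a deferral closer to the WU runtime would cut most of the rest.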
voltron
Joined: 30 Mar 08
Posts: 50
Credit: 11,593,755
RAC: 0
Message 4139 - Posted: 15 Jul 2008, 14:19:22 UTC - in response to Message 4137.  

Just 2 cents:
If the crunching time is at least 3 hours, then a 20-minute wait before fetching more WUs doesn't make much sense, does it?

Econo fix is quick and dirty. Legacy code is to be treated as a "quaint" tourist attraction. Consider it a free memento of an optimistic past. Shoestrings are cheap and often break.

Voltron
Nathan
Project scientist
Joined: 4 Oct 07
Posts: 43
Credit: 53,898
RAC: 0
Message 4144 - Posted: 15 Jul 2008, 16:45:29 UTC

Again, thanks for all the feedback guys. I do apologize for the suddenness of the longer WUs, but weighing all the problems with the server against the lack of work, I figured it was worth the risk. As has been mentioned, the server is performing much better than before; it was a big problem when you couldn't even connect to the site because of how bogged down it was.

As was commented on by someone above, we seem to have found a happy place with the length of the WUs now, so there shouldn't be any more big jumps in runtime like this.

To address a couple of the comments above: yes, the science will benefit; we will get increased accuracy in all of our numerical calculations.

Yes, slower machines still contribute a lot to the project. Some tests have shown that even the slowest machines have a relatively high chance of their work bettering the population, thereby improving the results.

The quickness of the WUs past the 50% mark is, I believe, a phenomenon associated with the code: the first 50% is the calculation of the integral, and the other half of the WU is the calculation of the actual likelihood given the data (the final result of your crunching). The catch is that there are about 100,000,000 calculations in the integral now and only about 100,000 for the data, so you can see where the speed difference comes from. Again, this is just my theory given what I've seen of the structure of the code, but it seems to fit.
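In rough sketch form (this is just my reading of that theory, not the actual application code):

# Toy model: the progress bar gives half its range to each phase,
# even though the integral does ~1000x more calculations than the
# likelihood pass over the data.
INTEGRAL_CALCS = 100_000_000    # ~calculations in the integral
LIKELIHOOD_CALCS = 100_000      # ~calculations over the data

def reported_progress(integral_done, likelihood_done):
    """First half of the bar = integral, second half = likelihood."""
    return 0.5 * integral_done / INTEGRAL_CALCS \
         + 0.5 * likelihood_done / LIKELIHOOD_CALCS

print(f"bar when the integral finishes: {reported_progress(INTEGRAL_CALCS, 0):.0%}")
second_half = LIKELIHOOD_CALCS / (INTEGRAL_CALCS + LIKELIHOOD_CALCS)
print(f"work left at that point: ~{second_half:.2%}")
# Prints 50% and ~0.10%, which is why the last half of the bar
# goes by in seconds, as reported earlier in this thread.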

We'll get the delays and deadlines fixed as soon as I'm able to talk to Travis, probably tomorrow (Wednesday).

Keep up the good work guys!

~Nate~
Alinator
Joined: 7 Jun 08
Posts: 464
Credit: 56,639,936
RAC: 0
Message 4148 - Posted: 15 Jul 2008, 18:05:58 UTC - in response to Message 4144.  
Last modified: 15 Jul 2008, 18:08:07 UTC

Again, thanks for all the feedback guys.

<snip>

Keep up the good work guys!

LOL...

No problemo! ;-)

I was relieved to hear that even slugs (like mine) do perform useful science here at MW. I was kind of concerned, since I only recently wandered over and haven't had time to fully absorb how the modeling and simulation actually work, about whether antiques were helping or hurting the project in the grand scheme of things. The part I liked about MW is that it supports PPC Macs running something older than Leopard (Tiger? Macs aren't my strong suit), and the runtimes were still in a range older CPUs can handle while letting you run more than one project smoothly.

One thing to think about: given today's range of host capability, the 5-day deadline on the old work made the project fairly loose in terms of deadline tightness. I'm pretty sure this had a lot to do with the constant pounding on the backend for new work; even my old-timers were pestering the project far more than they really needed to. I would suggest trying something like 2 weeks for the new deadline as a starting point. That should tighten things significantly for the fast hosts, but not make it so tight that MW hogs the machine on slow hosts running more than one project with even resource splits.
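To put a rough number on that (assumed figures only, based on the runtimes reported earlier in this thread):

# A slow, old host with an even split between MW and one other project.
wu_runtime_h = 4.6    # hours per new-style WU on an older CPU
cache_limit = 20      # WUs a host may have in progress
cores = 1             # a single-core "slug"
mw_share = 0.5        # even resource split with a second project

crunch_h = cache_limit * wu_runtime_h / cores    # ~92 h of CPU time
wall_clock_days = crunch_h / mw_share / 24       # ~7.7 days

print(f"a full cache drains in about {wall_clock_days:.1f} days")
# Misses a 5-day deadline, but fits comfortably inside 2 weeks.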

As far as the credit rate goes, MW is pretty new, and the natural evolution of host speed caused significant 'credit deflation' in the early days of BOINC, when Benchmark-Time (BM-T) was the primary scoring methodology. I'm pretty sure this is the root cause of a lot of the credit 'soapboxing' that goes on from time to time.

So in my case, I don't have a problem when the newer projects set their rates somewhat higher than SAH or EAH, for example. I've argued over there that those projects should have raised their rates back to what they were when they first went into production, rather than de-rating to match the projects that hadn't yet implemented a scoring system more consistent than BM-T.

Alinator
Idefix
Joined: 19 Apr 08
Posts: 7
Credit: 3,067
RAC: 0
Message 4155 - Posted: 15 Jul 2008, 23:11:11 UTC - in response to Message 4119.  

Hi,

Now to the immediate problem of short deadlines and downloading too much work... remember, everyone, when you think of dumping some: they have to be resent by the server... then crunched... which gives you even more time to report before your resend does... and yours gets credit if reported first... so those teetering on the edge of deadlines should get you credit anyway :) Don't be too quick to abort!

Some people are playing a similar game every Wednesday and Saturday. It's called Lotto ;-)

Regards,
Carsten