new workunit queue size (6)

Profile magyarficko

Joined: 22 Jan 09
Posts: 35
Credit: 46,731,190
RAC: 0
Message 13396 - Posted: 28 Feb 2009, 21:24:45 UTC - in response to Message 13356.  

Should be even better now that the workunits should take around twice as long to crunch.


Wonderful, we've solved the problem of WU availability, but now you've effectively cut WU credits almost in half (for me) yet again!

ID: 13396
[B^S] Beremat

Joined: 19 Feb 09
Posts: 33
Credit: 1,134,826
RAC: 0
Message 13397 - Posted: 28 Feb 2009, 21:35:54 UTC - in response to Message 13396.  

Should be even better now that the workunits should take around twice as long to crunch.


Wonderful, we've solved the problem of WU availability, but now you've effectively cut WU credits almost in half (for me) yet again!


Just because they take longer to crunch doesn't mean they don't give more credit. :D

ID: 13397
Profile banditwolf
Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 13398 - Posted: 28 Feb 2009, 21:37:00 UTC - in response to Message 13397.  

Unless it's the same credits as the previous amount of work.
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
ID: 13398
Brickhead
Joined: 20 Mar 08
Posts: 108
Credit: 2,607,924,860
RAC: 0
Message 13399 - Posted: 28 Feb 2009, 21:37:05 UTC - in response to Message 13396.  

Should be even better now that the workunits should take around twice as long to crunch.


Wonderful, we've solved the problem of WU availability, but now you've effectively cut WU credits almost in half (for me) yet again!


No. The longer WUs are granted more credit each.
ID: 13399
Profile magyarficko

Joined: 22 Jan 09
Posts: 35
Credit: 46,731,190
RAC: 0
Message 13400 - Posted: 28 Feb 2009, 21:38:18 UTC - in response to Message 13396.  

but now you've effectively cut WU credits almost in half (for me) yet again!


Apologies! The problem seems to have been rectified now, but earlier my WUs were running twice as long and still getting the same amount of credit.
ID: 13400
Brickhead
Joined: 20 Mar 08
Posts: 108
Credit: 2,607,924,860
RAC: 0
Message 13401 - Posted: 28 Feb 2009, 21:42:10 UTC - in response to Message 13341.  

I now have work on all computers, including the GPU, where I was getting practically nothing.


Server-side, it looks like the change helped work availability quite a bit.



The only problem - and it's a big one - is when you get 'got 0 new tasks' multiple times in a row (which does still happen), run dry, and the reconnect time goes up to 2 or 3 hours, so you have nothing to crunch until then... any way to change/fix that?


I think the increasing back-off times are built into the BOINC core client, so there's nothing anyone at MW can do about it. Berkeley's idea behind this design was to ease the load on project servers that come back after being offline for a while (as seems to be common over at SAH) by spreading client reconnects over a longer period instead of having them all hit at once.
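
For the curious, the mechanism is plain randomized exponential back-off. Here's a minimal standalone C++ sketch of the idea - the constants and the function name are made up for illustration; this is not the actual BOINC client code:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

// Illustrative randomized exponential back-off between failed
// scheduler requests. All constants here are assumptions.
double next_backoff_seconds(int consecutive_failures) {
    const double min_delay = 60.0;          // assumed floor: 1 minute
    const double max_delay = 4.0 * 3600.0;  // assumed ceiling: 4 hours
    // The delay doubles with every consecutive failure, up to the cap...
    double delay = std::min(min_delay * std::pow(2.0, consecutive_failures),
                            max_delay);
    // ...and gets a random factor so thousands of clients don't all hit
    // a recovering server at the same instant.
    static std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<double> jitter(0.5, 1.0);
    return delay * jitter(rng);
}

int main() {
    for (int fails = 0; fails <= 7; ++fails)
        std::printf("after %d failed requests: retry in ~%.0f s\n",
                    fails, next_backoff_seconds(fails));
}

Run it and you can watch the retry window stretch from about a minute out to the multi-hour waits described above.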
ID: 13401
Profile caferace
Joined: 4 Aug 08
Posts: 46
Credit: 8,255,900
RAC: 0
Message 13402 - Posted: 28 Feb 2009, 21:52:09 UTC

Travis, thanks for keeping my beasties fed.

-jim
ID: 13402
Profile The Gas Giant
Joined: 24 Dec 07
Posts: 1947
Credit: 240,884,648
RAC: 0
Message 13417 - Posted: 28 Feb 2009, 23:07:33 UTC

My quady appears to be much happier now, as do my two work C2Ds and my old P4. I wouldn't have thought that decreasing the cached WUs per CPU would work, but that plus extending the crunching time of the WUs appears to have done the job! Well done, Travis.
ID: 13417
Brickhead
Joined: 20 Mar 08
Posts: 108
Credit: 2,607,924,860
RAC: 0
Message 13420 - Posted: 28 Feb 2009, 23:11:42 UTC

I second the two previous posts. Thanks, Travis (also for the work you've done *before* visible success).
ID: 13420
Profile Zanth
Joined: 18 Feb 09
Posts: 158
Credit: 110,699,054
RAC: 0
Message 13427 - Posted: 28 Feb 2009, 23:34:25 UTC

My i7 is just fine, but my Core2Quad running the GPU app is almost never at its full potential. I usually get only 10 WUs when a request goes in; it crunches 8 in about 5-10 minutes, starts up two, then gets a few more - maybe 8 more, sometimes 4, sometimes none. At any rate, quite often today I've seen my system crunching 6 WUs and having none complete, so it made a request and just got nothing, or perhaps only 6. But I'm pretty positive it's not had 24 WUs at any time today.
ID: 13427
Temujin

Joined: 12 Oct 07
Posts: 77
Credit: 404,471,187
RAC: 0
Message 13429 - Posted: 28 Feb 2009, 23:49:09 UTC - in response to Message 13427.  

All running well here, thanks Travis
ID: 13429
BarryAZ

Joined: 1 Sep 08
Posts: 520
Credit: 302,524,931
RAC: 2
Message 13437 - Posted: 1 Mar 2009, 0:49:08 UTC - in response to Message 13334.  

OK -- what I see is that this works OK *if* the only application running is MilkyWay -- I'm doing that as a test on a batch of computers, and it does work. However, if one is running multiple projects (as I normally do), the small cache tends to result in other projects *with lower resource shares* actually getting a larger proportion of CPU cycles, because they have larger caches with similar due dates (Spinhenge and POEM are the main examples for me, but it also seems to apply to a lesser degree with SETI, Einstein, and Rosetta). The only project that stays reasonable is Climate -- but that's because its due dates are so far out.

I now have work on all computers, including the GPU, where I was getting practically nothing.


Server-side, it looks like the change helped work availability quite a bit.


ID: 13437
Cluster Physik

Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 13440 - Posted: 1 Mar 2009, 1:05:25 UTC - in response to Message 13382.  
Last modified: 1 Mar 2009, 1:16:37 UTC

Hey now, some of the GPU apps are taking a whole 3 seconds.

Not really. But the latest test app does not use a full CPU core to poll the GPU all the time. That lowers the CPU load quite a bit, and with it the reported time, which is actually the CPU time. Ice has already posted one result that took a mere 0.96 CPU seconds to crunch (but a bit more on the GPU, of course).

It is a bit hard to get meaningful timing for a GPU app. If only one WU ran at a time, one could report the wall clock time, but my app tries to overlap several WUs to increase the GPU load, so the wall clock time is not reliable either (you can have a look at the stderr.txt output). Actually, I can let the app report any time you want, so if you have a wish... If you want, we can skew the cpcs values (they are skewed anyway with GPU apps) on the stats sites with some creative timing ;)
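
To make the CPU-time-vs-wall-clock point concrete, here is a tiny standalone C++ toy (not my app, just the principle): if the host thread sleeps while the GPU crunches instead of busy-polling, the wall clock keeps running but the process accumulates almost no CPU time.

#include <chrono>
#include <cstdio>
#include <ctime>
#include <thread>

int main() {
    std::clock_t cpu_start = std::clock();               // process CPU time
    auto wall_start = std::chrono::steady_clock::now();  // wall clock

    // Stand-in for "the GPU is crunching and the host thread just waits".
    std::this_thread::sleep_for(std::chrono::seconds(5));

    double cpu_s  = double(std::clock() - cpu_start) / CLOCKS_PER_SEC;
    double wall_s = std::chrono::duration<double>(
                        std::chrono::steady_clock::now() - wall_start).count();

    // On Linux this prints something like: cpu 0.00 s, wall 5.00 s
    // (on Windows, clock() tracks wall time instead, so the gap vanishes).
    std::printf("cpu %.2f s, wall %.2f s\n", cpu_s, wall_s);
}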
ID: 13440
Profile GalaxyIce
Joined: 6 Apr 08
Posts: 2018
Credit: 100,142,856
RAC: 0
Message 13474 - Posted: 1 Mar 2009, 14:22:59 UTC - in response to Message 13440.  
Last modified: 1 Mar 2009, 14:23:38 UTC

Hey now, some of the GPU apps are taking a whole 3 seconds.

Not really.
...<snip>...
Ice has posted already one result that took him mere 0.96 CPU seconds to crunch (but a bit more on the GPU of course).
...<snip>...
If you want we can skew the cpcs values (they are skewed anyway with GPU apps) on the stats sites with some creative timing ;)

Yes, of course. I have a stopwatch on my mobile - not too accurate for timing - but prior to the increased-length WUs since yesterday, they were taking around 8 seconds GMT time.

As for the sub-second you quoted above - I caught one even quicker ;)

CPU: Intel(R) Pentium(R) D CPU 2.80GHz (2 cores/threads) 2.79297 GHz (347ms)

CAL Runtime: 1.3.145
Found 1 CAL device

Device 0: ATI Radeon HD 4800 (RV770) 512 MB local RAM (remote 28 MB cached + 512 MB uncached)
GPU core clock: 680 MHz, memory clock: 750 MHz
800 shader units organized in 10 SIMDs with 16 VLIW units (5-issue)
supporting double precision

0 WUs already running on GPU 0
Starting WU on GPU 0
Calculated about 1.85078e+012 floatingpoint ops on GPU, 6.18221e+007 on FPU.
Calculated about 8.03964e+008 floatingpoint ops on FPU (stars).
WU completed. It took 0.953125 seconds CPU time and 25.528 seconds wall clock time @ 2.79307 GHz.
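
Taking that output at face value, the sustained throughput works out to about 1.85078e+012 ops / 25.528 s ≈ 72.5 GFLOP/s on the GPU - the 0.95 s of CPU time says nothing about how hard the card itself was working.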


ID: 13474
John Clark

Joined: 4 Oct 08
Posts: 1734
Credit: 64,228,409
RAC: 0
Message 13635 - Posted: 2 Mar 2009, 15:38:36 UTC
Last modified: 2 Mar 2009, 15:43:48 UTC

Thank you, Travis, for the other requested changes.

It looks like the changes you and Dave have made are keeping things going smoothly - at least from this end they seem to be. Crunchers are fully fed and have some morsels waiting to gnaw on.

I wonder if any of the GPU crunchers (nice RACs) are equally resplendent with the WU availability?
ID: 13635