Message boards :
Number crunching :
new workunit queue size (6)
Send message Joined: 22 Jan 09 Posts: 35 Credit: 46,731,190 RAC: 0
> Should be even better now that the workunits should take around twice as long to crunch.
Wonderful, we've solved the problem of WU availability, but now you've effectively cut WU credits almost in half (for me) yet again!
Send message Joined: 19 Feb 09 Posts: 33 Credit: 1,134,826 RAC: 0
> Should be even better now that the workunits should take around twice as long to crunch.
Just because they take longer to crunch doesn't mean that they don't give more credits. :D
Send message Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
Unless it's the same credits as the previous amount of work.

Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it.
Send message Joined: 20 Mar 08 Posts: 108 Credit: 2,607,924,860 RAC: 0
> Should be even better now that the workunits should take around twice as long to crunch.
No. The longer WUs are granted more credit each.
Send message Joined: 22 Jan 09 Posts: 35 Credit: 46,731,190 RAC: 0
> but now you've effectively cut WU credits almost in half (for me) yet again!
Apologies! The problem seems to have been rectified now, but earlier my WUs were running twice as long and still getting the same amount of credits.
Send message Joined: 20 Mar 08 Posts: 108 Credit: 2,607,924,860 RAC: 0
> I now have work on all computers, including the GPU, where I was getting practically nothing.
I think the increasing back-off times are built into the BOINC core client, so there's nothing anyone at MW can do about it. Berkeley's idea behind this design was to ease the burden on project servers after they've been offline for a while (as seems to be common over at SAH), making clients contact them over a longer period of time instead of all at once.
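The increasing back-off described above can be sketched as exponential back-off with randomization. This is a minimal toy illustration of the idea, not the BOINC client's actual code; the function name and the delay constants are assumptions chosen for readability:

```python
import random

def next_backoff(n_failures, min_delay=60.0, max_delay=4 * 3600.0):
    """Illustrative exponential back-off with randomization.

    The retry delay doubles with each consecutive failed scheduler
    request, capped at max_delay. The constants here are made up,
    not the BOINC client's real values.
    """
    cap = min(max_delay, min_delay * (2 ** n_failures))
    # The random component spreads clients out over the retry window,
    # so a server coming back online isn't hit by everyone at once.
    return random.uniform(min_delay, cap)

# After repeated failures the retry window grows wide, so a fleet of
# clients reconnects gradually over hours rather than simultaneously.
delays = [next_backoff(n) for n in range(8)]
```

The randomized window, rather than a fixed doubled delay, is what prevents the "thundering herd" the post describes when a project server returns after downtime.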
Send message Joined: 4 Aug 08 Posts: 46 Credit: 8,255,900 RAC: 0
Travis, thanks for keeping my beasties fed. -jim
Send message Joined: 24 Dec 07 Posts: 1947 Credit: 240,884,648 RAC: 0
My quad appears to be much happier now, as do my two work C2Ds and my old P4. I wouldn't have thought that decreasing the cached WUs per CPU would work, but this, together with extending the crunching time of the WUs, appears to have done the job! Well done Travis.
Send message Joined: 20 Mar 08 Posts: 108 Credit: 2,607,924,860 RAC: 0
I second the two previous posts. Thanks, Travis (also for the work you've done *before* visible success).
Send message Joined: 18 Feb 09 Posts: 158 Credit: 110,699,054 RAC: 0
My i7 is just fine, but my Core2Quad running on GPU is almost never running at its full potential. I usually get only 10 WUs when a request goes in; it crunches 8 in about 5-10 minutes, starts up two, gets a few more, maybe 8 more, sometimes 4, sometimes none. At any rate, quite often today I've seen my system crunching 6 WUs with none complete, so it made a request and got nothing, or perhaps only 6. But I'm pretty positive it's not had 24 WUs at any time today.
Send message Joined: 12 Oct 07 Posts: 77 Credit: 404,471,187 RAC: 0
All running well here, thanks Travis.
Send message Joined: 1 Sep 08 Posts: 520 Credit: 302,524,931 RAC: 2
OK -- what I see is that this works OK *if* the only application running is MilkyWay -- I'm doing that as a test on a batch of computers, and it does work. However, if one is doing multiple projects (as I normally do), then the small cache tends to result in other projects *with lower resource shares* actually getting a larger proportion of CPU cycles, because they have larger caches with similar due dates (examples for me would be Spinhenge and POEM, but it also seems to apply to a lesser degree with SETI, Einstein, and Rosetta). The only project that stays reasonable is Climate -- but that's because the due dates are so far out. I now have work on all computers, including the GPU, where I was getting practically nothing.
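The effect described above, where a project with a small cache loses CPU time to lower-share projects with bigger caches, can be illustrated with a toy deadline-driven scheduler. This is a hypothetical simplification, not the BOINC client's actual work-fetch or debt logic; the projects, task counts, and deadlines are invented:

```python
from dataclasses import dataclass

@dataclass
class Task:
    project: str
    hours: float     # CPU hours the task needs
    deadline: float  # hours from now

# Hypothetical workload: project A has the high resource share but only
# one cached task; project B has a low share but six queued tasks with
# comparable deadlines.
tasks = [Task("A", 1.0, 48.0)] + [Task("B", 1.0, 50.0 + i) for i in range(6)]

def edf_cpu_hours(tasks):
    """Earliest-deadline-first: total CPU hours each project receives
    when tasks are simply run in deadline order."""
    totals = {}
    for t in sorted(tasks, key=lambda t: t.deadline):
        totals[t.project] = totals.get(t.project, 0.0) + t.hours
    return totals

hours = edf_cpu_hours(tasks)
```

Under pure deadline pressure, project B ends up with six times the CPU hours of project A regardless of resource share, which mirrors the behavior the post reports with a small MilkyWay cache.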
Send message Joined: 26 Jul 08 Posts: 627 Credit: 94,940,203 RAC: 0
> Hey now, some of the GPU apps are taking a whole 3 seconds.
Not really. But the latest test app does not use a full CPU core to poll the GPU all the time. That lowers the CPU load quite a bit, and therefore the reported time, which is actually the CPU time. Ice has already posted one result that took him a mere 0.96 CPU seconds to crunch (but a bit more on the GPU, of course). It is a bit hard to get meaningful timing for a GPU app. If only one WU ran at a time, one could report the wall-clock time, but my app tries to overlap several WUs to increase the GPU load, so the wall-clock time is not reliable either (you can have a look in the stderr.txt output). Actually, I can let the app report any time you want, so if you have a wish... If you want, we can skew the cpcs values (they are skewed anyway with GPU apps) on the stats sites with some creative timing ;)
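The distinction above, reported CPU time versus wall-clock time, is easy to demonstrate. This is a toy illustration (not the MilkyWay GPU app): the sleep stands in for time the CPU thread spends waiting on the GPU instead of busy-polling it, which is exactly why the reported CPU seconds can drop below one while the WU runs much longer:

```python
import time

wall_start = time.perf_counter()   # wall-clock reference
cpu_start = time.process_time()    # CPU time used by this process

time.sleep(0.2)                    # stands in for "waiting on the GPU"
sum(i * i for i in range(100_000)) # a little genuine CPU work

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start
# wall includes the 0.2 s wait; cpu counts only the loop, so it is
# far smaller -- analogous to a 0.96 CPU-second result from a WU that
# kept the GPU busy much longer.
```

If the app instead polled the GPU in a tight loop, the sleep would become busy work and CPU time would climb back toward wall-clock time, which is the full-core load the older app exhibited.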
Send message Joined: 6 Apr 08 Posts: 2018 Credit: 100,142,856 RAC: 0
> Hey now, some of the GPU apps are taking a whole 3 seconds.
Yes, of course. I have a stopwatch on my mobile, not too accurate for timing, but prior to the increased-length WUs since yesterday, they were taking around 8 seconds GMT time. As for the sub-second you quoted above, I caught one even quicker ;)
Send message Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0
Thank you Travis for the other requested changes. It looks like the changes you and Dave have made are keeping things going smoothly - at least from this end they seem to be. Crunchers are fully fed and have some morsels waiting to gnaw on. I wonder if any of the GPU crunchers (nice RACs) are equally resplendent with the WU availability?
©2024 Astroinformatics Group