Message boards :
Number crunching :
No work
Send message Joined: 27 Aug 07 Posts: 647 Credit: 27,592,547 RAC: 0 |
So it's 12 WUs/core/day and not 12 WUs/core at any given time (as in a 12 WU/core cache)? It's 12 WUs/core at one go. The maximum daily WU quota per CPU is 5,000/day. ;-) Lovely greetings, Cori |
Send message Joined: 11 Mar 08 Posts: 10 Credit: 10,647,326 RAC: 0 |
Can't help agreeing, this 12 WU/core limit is a pain in the butt - why the limit? S@H allows a 10-day buffer, even given the number of regular SNAFUs they get. Even raising it to just 30/core would be more manageable - until you get the download scheduler problem sorted, we the grunts (the poor sods on the receiving end of Travis' whims) are gonna continue to run out of work, dropping RAC, etc., which is just gonna cause more frustration! How many more crunchers do you want to leave the project in disgust?? Get it sorted, PLEASE!! |
Send message Joined: 3 Jan 09 Posts: 270 Credit: 124,346 RAC: 0 |
So it's 12 WUs/core/day and not 12 WUs/core at any given time (as in a 12 WU/core cache)? OK, that's what I thought, I just wanted to be sure. |
Send message Joined: 6 Apr 08 Posts: 2018 Credit: 100,142,856 RAC: 0 |
Can't help agreeing, this 12 WU/core limit is a pain in the butt - why the limit? S@H allows a 10-day buffer, even given the number of regular SNAFUs they get. Even raising it to just 30/core would be more manageable - until you get the download scheduler problem sorted, we the grunts (the poor sods on the receiving end of Travis' whims) are gonna continue to run out of work, dropping RAC, etc., which is just gonna cause more frustration! How many more crunchers do you want to leave the project in disgust?? Get it sorted, PLEASE!! I think you have to remember that this project is still in an alpha stage and WUs are issued according to the needs of the project. I'm sure Travis is aware of what you are asking, but he is working within what he sees as best for the project, in conjunction with the other scientists he is working with. He is trying to resolve the problems, as he has recently said. |
Send message Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0 |
I know it is classist... but could larger caches be allowed for the faster systems? I mean, I should be able to hold 48 tasks (4 CPUs x 12) and I don't think I have seen that message even with the GPU running amok... or maybe that is the reason I don't see it? But I am constantly banging on the server to upload, download, report, or request tasks... I have little interest in caching days' worth of work, but at 14 seconds max per task even a 0.5-day cache would be far more than 48 tasks... |
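For scale, here's the back-of-the-envelope arithmetic behind that last point, using the 14-second and 4-core figures from the post (a rough sketch; real runtimes vary per task):

```python
# How many tasks would a 0.5-day cache actually hold for this host?
# Figures taken from the post above: 4 cores, ~14 s max per task.
cores = 4
seconds_per_task = 14
cache_days = 0.5
seconds_per_day = 86400

tasks_needed = cores * cache_days * seconds_per_day / seconds_per_task
print(int(tasks_needed))  # thousands of tasks, versus the 48-task (4 x 12) limit
```

So even a modest half-day buffer would need orders of magnitude more than the 12-per-core quota allows, which is why the client hammers the server continuously instead.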
Send message Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0 |
mine is now constantly requesting -0- getting -0-.... even after multiple updates... I don't think that increasing the ready to send will do much, if anything. There's work available it's just that for some reason the clients aren't getting it. |
Send message Joined: 27 Aug 07 Posts: 647 Credit: 27,592,547 RAC: 0 |
Today was the first time I saw all my 4 puters at home running idle. *eek* Now I've requested work manually until every box downloaded its cache of n CPUs x 12... I think I need a coffee break now! *grin* Lovely greetings, Cori |
Send message Joined: 6 Apr 08 Posts: 2018 Credit: 100,142,856 RAC: 0 |
|
Send message Joined: 26 Sep 08 Posts: 12 Credit: 1,228,382 RAC: 0 |
Proof that Murphy is alive and well and living at MW: the only time my machine seems to run out of work is overnight when I'm asleep and can't do anything about it. @Travis, I don't know much about the BOINC system at your end, BUT . . (have you noticed that there is always a BUT in this type of statement?) I seem to remember that Matt Lebofsky (one of the admins over at Seti) once explained that work going out actually comes from a "feeder" which holds a small cache of WUs drawn from the large store of those available for download. Even with plenty of work available, if this small feeder cache runs out, the scheduler cannot send more work. He likened it to a single cashier in a store running out of change even though there was plenty of cash in the store's vault. With the new very fast GPU clients asking for big gobs of work, is it possible that the feeder is not "looking" at the "vault" often enough to stay full, or perhaps that small feeder cache needs to be increased a bit if possible? (I'm not knocking GPU processing, just pointing at a possible cause of the new phenomenon.) I imagine that this along with much else I don't understand is already being looked into, but just a thought . . . |
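The cashier-and-vault idea can be sketched roughly like this. This is a toy model only, not actual BOINC server code, and all names (Feeder, vault, slots) are made up for illustration:

```python
import collections

class Feeder:
    """Toy model of a BOINC-style feeder: a small in-memory cache
    of workunits drawn from a much larger database 'vault'."""

    def __init__(self, vault, slots=100):
        self.vault = vault                # large store of ready-to-send WUs
        self.cache = collections.deque()  # the feeder's small cache
        self.slots = slots                # cache capacity

    def refill(self):
        # Periodically top the cache up from the vault.
        while len(self.cache) < self.slots and self.vault:
            self.cache.append(self.vault.pop())

    def dispense(self, n):
        # The scheduler can only hand out what is in the cache,
        # even if the vault still holds plenty of work.
        sent = []
        while n > 0 and self.cache:
            sent.append(self.cache.popleft())
            n -= 1
        return sent

vault = list(range(10_000))      # plenty of work "in the vault"
feeder = Feeder(vault, slots=100)
feeder.refill()
burst = feeder.dispense(500)     # one fast GPU host asking for a big gob of work
print(len(burst))                # only 100 sent: the cashier ran out of change
```

A single burst drains the whole cache, and every later request gets "0 new tasks" until the next refill, even though the vault is nearly full. That matches the symptom people are reporting: work shown as available, clients getting nothing.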
Send message Joined: 27 Aug 07 Posts: 647 Credit: 27,592,547 RAC: 0 |
Today was the first time I saw all my 4 puters at home running idle. *eek* *LOL* The fans were still running on the lowest level. ;-D Lovely greetings, Cori |
Send message Joined: 22 Dec 07 Posts: 51 Credit: 2,405,016 RAC: 0 |
Today was the first time I saw all my 4 puters at home running idle. *eek* AGAIN!! Got home from work, both boxes idle - the Server Status page says there's work to be had, but I keep hitting "Update" and nothing happens - so I download a batch of PG PSP sieve, and as they're D/Ling - BOOM!! - 28 WUs from MW - then another 24 for the quad and 6+6 for my old boat anchor. This is getting a little tiresome. I aborted the PG WUs and my boxes are back doing their thing. Are the GPU clients having the same problems? Seejay **Proud Member and Founder of BOINC Team Allprojectstats.com** |
Send message Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0 |
I don't think that increasing the ready to send will do much, if anything. There's work available, it's just that for some reason the clients aren't getting it. Well, it wouldn't be hard to try it, right? Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it. |
Send message Joined: 3 Jan 09 Posts: 270 Credit: 124,346 RAC: 0 |
So it seems that the work has pretty much disappeared? It's really irrelevant whether it's there and not being distributed, or not there at all; the result is the same--NO WORK!! So much for sneaking back and giving it another go; there's nothing to do!!! |
Send message Joined: 9 Sep 08 Posts: 96 Credit: 336,443,946 RAC: 0 |
AAAAAAAAAARRRRRRRRRRRRRRGGGGGGGGGGGGGGGGGGGGGGGG!!!!!!!!!!!!!!!!!!!!! I would say the work available is decreasing each day- whatever the problem is, it's getting worse... 8,10,12 manual attempts in a row all get '0 new tasks'... machines are regularly running dry... I know you are working on it :) |
Send message Joined: 22 Dec 07 Posts: 51 Credit: 2,405,016 RAC: 0 |
Looks like the scheduler just can't keep up with demand.... Seejay **Proud Member and Founder of BOINC Team Allprojectstats.com** |
Send message Joined: 27 Aug 07 Posts: 647 Credit: 27,592,547 RAC: 0 |
Don't know if that's the true solution but after installing the new recommended BOINC version 6.4.6 the manager tried to fetch work immediately again and I got several new WUs downloaded on all my boxes after just one request! ;-))) Lovely greetings, Cori |
Send message Joined: 16 Feb 09 Posts: 109 Credit: 11,089,510 RAC: 0 |
Thanks for the tip, Cori! :) Unfortunately I get an error when I try to install the new version. "Error reading setup initialization file" I've tried downloading it again but get the same message when I try to install it. :( |
Send message Joined: 27 Aug 07 Posts: 647 Credit: 27,592,547 RAC: 0 |
Thanks for the tip, Cori! :) Maybe you have to un-install the old BOINC version first? I'm not sure... EDIT: I've downloaded it from here: http://boinc.berkeley.edu/dl/?C=M;O=D and it worked for me. Anyway, the new recommended version did not help to keep my comps busy overnight. :-( When I woke up two comps were idling and not requesting work and two were about to run dry. Had to hit the "update" button several times before the work was flowing again. I really don't get it... *scratches head* Lovely greetings, Cori |
Send message Joined: 22 Dec 07 Posts: 51 Credit: 2,405,016 RAC: 0 |
Travis wrote:
Looks like following our advice works sometimes, eh Travis? ;^) Things are running much smoother now. gomeyer wrote: "or perhaps that small feeder cache needs to be increased a bit if possible?" And I wrote: "Those of us that are running CPU apps must each be sending hundreds of requests to the server every day, what with this cache limit of 12 WUs x core at any one time. Might this not get the scheduler a bit racked-off, and therefore lengthen its request times after X number of consecutive HTTP requests?" Thanks for listening!! Seejay **Proud Member and Founder of BOINC Team Allprojectstats.com** |
Send message Joined: 22 Feb 09 Posts: 20 Credit: 105,156,399 RAC: 0 |
The increase from 12 to 20 units in the queue is welcome, but for GPU crunchers it would need to be raised to 1000 or more... 20 units/core on a dual-core PC means a reserve of only about 4 minutes of work... |
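A quick check of where that "4 minutes" figure comes from. The per-task runtime is an assumption here (GPU tasks in this era were reported to finish in seconds, not minutes):

```python
# Back-of-the-envelope check on the "~4 minutes of work" claim.
quota_per_core = 20     # new per-core limit
cores = 2               # dual-core PC
seconds_per_task = 6    # ASSUMED average GPU runtime per task

total_tasks = quota_per_core * cores                 # 40 tasks queued
reserve_minutes = total_tasks * seconds_per_task / 60
print(reserve_minutes)  # roughly 4 minutes before the GPU runs dry
```

Under anything like these runtimes, a GPU host exhausts its whole queue in a few minutes and is back asking the scheduler for more, which is why a per-core quota sized for CPUs is far too small for GPUs.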
©2024 Astroinformatics Group