Message boards : Number crunching : (reached limit of 6 tasks)
Joined: 8 Sep 09 | Posts: 62 | Credit: 61,330,584 | RAC: 0
new to the project. searched for "reached limit" and "6 tasks", found nothing. have little time to dig further. is this a new user thing? if so, how long? thanks.
Joined: 1 Sep 08 | Posts: 520 | Credit: 302,524,931 | RAC: 15
Joined: 8 Sep 09 | Posts: 62 | Credit: 61,330,584 | RAC: 0
thank you for the quick reply. wow. that is not much even for this pokey 1.7 GHz SINGLE core.
Joined: 1 Sep 08 | Posts: 520 | Credit: 302,524,931 | RAC: 15
The thing is, if you had a quad core, that would be 24 workunits in the queue. You think this is bad now -- at least now (for CPUs) the process cycle is something like 45 minutes per workunit. There was a time last winter when the process time with the optimized application was less than 10 minutes. One thing which drove this big time (and still applies to a degree) is that the optimized application for double-precision ATI GPUs (38xx, 48xx) could run through a work unit in a minute or so. That meant a lot of processing power pushing work units very quickly and loading the server down with 'I want new work NOW'.
Joined: 24 Dec 07 | Posts: 1947 | Credit: 240,884,648 | RAC: 0
It'd be nice if the wu cache could also be based on the number of GPUs. I have 2 GPUs in my quad and the queue is drained very, very quickly. 40 wu's per GPU would be nice - that'd be a queue of less than an hour... LOL.
Joined: 1 Sep 08 | Posts: 520 | Credit: 302,524,931 | RAC: 15
I was under the impression that with the work specifically on GPU applications at the project (with the CUDA application), some sort of special handling for GPUs would be set up -- that is, GPU-specific work units and, with that, a GPU-specific cache limit. I realize the original thinking of a separate GPU-only project went by the wayside, but I thought there was still work going on to address the different citizenship status of GPU clients.
Joined: 19 Feb 09 | Posts: 29 | Credit: 5,452,255 | RAC: 0
hi
with the limit of 6 tasks per processor, is there a way to stop BOINC giving messages every 3 to 6 minutes like this?
14/09/2009 16:41:01 Milkyway@home Sending scheduler request: To fetch work.
14/09/2009 16:41:01 Milkyway@home Requesting new tasks
14/09/2009 16:41:11 Milkyway@home Scheduler request completed: got 0 new tasks
14/09/2009 16:41:11 Milkyway@home Message from server: No work sent
14/09/2009 16:41:11 Milkyway@home Message from server: (reached limit of 12 tasks)
thanks
Paul
Joined: 12 Apr 08 | Posts: 621 | Credit: 161,934,067 | RAC: 0
hi
To put it as simply as possible: No ... :)
Joined: 18 Apr 09 | Posts: 7 | Credit: 5,005,579 | RAC: 0
hi
There is a way - but you might not like it. Just switch to "no new tasks" for a few hours - long enough to allow one or two units to report. I find that this breaks the cycle of asking for more units when I already have my ration, and the problem goes away. At least, for a few days - although it does eventually come back. There is, of course, the new problem of remembering to switch back to "allow new tasks"!
Joined: 15 Jul 08 | Posts: 383 | Credit: 729,293,740 | RAC: 0
This limit is a problem with ATI cards. On a dual-processor machine only 10 minutes of work is cached. With the frequent website slowdowns & outages, that's not enough. I think this limit needs to be increased so the ATI cards aren't running dry and sitting idle too often.
Joined: 13 Jul 08 | Posts: 33 | Credit: 21,285,010 | RAC: 0
Make it 9 workunits per processor for the queue.
Joined: 2 Jan 08 | Posts: 79 | Credit: 365,471,675 | RAC: 0
Travis, we need a bigger cache.
©2024 Astroinformatics Group