Message boards : Number crunching : More Work !!! Please :)
Send message Joined: 5 Feb 08 Posts: 236 Credit: 49,648 RAC: 0 |
Things have been flowing nicely since the db purge :)

20 in total... or 20 per CPU? Because it should be 20 per CPU. I think we have it capped at 50 total right now, which is easily changeable.

Dave Przybylo
MilkyWay@home Developer
Department of Computer Science
Rensselaer Polytechnic Institute
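For context, the cap being discussed is the scheduler's max_wus_in_progress setting in the BOINC project's config.xml. A minimal sketch of what that entry looks like (the value shown is illustrative, not necessarily the project's actual setting):

    <config>
        <!-- illustrative value: limits how many results one host may have in progress at once -->
        <max_wus_in_progress>20</max_wus_in_progress>
    </config>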
Send message Joined: 9 Nov 07 Posts: 131 Credit: 180,454 RAC: 0 |
Things have been flowing nicely since the db purge :)

I have an E6600 (2 cores) and I'm getting 20 in total, not per CPU/core.

CLICK TO HELP BUILD
Send message Joined: 8 Oct 07 Posts: 289 Credit: 3,690,838 RAC: 0 |
Dave, it's still 20 total - I have a lot of 4-thread machines... more work for Dave ;)
Send message Joined: 29 Aug 07 Posts: 327 Credit: 116,463,193 RAC: 0 |
Things have been flowing nicely since the db purge :)

I've got a Q6600 that is only getting 20 WUs in total, not per core.

Calm Chaos Forum...Join Calm Chaos Now
Send message Joined: 2 Mar 08 Posts: 5 Credit: 50,383,094 RAC: 0 |
From the Sudoku forums, in a post from POV:

To change max_wus_in_progress to apply per processor rather than per host, you need to edit just one line. In boinc/sched/sched_send.C, line 729 (and 733, although that's just logging), change:

config.max_wus_in_progress

to

config.max_wus_in_progress * host.p_ncpus

then go back to 'boinc' and run 'make' to recompile. Run tools/upgrade as usual, or manually copy the new cgi to the PROJECTROOT/cgi-bin/ directory.

Fish
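For anyone reading along, this is roughly what that change amounts to in context. The surrounding check is approximated rather than copied from that BOINC revision, and in_progress_count stands in for whatever counter the scheduler actually uses there; only config.max_wus_in_progress and host.p_ncpus come from the post above.

    // boinc/sched/sched_send.C, around line 729 (matching log message at ~733)

    // before: the cap is applied per host
    if (in_progress_count >= config.max_wus_in_progress) {
        // stop sending work to this host
    }

    // after: the cap scales with the number of CPUs the host reports
    if (in_progress_count >= config.max_wus_in_progress * host.p_ncpus) {
        // stop sending work to this host
    }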
Send message Joined: 5 Oct 07 Posts: 33 Credit: 3,189,992 RAC: 0 |
Things have been flowing nicely since the db purge :)

Well, I'm not getting that at all. I have three computers crunching just Milkyway, and each of them gets ONE WU each: it crunches it, sends it, reports it, then waits about 30 seconds before getting a new work unit. This has been happening since the upgrade.
Send message Joined: 18 Nov 07 Posts: 280 Credit: 2,442,757 RAC: 0 |
Well, I'm not getting that at all. I have three computers crunching just Milkyway, and each of them gets ONE WU each: it crunches it, sends it, reports it, then waits about 30 seconds before getting a new work unit. This has been happening since the upgrade.

That is odd. What are your connection settings?
Send message Joined: 5 Oct 07 Posts: 33 Credit: 3,189,992 RAC: 0 |
Well, I'm not getting that at all. I have three computers crunching just Milkyway, and each of them gets ONE WU each: it crunches it, sends it, reports it, then waits about 30 seconds before getting a new work unit. This has been happening since the upgrade.

Yes, it is. The settings are: connect every five days, get enough work for 10 days. Here are some of the messages from just one of the computers:

22/03/2008 23:26:13|Milkyway@home|Sending scheduler request: To fetch work. Requesting 1728000 seconds of work, reporting 0 completed tasks
22/03/2008 23:26:23|Milkyway@home|Scheduler request succeeded: got 1 new tasks
22/03/2008 23:26:25|Milkyway@home|Started download of parameters_generated_1206238666_368365
22/03/2008 23:26:27|Milkyway@home|Finished download of parameters_generated_1206238666_368365
22/03/2008 23:26:28|Milkyway@home|Starting gs_345_1206238666_368365_0
22/03/2008 23:26:28|Milkyway@home|Starting task gs_345_1206238666_368365_0 using astronomy version 122
22/03/2008 23:26:33|Milkyway@home|Sending scheduler request: To report completed tasks. Requesting 0 seconds of work, reporting 1 completed tasks
22/03/2008 23:26:38|Milkyway@home|Scheduler request succeeded: got 0 new tasks
22/03/2008 23:34:05|Milkyway@home|Computation for task gs_345_1206238666_368365_0 finished
22/03/2008 23:34:07|Milkyway@home|Started upload of gs_345_1206238666_368365_0_0
22/03/2008 23:34:08|Milkyway@home|Sending scheduler request: To fetch work. Requesting 1728000 seconds of work, reporting 0 completed tasks
22/03/2008 23:34:12|Milkyway@home|Finished upload of gs_345_1206238666_368365_0_0
22/03/2008 23:34:13|Milkyway@home|Scheduler request succeeded: got 1 new tasks
22/03/2008 23:34:15|Milkyway@home|Started download of parameters_generated_1206239150_370471
22/03/2008 23:34:18|Milkyway@home|Finished download of parameters_generated_1206239150_370471
22/03/2008 23:34:19|Milkyway@home|Starting gs_347_1206239150_370471_0
22/03/2008 23:34:19|Milkyway@home|Starting task gs_347_1206239150_370471_0 using astronomy version 122
22/03/2008 23:34:24|Milkyway@home|Sending scheduler request: To report completed tasks. Requesting 0 seconds of work, reporting 1 completed tasks
22/03/2008 23:34:29|Milkyway@home|Scheduler request succeeded: got 0 new tasks
Send message Joined: 27 Aug 07 Posts: 647 Credit: 27,592,547 RAC: 0 |
Try a lower work cache. I've noticed that at some point you get fewer WUs, not more, when the cache is set too high. My settings are usually 0.5 or 1 day and I have no problems getting enough work. ;-)

Lovely greetings, Cori
Send message Joined: 5 Oct 07 Posts: 33 Credit: 3,189,992 RAC: 0 |
Try a lower work cache. I've noticed that at some point you get fewer WUs, not more, when the cache is set too high.

Thanks, that worked... back up to 20 WUs per computer now.
Send message Joined: 9 Nov 07 Posts: 131 Credit: 180,454 RAC: 0 |
Well, I'm not getting that at all. I have three computers crunching just Milkyway, and each of them gets ONE WU each: it crunches it, sends it, reports it, then waits about 30 seconds before getting a new work unit. This has been happening since the upgrade.

I'm not receiving 20 per core either, and I'm sure my settings are correct.

CLICK TO HELP BUILD
Send message Joined: 5 Feb 08 Posts: 236 Credit: 49,648 RAC: 0 |
@fish: I took a look at that before. I think you posted a link to it previously? However, that code was already implemented in a later revision of BOINC after that person posted it.

Also, there's been a recent increase in WUs being given out: it went from 35,000 to about 45,000 at any given time. Our assimilator is having a hard time handling that many units, so we can't let you crunch more just yet. When Travis implements the line search to make the WUs longer, we'll definitely let you have 20 per core. Right now, though, I'm afraid our lonely server just couldn't take the added strain and the workunits would end up in an infinitely growing backlog. :(

But we're working on this! As I said, Travis is implementing a line search, and I'm trying to link one of the other servers on our network to the current one to run a second assimilator. Since the assimilator is a disk-bound process, it basically needs a fast hard drive. So either we're going to have to move the SQL server to a separate machine, or set one up just to run a second feeder, deleter, assimilator, etc.

Dave Przybylo
MilkyWay@home Developer
Department of Computer Science
Rensselaer Polytechnic Institute
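For readers wondering what "a second assimilator on another server" would involve: BOINC project daemons are listed in config.xml, and each daemon entry can name the host it runs on. A minimal sketch, assuming the standard daemon configuration; the command and hostname here are placeholders, not MilkyWay@home's actual setup:

    <daemons>
        <!-- existing assimilator on the main project server -->
        <daemon>
            <cmd>assimilator -app astronomy</cmd>
        </daemon>
        <!-- hypothetical second assimilator running on another machine -->
        <daemon>
            <cmd>assimilator -app astronomy</cmd>
            <host>second-server.example.edu</host>
        </daemon>
    </daemons>

Whether two assimilators can safely share one application's queue depends on how the project's assimilator is written, so this only illustrates the configuration mechanism, not a recommendation.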
Send message Joined: 22 Nov 07 Posts: 285 Credit: 1,076,786,368 RAC: 0 |
Also, there's been a recent increase in WUs being given out: it went from 35,000 to about 45,000 at any given time. Our assimilator is having a hard time handling that many units, so we can't let you crunch more just yet. When Travis implements the line search to make the WUs longer, we'll definitely let you have 20 per core. Right now, though, I'm afraid our lonely server just couldn't take the added strain and the workunits would end up in an infinitely growing backlog. :(

20 per machine is working fine for the most part right now. If it ain't broke, don't fix it. As long as I have a couple of other projects for BOINC to grind away on, things are fine. We certainly don't want the servers to break during the weekends. Better to have a little than nothing at all...

On another note: when and where are the published results (when there are any) going to appear?
Send message Joined: 28 Aug 07 Posts: 31 Credit: 86,152,236 RAC: 0 |
But we're working on this! As I said, Travis is implementing a line search, and I'm trying to link one of the other servers on our network to the current one to run a second assimilator. Since the assimilator is a disk-bound process, it basically needs a fast hard drive. So either we're going to have to move the SQL server to a separate machine, or set one up just to run a second feeder, deleter, assimilator, etc.

Can you give us a bit more info on what hardware MilkyWay is running? I've wondered before how close we are to the server's limit... so now I have an answer: it's close.
Send message Joined: 8 Oct 07 Posts: 289 Credit: 3,690,838 RAC: 0 |
Looks like we need new searches....bone dry again. |
Send message Joined: 3 Oct 07 Posts: 71 Credit: 33,212,009 RAC: 0 |
Looks like we're out of work :( We need work!!! We need work!!!! We need work!!!!!
Send message Joined: 8 Oct 07 Posts: 289 Credit: 3,690,838 RAC: 0 |
Thank you, Travis (that was quick :)