More Work !!! Please :)

Message boards : Number crunching : More Work !!! Please :)
Profile Dave Przybylo
Joined: 5 Feb 08
Posts: 236
Credit: 49,648
RAC: 0
Message 2518 - Posted: 22 Mar 2008, 19:40:38 UTC - in response to Message 2517.  

Things have been flowing nicely since the db purge :)


I think it is more due to upgrading the BOINC server platform... Hey, didn't y'all say you were going to try the per-CPU method of handing out work with the upgrade? Coming soon?



Well,

max_wus_in_progress
Maximum results in progress per CPU. Setting this to something (like 2, for instance) will limit the number of results a given host can simultaneously have registered as 'in progress'.


We have that option set to 20, so you should be getting 20 WUs per CPU, I believe.


That's what I'm getting, 20.




20 in total... or 20 per CPU? Because it should be 20 per CPU; I think we have it capped at 50 total right now, which is easily changeable.
Dave Przybylo
MilkyWay@home Developer
Department of Computer Science
Rensselaer Polytechnic Institute
ID: 2518
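For reference, max_wus_in_progress is a server-side option in the BOINC project's config.xml. A minimal sketch of where it sits, using the value quoted above and omitting everything else in the file:

    <boinc>
      <config>
        <!-- cap on results a single host may have 'in progress' at once;
             whether that cap applies per host or per CPU is exactly what
             this thread is debating -->
        <max_wus_in_progress>20</max_wus_in_progress>
      </config>
    </boinc>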
Profile Philadelphia
Joined: 9 Nov 07
Posts: 131
Credit: 180,454
RAC: 0
Message 2519 - Posted: 22 Mar 2008, 19:47:59 UTC - in response to Message 2518.  
Last modified: 22 Mar 2008, 19:48:35 UTC

20 in total... or 20 per CPU? Because it should be 20 per CPU; I think we have it capped at 50 total right now, which is easily changeable.


I have an E6600 (2 cores) and I'm getting 20 'in total', not per CPU/core.
ID: 2519
Profile Jayargh
Joined: 8 Oct 07
Posts: 289
Credit: 3,690,838
RAC: 0
Message 2520 - Posted: 22 Mar 2008, 20:08:36 UTC

Dave, it's still 20 total. I have a lot of 4-thread machines... more work for Dave ;)
ID: 2520
Profile Labbie
Joined: 29 Aug 07
Posts: 327
Credit: 116,463,193
RAC: 0
Message 2521 - Posted: 22 Mar 2008, 20:10:09 UTC - in response to Message 2519.  

20 in total... or 20 per CPU? Because it should be 20 per CPU; I think we have it capped at 50 total right now, which is easily changeable.


I have an E6600 (2 cores) and I'm getting 20 'in total', not per CPU/core.


I've got a Q6600 that is only getting 20 WUs in total, not per core.


Calm Chaos Forum...Join Calm Chaos Now
ID: 2521
Profile Philadelphia
Joined: 9 Nov 07
Posts: 131
Credit: 180,454
RAC: 0
Message 2522 - Posted: 22 Mar 2008, 20:33:14 UTC

As Cheech and Chong said, "Dave's not here" :)
ID: 2522
Profile Fish
Joined: 2 Mar 08
Posts: 5
Credit: 50,383,094
RAC: 0
Message 2523 - Posted: 22 Mar 2008, 20:38:51 UTC - in response to Message 2518.  



20 in total... or 20 per CPU? Because it should be 20 per CPU; I think we have it capped at 50 total right now, which is easily changeable.

From the Sudoku forums, in a post from POV:

To change max_wus_in_progress to apply per processor and not per host, you need to edit just one line. In boinc/sched/sched_send.C, line 729 (and 733, although that's just logging), change:

    config.max_wus_in_progress

to

    config.max_wus_in_progress * host.p_ncpus

Then go back to the 'boinc' directory and run 'make' to recompile. Run tools/upgrade as usual, or manually copy the new CGI to the PROJECTROOT/cgi-bin/ directory.
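A rough sketch of what that one-line change amounts to. Only config.max_wus_in_progress and host.p_ncpus come from the post above; the struct and counter names here are hypothetical stand-ins, not the actual sched_send.C code:

    /* Illustrative only: scale the per-host cap by the host's CPU count. */
    struct host_info {
        int p_ncpus;           /* number of CPUs reported by the host */
        int wus_in_progress;   /* results currently 'in progress' (hypothetical field) */
    };

    static int can_send_more_work(const struct host_info *h, int max_wus_in_progress) {
        int cap = max_wus_in_progress * h->p_ncpus;   /* was just max_wus_in_progress */
        return h->wus_in_progress < cap;
    }

With max_wus_in_progress = 20, a quad-core host would then be allowed 80 results in progress instead of 20.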


Fish
ID: 2523
Profile nickth
Joined: 5 Oct 07
Posts: 33
Credit: 3,189,992
RAC: 0
Message 2524 - Posted: 22 Mar 2008, 20:40:04 UTC - in response to Message 2518.  

20 in total... or 20 per CPU? Because it should be 20 per CPU; I think we have it capped at 50 total right now, which is easily changeable.



Well, I'm not getting that at all. I have three computers crunching just MilkyWay, and each of them gets ONE WU at a time: it crunches it, sends it, reports it, then waits about 30 seconds before getting a new work unit. This has been happening since the upgrade.
ID: 2524
Emanuel
Joined: 18 Nov 07
Posts: 280
Credit: 2,442,757
RAC: 0
Message 2525 - Posted: 22 Mar 2008, 23:11:14 UTC - in response to Message 2524.  

Well, I'm not getting that at all. I have three computers crunching just MilkyWay, and each of them gets ONE WU at a time: it crunches it, sends it, reports it, then waits about 30 seconds before getting a new work unit. This has been happening since the upgrade.


That is odd. What are your connection settings?
ID: 2525
Profile nickth
Joined: 5 Oct 07
Posts: 33
Credit: 3,189,992
RAC: 0
Message 2526 - Posted: 22 Mar 2008, 23:38:37 UTC - in response to Message 2525.  

That is odd. What are your connection settings?

Yes, it is. The settings are: connect every five days, get enough work for 10 days.
Here are some of the messages from just one of the computers:

22/03/2008 23:26:13|Milkyway@home|Sending scheduler request: To fetch work. Requesting 1728000 seconds of work, reporting 0 completed tasks
22/03/2008 23:26:23|Milkyway@home|Scheduler request succeeded: got 1 new tasks
22/03/2008 23:26:25|Milkyway@home|Started download of parameters_generated_1206238666_368365
22/03/2008 23:26:27|Milkyway@home|Finished download of parameters_generated_1206238666_368365
22/03/2008 23:26:28|Milkyway@home|Starting gs_345_1206238666_368365_0
22/03/2008 23:26:28|Milkyway@home|Starting task gs_345_1206238666_368365_0 using astronomy version 122
22/03/2008 23:26:33|Milkyway@home|Sending scheduler request: To report completed tasks. Requesting 0 seconds of work, reporting 1 completed tasks
22/03/2008 23:26:38|Milkyway@home|Scheduler request succeeded: got 0 new tasks
22/03/2008 23:34:05|Milkyway@home|Computation for task gs_345_1206238666_368365_0 finished
22/03/2008 23:34:07|Milkyway@home|Started upload of gs_345_1206238666_368365_0_0
22/03/2008 23:34:08|Milkyway@home|Sending scheduler request: To fetch work. Requesting 1728000 seconds of work, reporting 0 completed tasks
22/03/2008 23:34:12|Milkyway@home|Finished upload of gs_345_1206238666_368365_0_0
22/03/2008 23:34:13|Milkyway@home|Scheduler request succeeded: got 1 new tasks
22/03/2008 23:34:15|Milkyway@home|Started download of parameters_generated_1206239150_370471
22/03/2008 23:34:18|Milkyway@home|Finished download of parameters_generated_1206239150_370471
22/03/2008 23:34:19|Milkyway@home|Starting gs_347_1206239150_370471_0
22/03/2008 23:34:19|Milkyway@home|Starting task gs_347_1206239150_370471_0 using astronomy version 122
22/03/2008 23:34:24|Milkyway@home|Sending scheduler request: To report completed tasks. Requesting 0 seconds of work, reporting 1 completed tasks
22/03/2008 23:34:29|Milkyway@home|Scheduler request succeeded: got 0 new tasks
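For scale, each of those requests asks for 1,728,000 s / 86,400 s per day = 20 days' worth of work. Exactly how the client derives that figure from the "5 days / 10 days" settings depends on the client version, but it is clearly the oversized cache at work, which is what the next reply addresses.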

ID: 2526
Profile Cori
Joined: 27 Aug 07
Posts: 647
Credit: 27,592,547
RAC: 0
Message 2527 - Posted: 22 Mar 2008, 23:47:10 UTC
Last modified: 22 Mar 2008, 23:47:45 UTC

Try a lower work cache. I've noticed that, past a certain point, you get fewer WUs rather than more when the cache is set too high.
My settings are usually 0.5 or 1 day and I have no probs with getting enough work. ;-)
Lovely greetings, Cori
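For anyone wanting to pin the cache the way Cori suggests without touching the website preferences, the client also reads a local global_prefs_override.xml in the BOINC data directory. A minimal sketch; whether the additional-days field is honored by 2008-era 5.x clients is an assumption, and the connect-interval field alone is enough to shrink the cache:

    <global_preferences>
      <!-- "Connect to network about every X days" -->
      <work_buf_min_days>0.5</work_buf_min_days>
      <!-- "Maintain enough work for an additional X days" -->
      <work_buf_additional_days>0.25</work_buf_additional_days>
    </global_preferences>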
ID: 2527
Profile nickth
Joined: 5 Oct 07
Posts: 33
Credit: 3,189,992
RAC: 0
Message 2529 - Posted: 23 Mar 2008, 0:13:42 UTC - in response to Message 2527.  

Try a lower work cache. I've noticed that, past a certain point, you get fewer WUs rather than more when the cache is set too high.
My settings are usually 0.5 or 1 day and I have no probs with getting enough work. ;-)


Thanks, that worked... back up to 20 WUs per computer now.
ID: 2529
Profile Philadelphia
Joined: 9 Nov 07
Posts: 131
Credit: 180,454
RAC: 0
Message 2530 - Posted: 23 Mar 2008, 0:23:18 UTC - in response to Message 2525.  

That is odd. What are your connection settings?


I'm not receiving 20 per core either, and I'm sure my settings are correct.

ID: 2530
Profile Dave Przybylo
Joined: 5 Feb 08
Posts: 236
Credit: 49,648
RAC: 0
Message 2531 - Posted: 23 Mar 2008, 4:13:17 UTC

@Fish: I took a look at that before. I think you posted a link to it previously? However, that code was already implemented in a later revision of BOINC, after that person posted it.

Also, there's been a recent increase in WUs being given out: it went from 35,000 to about 45,000 at any given time. Our assimilator is having a hard time handling that many units, so we can't let you guys crunch more just yet. When Travis implements the line search to make the WUs longer, we'll definitely let you guys have 20 per core. Right now, though, I'm afraid our lonely server just couldn't take the added strain, and the workunits would end up in an infinitely increasing backlog. :(

But we're working on this! As I said, Travis is implementing a line search, and I'm trying to link one of the other servers on our network to the current one to run a second assimilator. Since the assimilator is a disk-bound process, it basically needs a fast hard drive. So either we're going to have to move the SQL server to a separate machine, or set one up just to run a second feeder, deleter, assimilator, etc.
Dave Przybylo
MilkyWay@home Developer
Department of Computer Science
Rensselaer Polytechnic Institute
ID: 2531
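For anyone curious what "a second assimilator on another server" looks like in practice: BOINC projects declare their back-end daemons in config.xml, and each daemon entry can name the machine it runs on. A minimal sketch, with the command lines and host names purely illustrative (the thread doesn't give MilkyWay's actual daemon commands):

    <daemons>
      <!-- existing assimilator on the main project server -->
      <daemon>
        <cmd>assimilator -d 3 -app astronomy</cmd>
        <host>milkyway-main</host>
      </daemon>
      <!-- hypothetical second assimilator offloaded to another machine -->
      <daemon>
        <cmd>assimilator -d 3 -app astronomy</cmd>
        <host>milkyway-second</host>
      </daemon>
    </daemons>

As Dave notes, this only helps if the second machine has its own fast disk and the database can keep up, hence the thought of moving the SQL server off the main box.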
Profile Kevint
Joined: 22 Nov 07
Posts: 285
Credit: 1,076,786,368
RAC: 0
Message 2532 - Posted: 23 Mar 2008, 4:25:11 UTC - in response to Message 2531.  

Also, there's been a recent increase in WUs being given out: it went from 35,000 to about 45,000 at any given time. Our assimilator is having a hard time handling that many units, so we can't let you guys crunch more just yet. When Travis implements the line search to make the WUs longer, we'll definitely let you guys have 20 per core. Right now, though, I'm afraid our lonely server just couldn't take the added strain, and the workunits would end up in an infinitely increasing backlog. :(



20 per machine is working fine for the most part right now. If it ain't broke, don't fix it.
As long as I have a couple of other projects for BOINC to grind away on, things are fine.

We certainly don't want the servers to break during the weekends. Better to have a little than nothing at all...


On another note: when/where are the published results (when there are any) going to be posted?
ID: 2532
Honza
Joined: 28 Aug 07
Posts: 31
Credit: 86,152,236
RAC: 0
Message 2533 - Posted: 23 Mar 2008, 8:14:20 UTC - in response to Message 2531.  

But we're working on this! As I said, Travis is implementing a line search, and I'm trying to link one of the other servers on our network to the current one to run a second assimilator. Since the assimilator is a disk-bound process, it basically needs a fast hard drive. So either we're going to have to move the SQL server to a separate machine, or set one up just to run a second feeder, deleter, assimilator, etc.

Can you give us a bit more info on what hardware MilkyWay is running?
I have asked before how close we are to the limits of the server... so now I have the answer: it's close.
ID: 2533
Profile Jayargh
Joined: 8 Oct 07
Posts: 289
Credit: 3,690,838
RAC: 0
Message 2535 - Posted: 23 Mar 2008, 13:46:06 UTC

Looks like we need new searches... bone dry again.
ID: 2535
Profile Philadelphia
Joined: 9 Nov 07
Posts: 131
Credit: 180,454
RAC: 0
Message 2536 - Posted: 23 Mar 2008, 13:49:27 UTC

Looks like we're out of work :(
ID: 2536
Profile Cappy [Team Musketeers]
Joined: 3 Oct 07
Posts: 71
Credit: 33,212,009
RAC: 0
Message 2538 - Posted: 23 Mar 2008, 14:14:38 UTC - in response to Message 2536.  

Looks like we're out of work :(



we need work!!! we need work!!!! we need work!!!!!
ID: 2538
Profile Philadelphia
Joined: 9 Nov 07
Posts: 131
Credit: 180,454
RAC: 0
Message 2543 - Posted: 23 Mar 2008, 15:15:55 UTC - in response to Message 2538.  

Looks like we're out of work :(



we need work!!! we need work!!!! we need work!!!!!


Hopefully some soon.

ID: 2543
Profile Jayargh
Joined: 8 Oct 07
Posts: 289
Credit: 3,690,838
RAC: 0
Message 2544 - Posted: 23 Mar 2008, 15:19:15 UTC

Thank you, Travis (that was quick :)
ID: 2544