Cache Limit

Kevint
Joined: 22 Nov 07
Posts: 285
Credit: 1,076,786,368
RAC: 0
Message 1981 - Posted: 6 Mar 2008, 18:52:33 UTC
Last modified: 6 Mar 2008, 18:55:07 UTC

I understand the reasoning behind limiting the number of WUs a single computer can cache.

I have a couple of machines that I would like to set to crunch exclusively for MW, but I would also like a backup project so that when MW has issues, the machine doesn't run dry.
I set the resource share on these machines to 1000 for MW and 2 for the backup project, and my cache preference to 0.1 days.

The problem: with the low cache, my crunchers are still requesting work from the backup project.

If we could get a slightly higher max WU cache setting from MW, then BOINC would only ask the backup project for work if the MW cache ran dry. Or instead of 20 per host, maybe set it to 10 or 15 per core,
since I don't seem to have this issue on single- or dual-core machines, only quad-core and above...

I guess another way to fix this would be to shorten the RPC timer (the delay between calls to the server) from 20 minutes to 5 minutes, but that could mean higher network bandwidth and server-side load.
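For what it's worth, the 0.1-day cache can also be set locally in the client's global_prefs_override.xml rather than through the web preferences. A minimal sketch, assuming the standard client preference tags (check your client version):

    <!-- global_prefs_override.xml, placed in the BOINC data directory;
         choose "Read local prefs file" in the client afterwards.
         Tag names assumed from the standard BOINC client. -->
    <global_preferences>
        <!-- keep only ~0.1 days (about 2.4 hours) of work cached -->
        <work_buf_min_days>0.1</work_buf_min_days>
        <work_buf_additional_days>0.0</work_buf_additional_days>
    </global_preferences>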
ID: 1981
Dave Przybylo
Joined: 5 Feb 08
Posts: 236
Credit: 49,648
RAC: 0
Message 1982 - Posted: 6 Mar 2008, 19:11:15 UTC - in response to Message 1981.  

We realize this is an issue. I just sent Travis the code to reduce the RPC call time to 10 minutes. I feel that quartering the call time would put too much load on the server, so we'll see how 10 minutes goes first; if it needs to be lowered, we'll try that.

Dave Przybylo
MilkyWay@home Developer
Department of Computer Science
Rensselaer Polytechnic Institute
ID: 1982
Kevint
Joined: 22 Nov 07
Posts: 285
Credit: 1,076,786,368
RAC: 0
Message 1986 - Posted: 6 Mar 2008, 20:36:50 UTC - in response to Message 1982.  

Thank you for your quick response!!

Network/server impact is why my first suggestion was to change the max WU limit to a max per core rather than a max per host.

It would not impact the server and would allow the above scenario to work without issues.


ID: 1986
aztylen
Joined: 18 Dec 07
Posts: 4
Credit: 12,564,246
RAC: 0
Message 1988 - Posted: 6 Mar 2008, 21:01:52 UTC

Please raise the WU limit from 20!
My computers run dry if I don't run other projects at the same time...

Regards

Bjørn Lindberg
ID: 1988
Travis
Volunteer moderator · Project administrator · Project developer · Project tester · Project scientist
Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 1995 - Posted: 6 Mar 2008, 23:21:13 UTC - in response to Message 1988.  

We can't raise the WU limit above 20 (even 20 is a bit high, unfortunately). Future work units will take much more time, however.

ID: 1995
Jayargh
Joined: 8 Oct 07
Posts: 289
Credit: 3,690,838
RAC: 0
Message 1996 - Posted: 6 Mar 2008, 23:48:15 UTC - in response to Message 1995.  

Travis, what about the per-core idea? 5-7 per core instead of 20 per host... is that feasible? Crunchers would prefer it over a per-host limit.
ID: 1996
Cori
Joined: 27 Aug 07
Posts: 647
Credit: 27,592,547
RAC: 0
Message 1997 - Posted: 6 Mar 2008, 23:49:27 UTC - in response to Message 1996.  

But that way I'd get fewer, for example, as I have no quad cores. ;-)
Lovely greetings, Cori
ID: 1997
Kevint
Joined: 22 Nov 07
Posts: 285
Credit: 1,076,786,368
RAC: 0
Message 2000 - Posted: 7 Mar 2008, 0:40:20 UTC - in response to Message 1997.  

10 per core would work fine (15 would be better), even on a single-core machine; 20 per core is too much for a beta/testing project, IMO.
That would give single cores a 10/15-WU cache: when one WU is returned, it gets replaced with one WU.
Dual cores would have 20/30,
quads 40/60.

With the RPC call moved from 20 minutes to 15 or 10, and a per-core limit instead of a host limit, this should work just fine: we "should" not run out of work, nor have the backup project take over the resources. If we do run out of work, then the backup project could kick in and pick up the slack. (A rough sanity check of the drain times is sketched at the end of this post.)


Currently I am only running 2 quads (one at 90%, one at 50%) and 3 duos (each at 20%), but I would like to start contributing more.
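
To put rough numbers on the "should not run out of work" claim, here is a back-of-envelope check; the 4-minute WU runtime is an assumed figure for illustration, not a measured MilkyWay@home number:

    // Sketch: how long a per-core cache lasts relative to the RPC interval.
    // The 4-minute WU runtime below is an assumption for illustration only.
    #include <iostream>

    int main() {
        const double wu_minutes   = 4.0;   // assumed runtime of one WU on one core
        const double rpc_minutes  = 10.0;  // proposed RPC interval
        const int    per_core_cap = 10;    // proposed per-core cache limit

        const int core_counts[] = {1, 2, 4};
        for (int cores : core_counts) {
            // Each core drains its own share of the cache, so the drain
            // time is cap * runtime regardless of how many cores there are.
            double drain_minutes = per_core_cap * wu_minutes;
            std::cout << cores << " core(s): " << per_core_cap * cores
                      << "-WU cache drains in ~" << drain_minutes
                      << " min vs. a " << rpc_minutes << " min RPC interval\n";
        }
        return 0;
    }

As long as the drain time comfortably exceeds the RPC interval, the cache is topped up before it empties, whatever the core count.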
ID: 2000
Dave Przybylo
Joined: 5 Feb 08
Posts: 236
Credit: 49,648
RAC: 0
Message 2001 - Posted: 7 Mar 2008, 0:43:15 UTC - in response to Message 2000.  

Well, BOINC doesn't have a "per core" option that I can find. I believe if it's possible, we would have to code it ourselves. Can anyone confirm this?
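
If it does have to be hand-coded, the change would presumably live in the scheduler's work-send check. A minimal sketch of the idea; the type and field names (Host, p_ncpus, wus_in_progress, may_send_work) are illustrative placeholders, not the actual BOINC scheduler API:

    // Sketch of a per-core in-progress limit for a BOINC-style scheduler.
    // All names here are placeholders, not real BOINC scheduler types.
    struct Host {
        int p_ncpus;          // processor count reported by the client
        int wus_in_progress;  // results sent but not yet reported back
    };

    const int MAX_WUS_PER_CORE  = 10; // the proposed 10-15 per core
    const int MAX_CORES_COUNTED = 8;  // cap so very wide SMP boxes
                                      // cannot hoard the whole queue

    bool may_send_work(const Host& host) {
        int cores = host.p_ncpus < MAX_CORES_COUNTED
                        ? host.p_ncpus : MAX_CORES_COUNTED;
        // per-core limit instead of the current flat 20-per-host limit
        return host.wus_in_progress < cores * MAX_WUS_PER_CORE;
    }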
Dave Przybylo
MilkyWay@home Developer
Department of Computer Science
Rensselaer Polytechnic Institute
ID: 2001
Kevint
Joined: 22 Nov 07
Posts: 285
Credit: 1,076,786,368
RAC: 0
Message 2002 - Posted: 7 Mar 2008, 0:45:03 UTC - in response to Message 2001.  



I can't remember which project was using this, but I do believe I have seen it used on a couple of projects... TSP maybe?
ID: 2002
Jayargh
Joined: 8 Oct 07
Posts: 289
Credit: 3,690,838
RAC: 0
Message 2003 - Posted: 7 Mar 2008, 0:45:58 UTC - in response to Message 2001.  
Last modified: 7 Mar 2008, 0:47:17 UTC



All I can confirm is that some projects do/have done this per CPU. LHC, Nano-Hive, and Cosmology have used it. Someone else, I am sure, will be able to tell you how. :)
ID: 2003
Travis
Volunteer moderator · Project administrator · Project developer · Project tester · Project scientist
Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 2005 - Posted: 7 Mar 2008, 0:47:18 UTC - in response to Message 2003.  



I'll take a look into it.

Something else I want to do is add a line-search capability to the work units. With that in place, the time for a WU should increase by 10x-50x, which should keep almost everyone happy, I'd hope. :)
ID: 2005
aztylen
Joined: 18 Dec 07
Posts: 4
Credit: 12,564,246
RAC: 0
Message 2016 - Posted: 7 Mar 2008, 5:55:32 UTC
Last modified: 7 Mar 2008, 5:56:07 UTC

OK, thanks for the quick reply. :)
If the work units get bigger, it will be no problem. Today my dual quad-cores run out of work all the time. There should be an option in the BOINC program to set the communication deferral to 5 minutes instead of 20, and then the problem would never have been there.

Regards,
Bjørn
ID: 2016
Odysseus
Joined: 10 Nov 07
Posts: 96
Credit: 29,931,027
RAC: 0
Message 2023 - Posted: 7 Mar 2008, 7:21:04 UTC

Einstein@home’s quotas are per-CPU; see the recent thread from the NC forum there, Changed daily quota.
ID: 2023
Conan
Joined: 2 Jan 08
Posts: 123
Credit: 69,522,293
RAC: 1,631
Message 2031 - Posted: 7 Mar 2008, 12:55:58 UTC - in response to Message 2023.  

That is true, but Einstein only goes as high as 4 cores, so 8- and 16-way machines get as many work units a day as a 4-core machine (currently 16 per core per day, which equals 64 total a day).
I believe this can be changed with the latest BOINC versions?
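
If I recall the standard BOINC server code correctly, that per-CPU behaviour comes from the daily quota option in the project's config.xml, which the scheduler multiplies by the host's CPU count up to an internal cap (which would explain Einstein's 4-core ceiling). A sketch of the fragment, with the tag name assumed rather than confirmed:

    <!-- Fragment of a BOINC project's config.xml (sketch; tag name
         assumed). The scheduler scales this per-CPU value by the
         host's core count, up to an internal cap. -->
    <daily_result_quota>16</daily_result_quota>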
ID: 2031
Travis
Volunteer moderator · Project administrator · Project developer · Project tester · Project scientist
Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 2042 - Posted: 7 Mar 2008, 16:40:46 UTC - in response to Message 2031.  


I changed the next_rpc_delay to 600 seconds (which should be 10 minutes), which I believe is down from the default of 20. Let me know if this helps.
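
For reference, that change would look roughly like the following fragment of the project's config.xml. This is a sketch: next_rpc_delay is a standard server option, but the note about min_sendwork_interval is an assumption about why a 20-minute backoff might persist, not something confirmed in this thread:

    <!-- Fragment of the project's config.xml (sketch). -->
    <!-- Longest a client should go between scheduler contacts: -->
    <next_rpc_delay>600</next_rpc_delay>
    <!-- If clients still show a 20-minute "communication deferred"
         backoff, the separate min_sendwork_interval option (which
         controls how soon a host may ask for work again) may be the
         real knob (an assumption, not confirmed here). -->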
ID: 2042
Kevint
Joined: 22 Nov 07
Posts: 285
Credit: 1,076,786,368
RAC: 0
Message 2051 - Posted: 7 Mar 2008, 18:32:07 UTC - in response to Message 2042.  


Still seems to be a 20-minute delay...
ID: 2051
UBT - NaRyan
Joined: 7 Mar 08
Posts: 2
Credit: 2,089,897
RAC: 0
Message 2101 - Posted: 8 Mar 2008, 2:10:39 UTC - in response to Message 2042.  

ID: 2101
Jayargh
Joined: 8 Oct 07
Posts: 289
Credit: 3,690,838
RAC: 0
Message 2102 - Posted: 8 Mar 2008, 2:17:37 UTC

I am still seeing a 20-minute RPC delay too.
ID: 2102
banditwolf
Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 2103 - Posted: 8 Mar 2008, 2:20:27 UTC - in response to Message 2102.  

I did see in one post that Travis said it wasn't working and he was taking that change out.
ID: 2103