Message boards : Number crunching : Cache Limit
Joined: 22 Nov 07 Posts: 285 Credit: 1,076,786,368 RAC: 0
I understand the reasoning behind limiting the number of WUs a single computer can cache. I have a couple of machines that I would like to crunch exclusively for MW, but I also want a backup project so that when MW has issues the machine doesn't run dry. I set the resource share on these machines to 1000 for MW and 2 for the backup project, and my cache to 0.1 days. The problem: even with the low cache, my crunchers are still requesting work from the backup project. If we could get a slightly higher max WU cache setting from MW, then BOINC would only ask the backup project for work if the MW cache went dry. Or, instead of 20 per host, maybe set it to 10 or 15 per core, since I don't seem to have this issue on single or dual core machines, only quad core and above. Another way to fix this would be to shorten the RPC timer (the deferral counted from the last call to the server) from 20 minutes to 5 minutes, but that could cause higher network bandwidth and server-side usage.
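(For reference, the small client-side cache described above is the kind of thing that can be set locally in a global_prefs_override.xml file in the BOINC data directory. This is a minimal sketch only, assuming a client version that supports these elements; the per-project resource shares themselves are still set on each project's web preferences page.)

```xml
<!-- global_prefs_override.xml: local override of the web preferences.
     Sketch only; element support varies by BOINC client version. -->
<global_preferences>
    <!-- Keep roughly 0.1 days (about 2.4 hours) of work queued locally. -->
    <work_buf_min_days>0.1</work_buf_min_days>
    <!-- Request no extra work beyond that minimum buffer. -->
    <work_buf_additional_days>0.0</work_buf_additional_days>
</global_preferences>
```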
Joined: 5 Feb 08 Posts: 236 Credit: 49,648 RAC: 0
> I understand the reasoning behind limiting the number of WUs a single computer can cache.

We realize this is an issue. I just sent Travis the code to reduce the RPC call time to 10 minutes. I feel that quartering the call time (down to 5 minutes) would put too much load on the server, so we'll see how 10 goes first, and if it needs to be lowered we'll try that.

Dave Przybylo
MilkyWay@home Developer
Department of Computer Science
Rensselaer Polytechnic Institute
Joined: 22 Nov 07 Posts: 285 Credit: 1,076,786,368 RAC: 0
> We realize this is an issue. I just sent Travis the code to reduce the RPC call time to 10 minutes. I feel that quartering the call time (down to 5 minutes) would put too much load on the server, so we'll see how 10 goes first, and if it needs to be lowered we'll try that.

Thank you for your quick response!! Network/server impact is why my first suggestion was to change the maximum WU cache to a per-core limit rather than a per-host limit. It would not impact the server and would allow the above scenario to work without issues.
Joined: 18 Dec 07 Posts: 4 Credit: 12,564,246 RAC: 0
Please raise the WU limit from 20!!!! My computers run dry if I don't run other projects at the same time...

Regards,
Bjørn Lindberg
Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0
> Please raise the WU limit from 20!!!!

We can't raise the WU limit above 20 (even 20 is a bit high, unfortunately). Future work units will take much more time, however.
Joined: 8 Oct 07 Posts: 289 Credit: 3,690,838 RAC: 0
> Please raise the WU limit from 20!!!!

Travis, what about the per-core idea? 5-7 per core instead of 20 per host... is that feasible? From a cruncher's point of view it is preferable to a per-host limit.
Joined: 27 Aug 07 Posts: 647 Credit: 27,592,547 RAC: 0
> Please raise the WU limit from 20!!!!

But that way I'd get fewer, for example, as I have no quad cores. ;-)

Lovely greetings,
Cori
Joined: 22 Nov 07 Posts: 285 Credit: 1,076,786,368 RAC: 0
> Please raise the WU limit from 20!!!!

10 per core would work fine (15 would be better), even with a single-core machine; 20 per core is too much IMO for a beta/testing project.

- That would give single cores a 10/15 WU cache; when 1 WU is returned it gets replaced with 1 WU.
- Dual cores would have 20/30, and quads 40/60.

With the RPC call moved from 20 minutes to 15 or 10 and a per-core limit instead of a host limit, this should work just fine: we "should" not run out of work, nor have the backup project take over the resources. And in the instance that we do run out of work, the backup project could kick in and pick up the slack. Currently I am only running 2 quads (one at 90%, one at 50%) and 3 duos (each at 20%), but I would like to start contributing more.
Joined: 5 Feb 08 Posts: 236 Credit: 49,648 RAC: 0
> Please raise the WU limit from 20!!!!

Well, BOINC doesn't have a "per core" option that I can find. I believe that, even if it is possible, we would have to code it ourselves. Can anyone confirm this?

Dave Przybylo
MilkyWay@home Developer
Department of Computer Science
Rensselaer Polytechnic Institute
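(For what it's worth, later versions of the BOINC server code do scale some host limits by CPU count through options in the project's config.xml; whether the server version in use here already supports them is another question. A minimal sketch, assuming a scheduler that honours these elements:)

```xml
<!-- Fragment of a project's config.xml. Sketch only, assuming a server
     build in which these limits are interpreted per CPU. -->
<config>
    <!-- Cap on results in progress per CPU: 5 here would mean
         20 in progress on a quad-core host. -->
    <max_wus_in_progress>5</max_wus_in_progress>
    <!-- Cap on results issued per CPU per day. -->
    <daily_result_quota>15</daily_result_quota>
</config>
```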
Joined: 22 Nov 07 Posts: 285 Credit: 1,076,786,368 RAC: 0
I can't remember which project was using this, but I do believe I have seen it used on a couple of projects... TSP, maybe?
Joined: 8 Oct 07 Posts: 289 Credit: 3,690,838 RAC: 0
All I can confirm is that some projects do, or have done, this per CPU. LHC, Nano-Hive and Cosmology have used it. Someone else, I am sure, will be able to tell you how. :)
Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0
I'll take a look into it. Something else I want to do is add a line search capability to the work units. With that in place, the amount of time for a WU should increase in the range of 10x-50x, which should keep almost everyone happy, I'd hope. :)
Joined: 18 Dec 07 Posts: 4 Credit: 12,564,246 RAC: 0
OK, thanks for the quick reply. :) If the work units get bigger it will be no problem; today my dual quad-cores are running out of work all the time. There should be an option in the BOINC program to set the communication deferral to 5 minutes instead of 20, and then the problem would never have been there.

Regards,
Bjørn
Joined: 10 Nov 07 Posts: 96 Credit: 29,931,027 RAC: 0
Einstein@home’s quotas are per-CPU; see the recent thread from the NC forum there, Changed daily quota. |
Joined: 2 Jan 08 Posts: 123 Credit: 69,762,022 RAC: 1,541
> Einstein@home’s quotas are per-CPU; see the recent thread from the NC forum there, Changed daily quota.

That is true, but Einstein only scales as high as 4 cores, so 8- and 16-way machines get no more work units per day than a 4-core machine (currently 16 per core per day, i.e. 64 total a day). I believe this can be changed with the latest BOINC client versions?
Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0
I changed the next_rpc_delay to 600 seconds (which should be 10 minutes), which I believe is down from the default of 20 minutes. Let me know if this helps.
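(For reference, a sketch of what that setting looks like if it is exposed through the project's config.xml rather than hard-coded in the scheduler. Note that some server versions also have a separate min_sendwork_interval option, which is usually what drives the "communication deferred" backoff clients see between work requests, so it may be worth checking both:)

```xml
<!-- Fragment of config.xml. Sketch only; option names and defaults vary
     between BOINC server versions. -->
<config>
    <!-- Ask clients to contact the scheduler again within 600 s (10 min),
         even if they don't need work. -->
    <next_rpc_delay>600</next_rpc_delay>
    <!-- Minimum delay the scheduler asks a host to wait between requests;
         typically reported by the client as "communication deferred". -->
    <min_sendwork_interval>600</min_sendwork_interval>
</config>
```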
Joined: 22 Nov 07 Posts: 285 Credit: 1,076,786,368 RAC: 0
There still seems to be a 20-minute delay...
Joined: 7 Mar 08 Posts: 2 Credit: 2,089,897 RAC: 0
|
Joined: 8 Oct 07 Posts: 289 Credit: 3,690,838 RAC: 0
I am still seeing the 20-minute RPC delay too.
Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
> I am still seeing the 20-minute RPC delay too.

I did see in one post that Travis said it wasn't working and that he was taking that change out.