Is there a server-limited maximum at 100 CPU workunits?
**EXT64** (Joined: 23 Jun 14, Posts: 1, Credit: 4,122,499,497, RAC: 635)
I recently ran into an issue while running CPU Separation work units: it does not seem possible to have more than 100 WUs downloaded concurrently. When I try to buffer more than 100, the server returns a "no new work available" message for the CPU slot; however, it always downloads new work units as others finish, keeping the total buffer at around 99-100. This happens even on a large host with more than 100 hardware threads, meaning it is impossible to run a full set of CPU Separation threads. (No such issue with GPU Separation units; I can easily buffer 200-300.) Am I correct in assuming this is a server-side limit on the maximum number of downloads? If so, would it be possible to increase it?
(Joined: 24 Jan 11, Posts: 687, Credit: 536,600,990, RAC: 27)
I recently asked the same question at Seti. The answer was that it is not a BOINC issue but rather a project-level, server-side constraint. I questioned whether a CPU job limit of 100 is still practical now that modern processors have 128 or more threads. The Seti CPU job limit was recently doubled. So all it takes is a project scientist to change the limits.
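For context, per-host job limits like this are normally set on the server side in a BOINC project's config.xml. Below is a minimal sketch, assuming the project uses the standard max_wus_in_progress scheduler options; the values shown are purely illustrative, not the project's actual settings.

```xml
<!-- Sketch: per-host job limits in a BOINC project's server config.xml.
     Option names are the standard BOINC scheduler settings; the values
     below are illustrative, not MilkyWay@home's actual configuration. -->
<config>
    <!-- cap on the number of CPU jobs a single host may have in progress -->
    <max_wus_in_progress>300</max_wus_in_progress>
    <!-- cap on the number of GPU jobs a single host may have in progress -->
    <max_wus_in_progress_gpu>300</max_wus_in_progress_gpu>
</config>
```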
**hericks** (Joined: 31 Jul 11, Posts: 3, Credit: 73,650,377, RAC: 0)
Hi, this has apparently been lifted to 300, but even that is chewed through by the GPU within 2.5 hours. I think that at a time when a (halfway) modern PC with a (halfway) modern GPU has more power than an IBM ASCI Red, the world's fastest supercomputer at the turn of the millennium, the work units should be larger. This would probably drive some old hardware out of the game, but that would certainly also be the more sustainable move. Cheers