Message boards : Number crunching : 8 Workunit limit
Joined: 29 Aug 07 Posts: 486 Credit: 576,548,171 RAC: 0
I mean, do duals and single-core CPUs still get 20 WUs at a time? And if 20 was working, why the shift to 8? Are they against having quads work here?
Joined: 29 Jul 08 Posts: 267 Credit: 188,848,188 RAC: 0
On all 3 of my quads I'm almost out of work. I'm almost starting to believe they do want the users with quads to leave, and I hope that's not really the devs' motivation behind this.
Joined: 9 Jul 08 Posts: 85 Credit: 44,842,651 RAC: 0
I must admit I don't understand why this is causing a problem, since the previous 20 WU "limit" just meant 20 WUs at any given time. In other words, if you had 20 WUs on a box, you had to complete at least one of them to get another. My experience has always been that within a few minutes of returning one, I got another (it depends on how soon the BOINC client initiates a request once it has reported a completed one). The limit for established machines with a "good" history of returning completed WUs is still set to 700/day per core, so unless they really buggered up this new setting, there's no good reason why at least a quad-core shouldn't always have a couple of WUs waiting to run, 4 in progress and a couple ready to report.

What message is the scheduler sending back when your machines request new work, and how many WUs in total do you have at the time? The only suggestion I have is that if you've left your BOINC setting "Computer is connected to the Internet about every X days" at 1, you might be in a situation where WUs are uploaded but not reported as complete. Setting that to 0 (or a small number like .1 if non-integers are accepted there) should allow results to be reported as soon as they're uploaded. (I think) Bear in mind that I'm just speculating how it 'should' work.

I'm afraid I can't really speak to the issue of running out of work, since all of my machines are 2 cores or less and all work on 3-4 projects at a time. Heh... the last time I had a core go 'idle' would have been back when SETI, CPDN and Predictor were the only running projects.

[edit]For what it's worth, I do agree that the limit should be set PER CORE, not per computer, but there's probably not a way to set it that way on the server[/edit]
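For anyone who hasn't poked at a BOINC server: the daily limit mentioned above is a scheduler-side setting, not something in the client. A minimal sketch of the relevant fragment of a project's config.xml, assuming the stock BOINC server options of that era (daily_result_quota is a real option in standard server releases; the value 700 here just mirrors the figure quoted above and is not confirmed for this project):

```xml
<config>
  <!-- Maximum results issued per host per day. As I understand the stock
       scheduler, this is applied per CPU, and a host's effective quota is
       reduced automatically if it keeps returning errors. -->
  <daily_result_quota>700</daily_result_quota>
</config>
```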
Joined: 29 Aug 07 Posts: 486 Credit: 576,548,171 RAC: 0
[edit]For what it's worth, I do agree that the limit should be set PER CORE, not per computer, but there's probably not a way to set it that way on the server[/edit]

There is a way it can be set per core, because it's been done at some of the other projects, but usually the projects just set it to max cache size or something like that; JMV probably knows what it's called exactly. It may also depend on the project's server version whether it can be done or not, but seeing the way this project is headed they'd probably set it to 1 per core or less anyway ... 0_0
Joined: 29 Jul 08 Posts: 6 Credit: 10,991,883 RAC: 0
Maybe I'm missing something, probably since I just switched over to this project yesterday, but all 10 machines I can see in my BoincView have at least 10 WUs waiting in cache, some up to 19 WUs. Are you guys sure there is still an 8 unit max? I'm not seeing it.
Joined: 9 Jul 08 Posts: 85 Credit: 44,842,651 RAC: 0
Maybe I'm missing something, probably since I just switched over to this project yesterday, but all 10 machines I can see in my BoincView have at least 10 WUs waiting in cache, some up to 19 WUs. Are you guys sure there is still an 8 unit max? I'm not seeing it.

Alyx, I'm guessing that change was made very shortly after you connected. In the future you shouldn't have more than 8 WUs on any computer.
Joined: 29 Aug 07 Posts: 486 Credit: 576,548,171 RAC: 0
Maybe I'm missing something, probably since I just switched over to this project yesterday, but all 10 machines I can see in my BoincView have at least 10 WUs waiting in cache, some up to 19 WUs. Are you guys sure there is still an 8 unit max? I'm not seeing it.

The latest update from one of my boxes:

Thu 31 Jul 2008 04:13:04 PM EDT|Milkyway@home|Message from server: (reached per-host limit of 8 tasks)

Maybe you're not running the WUs enough to run them down to 8 yet ... ???
Joined: 10 Jan 08 Posts: 7 Credit: 209,012,105 RAC: 11,957
Why do they change something and write on the HP...
How should we do this if no admin/engineer is on the forum to see the results of their change?? This is not very professional, and it demotivates the people who spend their CPU cycles and money... Huhu, somebody out there??
Joined: 29 Jul 08 Posts: 267 Credit: 188,848,188 RAC: 0
Sure it causes problems. Frequently my quads operate on just one or two of their cores. 20 was better; 8 is a waste of my time.
Joined: 27 Aug 07 Posts: 46 Credit: 8,529,766 RAC: 0
Sure it causes problems. Frequently my quads operate on just one or two of their cores. 20 was better; 8 is a waste of my time.

I don't quite understand how this is possible, unless your clients are not connected all the time. Why don't they ask for new work when they're running dry? Is it because communication is deferred?

BOINC.BE: For Belgians who love the smell of glowing red CPUs in the morning
Tutta55's Lair
Joined: 29 Aug 07 Posts: 486 Credit: 576,548,171 RAC: 0
Sure it causes problems. Frequently my quads operate on just one or two of their cores. 20 was better; 8 is a waste of my time.

The BOINC Manager doesn't really care one way or the other whether you have work or not, Tutta; it just goes by your settings and the server's settings. I've seen this wonderful phenomenon (NOT) myself, and not only at this project. Usually the call for work comes every 20 minutes at this project, but I have seen it a lot higher, especially since the longer WUs came out. So say the call-for-work period somehow got up to 2 hours, and the last load of work you got from the project were all short WUs: your box is going to be sitting idle for most of those 2 hours until the Manager calls for work again. Like I said, the BOINC Manager has a mind of its own and really doesn't care whether you have any work or not. That's the chance you take running just 1 project that will only give you a short supply of WUs ... :)
Joined: 27 Aug 07 Posts: 85 Credit: 405,705 RAC: 0
Sure it causes problems. Frequently my quads operate on just one or two of their cores. 20 was better; 8 is a waste of my time.

The communications deferral grows each time the project is asked for work and does not supply it. It shrinks back to nothing when the client actually gets work. This means that on this project you have to run with no queue and be always connected. I would suggest that you set your "Connect every X" to 0 and your "Extra work" to 0.05 if you have 4 or fewer cores, or 0 and 0 if you have 8 cores. The idea is to not ask unless you are fairly certain that you will actually get a task to work on.

The project administrators should be using the deadline to get work back quickly rather than reducing the number of tasks. Any tasks I get on my multi-project machines will be returned at about the deadline, as BOINC is not necessarily going to get around to them until they are almost late.

BOINC WIKI
Joined: 29 Aug 07 Posts: 486 Credit: 576,548,171 RAC: 0
The communications deferral grows each time the project is asked for work and does not supply it. It shrinks back to nothing when the client actually gets work.

Okay, then I think I know how it could get up to 2 or 3 hours before it will call for work again. The project starts out being asked for work every 20 minutes for a few rounds, and then that interval grows with each call. If you have a quad core and the first 4 WUs you're doing take 3-4 hours, the call interval could grow considerably during those 3-4 hours. So those first 4 WUs finally finish and you start the next 4, but the problem is those could be 3-5 minute ones, and all of a sudden you're out of work if MilkyWay is the only project you're running on the box, and the BOINC Manager won't call for more for maybe an hour or more because of the length of the first 4 WUs. That's just one possible case where you could run out and be sitting idle ... :)
Joined: 2 Jan 08 Posts: 123 Credit: 69,761,111 RAC: 1,584
Why can't this 8 per HOST be changed to 8 or 6 PER CORE, or a split decision with 10 to 12 per CPU? I have a de-facto quad: a dual-CPU computer with two dual-core Opterons, so 4 cores across 2 dual-core CPUs. Now I am not as fast as some of you, but even I am having trouble getting enough work, and this is not the only project running. I am currently running MilkyWay and Rosetta, with a small contribution from Superlink and Ralph when work is available. At the moment there is no Superlink or Ralph work, so MilkyWay and Rosetta have control. MilkyWay has a much larger resource share, but I now have a much larger work unit cache for Rosetta than I do for MilkyWay. MilkyWay should have a bigger work cache, as its work units are shorter: Rosetta is set to run 6-hour WUs, so MilkyWay should be able to do much more work with the shorter WUs (even the longest take less than 5 1/2 hours).

As an example from my logs: from 10.17 PM to 10.58 PM (41 minutes), my computer asked for work 7 times but only received 1 work unit in that time.
Joined: 29 Aug 07 Posts: 486 Credit: 576,548,171 RAC: 0
Why can't this 8 per HOST be changed to 8 or 6 PER CORE, or a split decision with 10 to 12 per CPU?

It can be, if the devs want to do it, but even though 10-12 would be an improvement, it should go back to 20, where at least nobody was complaining about it. Higher would be even better, but that's not likely to happen, so the best we can hope for is to get back to 20 again.
Joined: 29 Jul 08 Posts: 267 Credit: 188,848,188 RAC: 0
Sure it causes problems. Frequently my quads operate on just one or two of their cores. 20 was better; 8 is a waste of my time.

As an example, 1.22 was causing problems with Thunderbird (my email client): under 1.22 I couldn't click on a link in more than one email message, one after the other. Under SETI there was no problem; the SETI optimized app I was using didn't cause that, nor did the stock 5.27 SETI app.

Now, as to work: like I said, frequently a box will be down to just one or two WUs with no others to work on, and those two will be the ones the CPU is currently working on, while the other two Intel cores go hungry (QX6700, Q6600 & Q9300). And yes, they're all connected to the Net 24/7. I see that a lot, and there's nothing I can do about it, since it comes from the server. I've clicked on Update when I noticed the well about to go totally dry and it made no difference; it would just refuse to give me any more, saying "No work". I knew that once the last WU was uploaded the server would send down 8 more WUs and the cycle would repeat. This wasn't as much of a problem when 20 WUs were downloaded. Oh, and my OS is XP x64 SP2.
Joined: 29 Aug 07 Posts: 486 Credit: 576,548,171 RAC: 0
I'm beginning to kinda like this new 8 WU limit; it gives my boxes a chance to cool down when they run out of work, before they call for the next load of 2-3 minute WUs from the project.
Joined: 2 Jan 08 Posts: 123 Credit: 69,761,111 RAC: 1,584
I'm beginning to kinda like this new 8 WU limit; it gives my boxes a chance to cool down when they run out of work, before they call for the next load of 2-3 minute WUs from the project.

You're getting short work units?? Nearly all I seem to be getting are the long buggers. Not really a problem though, as they still pay all right. I will have to say that now it has been running a while, the 8 work unit limit seems to be working OK; I get enough to keep me going. I am also running other projects on my computers, so they don't get a chance to cool down.
Joined: 29 Aug 07 Posts: 486 Credit: 576,548,171 RAC: 0
I'm beginning to kinda like this new 8 WU limit; it gives my boxes a chance to cool down when they run out of work, before they call for the next load of 2-3 minute WUs from the project.

I'm getting a mixture of short & long WUs. I only made that post because yesterday I noticed one box running only 2 of 4 cores; I guess it was getting a lot of short ones at the moment and was spitting them out faster than it could get them. As far as running another project goes, why would anyone want to do that? With the great communications with the devs here and their responsiveness to the participants' concerns, I don't see any need to run another project. Besides, it helps the CPU thermal paste to set better if the CPU cools down now & then ... LOL ... I've got a bridge I'll sell you too if you believe all that malarkey ... :)
Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0
I keep looking and really don't see any option for a per-core WU limit. There's basically one option in the config file, max_wus_in_progress, which says how many workunits per machine are allowed. I've sent an email to Dave about what I can do about this. I'm going to try to increase max_wus_in_progress back to 20, lower the deadline to a day for the workunits, and see if this helps you guys out any.
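For reference, a minimal sketch of what that change could look like in a stock BOINC server config.xml; max_wus_in_progress is the option named above, while the surrounding layout and the deadline note are assumptions about a standard server setup rather than anything confirmed in this thread:

```xml
<boinc>
  <config>
    <!-- Per-host limit on tasks in progress, as discussed in this thread
         (currently 8; proposed 20). -->
    <max_wus_in_progress>20</max_wus_in_progress>
  </config>
</boinc>
```

The deadline itself isn't set in config.xml; in the stock server it's the per-workunit delay_bound, supplied by whatever work generator or create_work call a project uses (for example, --delay_bound 86400 for a one-day deadline).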