Message boards : Number crunching : Quads waiting on the Server
Joined: 30 Mar 08 · Posts: 50 · Credit: 11,593,755 · RAC: 0
I just attached a new build to the project. It's a Q6600 running at 3.6 GHz. While I appreciate the project's quota of 20 WUs, this rig rips through the 20 in less time than the scheduler allows to refill my cache. Is there a machine-to-machine learning curve in progress, or can I expect this dead air between the 20-packs? My other option is to throttle back to pace with the server. What's the skinny?

Voltron
Joined: 5 Feb 08 · Posts: 236 · Credit: 49,648 · RAC: 0
> I just attached a new build to the project. It's a Q6600 running at 3.6 GHz. [...] this rig rips through the 20 in less time than the scheduler allows to refill my cache.

Well, we're trying to build a scheduler that allows for WUs per core. This was supposed to be done in the upgrade; however, some files did not get upgraded properly and reverted to the old version. We'll keep you updated on when we get it. It shouldn't be too long. Until then, you're free to throttle back and place your resources on other projects where they will be used.

Dave Przybylo
MilkyWay@home Developer
Department of Computer Science
Rensselaer Polytechnic Institute
Joined: 8 Oct 07 · Posts: 289 · Credit: 3,690,838 · RAC: 0
> This rig rips through the 20 in less time than the scheduler allows to refill my cache. [...] can I expect this dead air between the 20-packs?

Dead air... consider the behaviour of an 8- or 16-core machine here. Most people actually run a second project so there is no dead air; how well you tweak your BOINC manager determines how little that second project ends up crunching.
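For anyone wanting to try the second-project approach, here's a minimal sketch of the client-side cache settings involved, assuming a stock BOINC client that reads global_prefs_override.xml from its data directory; the values are illustrative, not recommendations:

```xml
<!-- global_prefs_override.xml in the BOINC data directory.
     A small work buffer stops the backup project from stockpiling
     work it will rarely get to run. Values are illustrative. -->
<global_preferences>
    <work_buf_min_days>0.1</work_buf_min_days>
    <work_buf_additional_days>0.25</work_buf_additional_days>
</global_preferences>
```

Giving the second project a low resource share in its website preferences then means it mostly just soaks up the dead air between 20-packs.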
Joined: 21 Dec 07 · Posts: 69 · Credit: 7,048,412 · RAC: 0
This question keeps rearing its head. But if you can't set the server up to issue 10, 20, or whatever number of work units per CORE, why not reduce (or eliminate) the "deferring communication for 20 minutes" delay, as I mentioned in a post of 20 January? Set it to something like 5 minutes, perhaps, rather than the current 20. It's a setting in the server's BOINC configuration. See also this post (quoted below):

> I'm no expert on the server-side options of BOINC, but a search of the BOINC site shows some seemingly relevant config options at http://boinc.berkeley.edu/trac/wiki/ProjectOptions.

Join the #1 Aussie Alliance on MilkyWay!
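For the record, a sketch of what those options might look like in a project's config.xml, based on the ProjectOptions page; whether this server version actually honours them is exactly what's debated below, and the values here are illustrative:

```xml
<!-- Excerpt from a BOINC project's config.xml (server side).
     min_sendwork_interval is the "deferring communication" delay
     in seconds; daily_result_quota caps results per host per day.
     Illustrative values, not MilkyWay's actual settings. -->
<config>
    <min_sendwork_interval>300</min_sendwork_interval> <!-- 5 min rather than 20 -->
    <daily_result_quota>200</daily_result_quota>
</config>
```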
Joined: 8 Oct 07 · Posts: 289 · Credit: 3,690,838 · RAC: 0
Yoda - If I remember right, Travis tried that change on the old server version and said in a post (not going to look) that it didn't work or change anything... so either the old code or something else was overriding it. Might be worth trying again :)
Joined: 30 Aug 07 · Posts: 2046 · Credit: 26,480 · RAC: 0
> If I remember right, Travis tried that change on the old server version and said in a post that it didn't work or change anything... so either the old code or something else was overriding it. Might be worth trying again :)

The new server SHOULD have fixed the communication deferral problem. I take it that it hasn't :( Hopefully we're going to be swapping to a WU-per-core limit when everything is updated. When we do that, I think we'll actually drop the limit down to maybe 5-10 per core (which should be enough to keep machines full), and then we'll get better search results because most results will have a faster turnaround.
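If the per-core limit lands, the likely knob is max_wus_in_progress, which the BOINC ProjectOptions page documents as a per-CPU cap on jobs in progress; a sketch assuming the 5-per-core figure floated here (whether MilkyWay's server version supports it yet is the open question):

```xml
<!-- Excerpt from config.xml: limit jobs in progress per CPU, so a
     quad holds 4 x 5 = 20 WUs and an 8-core holds 40. The value 5
     is the figure suggested in this thread, not a project setting. -->
<config>
    <max_wus_in_progress>5</max_wus_in_progress>
</config>
```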
Joined: 8 Oct 07 · Posts: 289 · Credit: 3,690,838 · RAC: 0
> If I remember right, Travis tried that change on the old server version... Might be worth trying again :)

My opinion, from the last few months' discussion, is that 5 per core and a 10-minute RPC interval would be ideal. No new host should run dry that way with current WU lengths (a quad would have to burn through its 4 x 5 = 20 cached WUs in under 10 minutes, i.e. average under 2 minutes per WU per core), and if one did, the numbers could be tweaked slightly. This is probably about optimal for both project and user :)

How soon until we start working longer units?
Joined: 30 Mar 08 · Posts: 50 · Credit: 11,593,755 · RAC: 0
Thanks for the feedback. Sounds like I eat the dead air until RPI diddles the code. There is some compensation (cold): I have an E4500 on a pathetic Biostar board and they do not make nice together, so this rig runs cold. Time for a new (refurb) mobo. I appreciate your posts.

Voltron
Joined: 30 Mar 08 · Posts: 50 · Credit: 11,593,755 · RAC: 0
> Thanks for the feedback. Sounds like I eat the dead air until RPI diddles the code.

I throttled back the Q6600 to 2.8 GHz, and that roughly paces the server's 20-pack responses. This is the processor involved in the electrical-fire incident; luckily, the only component damaged was the motherboard. I am still in the process of finding a new one. The most recent replacement (non-incendiary) was DOA, so I switched the proc into one of my dual-core rigs. It was a DFI 965P refurb from NE; it would spin up, but no POST. The BIOS was not ready for the show.

Voltron