Message boards :
Number crunching :
Server Problems? - currently the U/L one
Message board moderation
Send message Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0 |
I don't think many will appear until the next group is added. The total has gone from 36,000 to 17,700 WUs. You are right there, banditwolf. The server status page, at 07:48:11 UTC, reports the stock of results in progress now down to 14,763. Although I have been looking for, and waiting on, the next batch of WUs, to give MW crunching priority here, I think the Admins intend to run down the results in progress first. That way they can release a new, tweaked, faster client and a new WU aiming at more science and accuracy (a longer WU)? |
Send message Joined: 27 Nov 07 Posts: 39 Credit: 1,207,109 RAC: 0 |
Hi all! Back to Milkyway, using Milksop's opt. app (thank you!). All went well (except no new work, of course), until I got this message:
02.11.2008 11:01:16|Milkyway@home|Fetching scheduler list
02.11.2008 11:01:21|Milkyway@home|Deferring communication for 1 days 0 hr 0 min 0 sec
02.11.2008 11:01:21|Milkyway@home|Reason: 4 consecutive failures fetching scheduler list
Tried detaching and reattaching, to no avail. This happened on one of my comps, while my second comp (same model) works just fine. Any hints on this problem (due to the new app??), or will it work out by itself??? Thanks for any info in advance! All the best, Kurt |
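The log above shows the BOINC client's retry behaviour: after repeated failures contacting the scheduler, it defers communication for progressively longer. The real client uses project settings and randomized jitter, so the sketch below is only an illustration of the general exponential-backoff pattern (the base and cap values are made up, not BOINC's):

```python
# Illustrative sketch (not BOINC's actual code): exponential backoff
# of the kind the client log above reports. Each consecutive failure
# roughly doubles the deferral, capped at a maximum (here 24 hours).
def deferral_seconds(consecutive_failures, base=60, cap=24 * 3600):
    """Return how long to wait before retrying after N consecutive failures."""
    if consecutive_failures <= 0:
        return 0
    return min(base * 2 ** (consecutive_failures - 1), cap)

for n in range(1, 6):
    print(n, deferral_seconds(n))  # deferral grows: 60, 120, 240, 480, 960
```

The point of the cap is what Kurt ran into: once enough failures accumulate, the client can sit at the maximum deferral (a full day here) even after the server recovers, which is why the problem seems to "work itself out" only later.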
Send message Joined: 26 Sep 08 Posts: 11 Credit: 17,597 RAC: 0 |
That's most likely caused by the servers being overloaded |
Send message Joined: 27 Nov 07 Posts: 39 Credit: 1,207,109 RAC: 0 |
That's most likely caused by the servers being overloaded Thanks for your answer... then I'll just sit it out! Kurt |
Send message Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0 |
I have been keeping an eye on the server status to see if any new results ready to send are available. None have been there for the last 24-36 hours, and the results-in-progress numbers have dropped from over 24K to the current 12,644 (as of 2 Nov 2008 13:16:09 UTC). This is the reason I made my comment two posts below this one. As there are no WUs to crunch, we have no choice but to sit and wait. I am using the time to redirect my rigs to other distributed projects (mainly Einstein). |
Send message Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0 |
I don't think many will appear untill the next group is added. The total has gone from 36000 to 17700 wu's. Actually my guess now is... no new work until the "official new app" is released, sometime in the next hundred years. Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it. |
Send message Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0 |
I don't think many will appear untill the next group is added. The total has gone from 36000 to 17700 wu's. Well, all 6 I picked up a little bit ago were resends... resultid 47150759 Created 2 Nov 2008 13:32:39 UTC Sent 2 Nov 2008 13:32:43 UTC You'll note that the creation date was a mere 3 minutes and 9 seconds after your message that I'm replying to, but the original replication had just timed out, so it's not a "new" task... |
Send message Joined: 5 Feb 08 Posts: 236 Credit: 49,648 RAC: 0 |
We are paying attention. We are always paying attention. ;) Dave Przybylo MilkyWay@home Developer Department of Computer Science Rensselaer Polytechnic Institute |
Send message Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0 |
We are paying attention. We are always paying attention. ;) I'm too poor to pay attention... |
Send message Joined: 23 Nov 07 Posts: 23 Credit: 1,181,270 RAC: 0 |
Any that I've gotten in the last two days have been resends, and by looking at the other results from those computers it looks like they just suspended MW and let them time out. Why not abort them or detach??? Seems some are boycotting the project. |
Send message Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0 |
We are paying attention. We are always paying attention. ;) Now that is fantastic, considering Dave came in yesterday to sort the servers and it is still the weekend (Sunday). On another subject: prior to the Admins releasing the new, faster MW client, they will need to build a reserve of WUs and take into account the speed at which WUs were mopped up by those that had installed Milksop's fast client. As all crunchers update to the new official faster MW client, the Admins can adjust/reduce the credit given once this is known and tested. That way they can get some cross-project conformance and prevent Dr Davis A hounding them. |
Send message Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0 |
I have been keeping an inconsistent eye on the Server Status (results ready to send) to see if they are keeping up with the WU demand from those of us using Milksop's fast client. Since Dave brought the servers back online for new work, yesterday afternoon/evening, these have coped well so far. There is not a lot of "new work" buffer present, but it is consistently about 350 to 550 WUs available (at least whenever I look). I am aware Dave has plans to bring the servers down for some tweaking and/or adjustments prior to the release of the official MW version of the fast client, possibly with more, or slightly improved, science. I am not sure whether Dave is watching the new-work server demand to see how it keeps up, as a measure of the demand when the new MW client is released. Subject to the Admin/Dave's plans, and client release timing, it may be appropriate to let a couple of days (48 hours) pass to see how the servers cope. |
Send message Joined: 5 Feb 08 Posts: 236 Credit: 49,648 RAC: 0 |
Currently everything is run off of one server. This may have to change though since generating the work now takes up an entire CPU (out of 2) of the system consistently. Dave Przybylo MilkyWay@home Developer Department of Computer Science Rensselaer Polytechnic Institute |
Send message Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0 |
Currently everything is run off of one server. This may have to change though since generating the work now takes up an entire CPU (out of 2) of the system consistently. Longer running work was mentioned, but my system barely registered a difference between the gs_372 and gs_620 tasks... Truly longer running work would lighten the load... |
Send message Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0 |
Currently everything is run off of one server. This may have to change though since generating the work now takes up an entire CPU (out of 2) of the system consistently. You are probably correct, Brian. But Dave's problem will be made worse when the MW fast stock client is released, hence the need for another server (a dedicated work-generation one). As regards the longer WU, it needs to take into account any additional science the project can usefully gain as a result. |
Send message Joined: 30 Oct 08 Posts: 32 Credit: 60,528 RAC: 0 |
Looks like our boxes were too much for the server to handle again. Not getting new work and the server status page shows only 38 WUs "ready to send" (which would last less than two hours on my box, so of course there's no way that's enough for even a few crunchers demanding work)... |
Send message Joined: 20 Sep 08 Posts: 1391 Credit: 203,563,566 RAC: 0 |
Message from server: "server error: can't attach shared memory". Started 21:57 UK time. Don't drink water, that's the stuff that rusts pipes |
Send message Joined: 30 Oct 08 Posts: 32 Credit: 60,528 RAC: 0 |
Looks like the feeder is not running. |
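These two reports fit together: on a BOINC server the feeder daemon keeps a shared-memory segment stocked with ready jobs, and the scheduler attaches to that segment to hand work out. If the feeder is down, the segment is absent and the scheduler reports exactly the "can't attach shared memory" error quoted above. The sketch below illustrates the failure mode in Python (BOINC itself is C++ using SysV shared memory; the segment name here is hypothetical):

```python
# Illustrative sketch only: a consumer trying to attach to a segment
# that a producer (the feeder, in BOINC's case) was supposed to create.
from multiprocessing import shared_memory

def attach_job_cache(name="hypothetical_boinc_jobs"):
    """Attach to an existing segment; return None if it doesn't exist."""
    try:
        return shared_memory.SharedMemory(name=name)  # attach, never create
    except FileNotFoundError:
        return None  # producer not running -> nothing to attach to

seg = attach_job_cache()
print("server error: can't attach shared memory" if seg is None else "attached")
```

The practical diagnosis matches the thread: restart the feeder and the scheduler errors stop, with no client-side action needed.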
Send message Joined: 19 Oct 08 Posts: 19 Credit: 1,463,876 RAC: 0 |
Things are back to normal, except credit granted is half of what it was an hour ago. |
©2024 Astroinformatics Group