Message boards :
Number crunching :
work availability
Send message Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0 |
I think I found the reason why work availability has been so poor. The transitioner was badly backed up (probably from running 2 apps instead of 1), so while both assimilators were generating work, it wasn't being transitioned to a place where you guys could download it fast enough, which only slowed the transitioner down further :P I took a few steps to speed that process up, so let me know how the work is flowing now. |
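For anyone curious why a backed-up transitioner starves downloads even while work is being generated, here is a minimal sketch. It is a simplified, hypothetical model of the BOINC server pipeline (generators → transitioner → downloadable queue), not the project's actual code, and all rates are invented for illustration: if the transitioner's throughput is below the generators' combined output, the backlog grows without bound and hosts see "No work sent" even though plenty of work exists upstream.

```python
# Simplified model of a BOINC-style work pipeline (hypothetical rates).
# Work must pass through the transitioner before it becomes downloadable;
# a transitioner slower than the combined generators builds a backlog.

def simulate(hours, gen_rate, transitioner_rate, demand_rate):
    """All rates are work units per hour. Returns (backlog, ready_to_send)."""
    untransitioned = 0   # generated, but not yet visible to the feeder
    ready_to_send = 0    # transitioned, downloadable by hosts
    for _ in range(hours):
        untransitioned += gen_rate
        moved = min(untransitioned, transitioner_rate)
        untransitioned -= moved
        ready_to_send += moved
        sent = min(ready_to_send, demand_rate)
        ready_to_send -= sent
    return untransitioned, ready_to_send

# Two apps generating 1000 WU/h each, transitioner managing only 1500/h,
# hosts asking for 1800/h:
backlog, ready = simulate(hours=24, gen_rate=2000,
                          transitioner_rate=1500, demand_rate=1800)
print(backlog, ready)  # backlog grows 500 WU/h; downloadable queue stays empty
```

Under these assumed rates the untransitioned backlog grows by 500 WU every hour while the downloadable queue is drained as fast as it fills, which matches the symptom described above.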
Send message Joined: 8 Nov 08 Posts: 178 Credit: 6,140,854 RAC: 0 |
I took a few steps to speed that process up, so let me know how the work is flowing now. I'm still getting the "No Work Sent" message with a custom application. As far as I know, my app_info is fine. Does this change affect the new app or just the old one? |
Send message Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0 |
I took a few steps to speed that process up, so let me know how the work is flowing now. The assimilator is still generating them... looks like they're being snagged up as fast as they get out. Hopefully once the transitioner catches back up this will go away. |
Send message Joined: 10 Aug 08 Posts: 218 Credit: 41,846,854 RAC: 0 |
The assimilator is still generating them... looks like they're being snagged up as fast as they get out. Hopefully once the transitioner catches back up this will go away. Just wanted to say Happy Thanksgiving to you Travis. I can think of a lot of places I would rather be than kicking the server so we can get work. Hope you and yours get to have some quality time this afternoon! Arion |
Send message Joined: 27 Aug 07 Posts: 647 Credit: 27,592,547 RAC: 0 |
Just wanted to say Happy Thanksgiving to you Travis. I can think of a lot of places I would rather be than kicking the server so we can get work. Hope you and yours get to have some quality time this afternoon! I have to agree... So Happy Thanksgiving to the other side of the pond! ;-))) Lovely greetings, Cori |
Send message Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0 |
I would like to echo Arion and Cori's sentiments. Have a good Thanksgiving Day |
Send message Joined: 25 Nov 08 Posts: 2 Credit: 21,223 RAC: 0 |
Hi, and Happy Thanksgiving to all my American friends. Just wondering about WU availability. I'm still getting "No work sent", even though there are over 5k WUs available. Just got one on a forced update, and that's it. TIA. |
Send message Joined: 7 Jun 08 Posts: 464 Credit: 56,639,936 RAC: 0 |
One thing to keep in mind about the server status is that it is not updated in real time. Also, even if there appears to be a lot of work available when you look, that can go poof in a hurry! ;-) Alinator |
Send message Joined: 1 Sep 08 Posts: 520 Credit: 302,528,196 RAC: 276 |
Realizing how difficult it is to keep queues filled, and recognizing that there is something of an artificial limit on credit per CPU per day, I am a bit confused as to how any given computer can still generate a RAC in excess of 10,000 these days. Heck, there is one dual core AMD 3800 showing a current RAC of nearly 25,000. There are also a handful of folks with RACs in excess of 100,000, and with the current constraints on hourly credits and available work, I really wonder how that happens. I mean, during the gravy days I actually did get 100K credits in one day, but these days I have to push things to get 20K. Then again, I don't run single-project workstations -- never have. At one point about a month ago, MilkyWay was generating over 80% of my BOINC RAC; these days it is under 40%. The reduction is due to the change in credit schemes (and hopefully a more rational one will emerge which doesn't punish faster CPUs but also doesn't provide excess credits to the point of waking up the DA beast), and to the difficulty in keeping work on systems (those less-than-2-hour queue limits need to get fixed). |
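One thing worth keeping in mind when reading RAC numbers: BOINC's recent average credit is an exponentially weighted average with a half-life of one week, so at steady state a host's RAC converges to its average granted credit per day. A minimal sketch (daily granting assumed for simplicity; the real client updates continuously) shows that a RAC of 25,000 implies roughly 25,000 credits earned per day, every day:

```python
import math

# BOINC's RAC uses an exponential average with a one-week half-life.
# Simplified here to one update per day; the steady-state result is
# the same: RAC converges to average credit granted per day.
HALF_LIFE_DAYS = 7.0
DECAY = math.exp(-math.log(2) / HALF_LIFE_DAYS)  # per-day decay factor

def update_rac(rac, credit_today):
    """One day's RAC update."""
    return rac * DECAY + credit_today * (1 - DECAY)

rac = 0.0
for _ in range(90):          # 90 days of a constant 25,000 credits/day
    rac = update_rac(rac, 25_000)
print(round(rac))            # converges toward 25000
```

So those five-digit RACs aren't a one-off spike; they require that level of daily output sustained for weeks.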
Send message Joined: 7 Jun 08 Posts: 464 Credit: 56,639,936 RAC: 0 |
Hmmm... Agreed, the numbers don't appear to reconcile with what I can see of current performance for the current #1 host. I managed to catch some outcome data for it before it got purged, and it was showing about 119 credits granted for runtimes right around 4000 seconds. It would seem that to support a RAC of around 34,500 you would have to run about 290 tasks per day. At 4 ksec each, there don't seem to be enough seconds in the day to support that, even for a quad!? ;-) I just took a look at the Computer Summary page for it and it's now showing a RAC of over 35,000. Of course, with insta-purge active it's pretty hard to get any clear indication of what might be going on from our POV. :-( I can't even use my own hosts to help me figure this out, since I run them at equal shares with at least 2 other projects, and the indications show what I expect to see for RAC given that the basis is high even with the compensation they use here. Alinator |
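The arithmetic in that post checks out. Using the figures quoted above (~119 credits per task, ~4000 s runtime, both taken from the post, not independently verified), the implied core count is:

```python
# Sanity check of the numbers in the post above (per-task figures
# are as reported there: ~119 credits per task, ~4000 s runtime).
credits_per_task = 119
runtime_s = 4000
target_rac = 34_500          # ~ credits per day at steady state

tasks_per_day = target_rac / credits_per_task
cpu_seconds_needed = tasks_per_day * runtime_s
core_days = cpu_seconds_needed / 86_400  # 86,400 s in a day

print(round(tasks_per_day))  # ~290 tasks/day
print(round(core_days, 1))   # ~13.4 cores running flat out
```

Roughly 13 to 14 cores running flat out, well beyond a quad, which is what prompts the spoofing speculation below.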
Send message Joined: 1 Sep 08 Posts: 520 Credit: 302,528,196 RAC: 276 |
Right, even if you could get all the workunits downloaded and processed (a serious constraint given work unit availability), there are two other constraints in the mix: first, as you noted, how many work units are needed for those 5-digit individual workstation RAC numbers, and second (according to what Travis posted regarding credits), the project-based 'governor' on maximum credits per hour -- such that if you are running a 3 GHz CPU full tilt, you won't get any more credits than if you were running a 1 GHz CPU full tilt. So, there are at least two tactics those ultra-high RAC numbers for individual workstations suggest. First, some sort of script which pounds the server to keep the workstation filled with work units (something I suspect could be cobbled together without too much effort), and *second*, some capability to *spoof* the server into believing the work is being generated by some 8-or-more-core computer so as not to bounce against the 'governor' which Travis has indicated is in place. Not a big deal to me; I typically exclude outlier RAC numbers from my view of the world. It's just something I'm wondering about as I struggle to manually fill queues. Hmmm... |
Send message Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0 |
The Server Status page, at 09:32 UTC, says there are 4,089 WUs ready to send, but they are not available to crunchers as new downloads. Yet all the server daemon statuses show fine. The pending credit is as high as I have ever seen it. Something is preventing this work from reaching rigs seeking more work. I guess one of the server scripts has fallen over and needs a kick? Am I the only person with no MW work to crunch? I would guess not! |
Send message Joined: 21 Nov 08 Posts: 90 Credit: 2,601 RAC: 0 |
The Server Status page, at 09:32 UTC, says there are 4,089 WUs ready to send, but they are not available to crunchers as new downloads. Yet all the server daemon statuses show fine. Well, I'm getting handfuls of stripe nn and test nn work, so perhaps your prefs aren't correct? |
Send message Joined: 13 Jan 08 Posts: 19 Credit: 820,482 RAC: 0 |
No problems here; both hosts now on NNT to process present downloads, both on 20-WU-per-CPU limits. Have not looked at the laptop yet. Activity: always run. Network: always available. Connect interval: 1.00 days. Michael |
Send message Joined: 15 Aug 08 Posts: 163 Credit: 3,876,869 RAC: 0 |
No problems with downloads here. For the moment... Best regards. Logan. BOINC FAQ Service (Ahora, también disponible en Español/Now also available in Spanish) |
Send message Joined: 9 Nov 07 Posts: 151 Credit: 8,391,608 RAC: 0 |
Problems here............
29/11/2008 10:27:26|Milkyway@home|Sending scheduler request: Requested by user. Requesting 488992 seconds of work, reporting 0 completed tasks
29/11/2008 10:27:31|Milkyway@home|Scheduler request completed: got 0 new tasks
29/11/2008 10:27:31|Milkyway@home|Message from server: No work sent
29/11/2008 10:29:41|Milkyway@home|Sending scheduler request: Requested by user. Requesting 489294 seconds of work, reporting 0 completed tasks
29/11/2008 10:29:46|Milkyway@home|Scheduler request completed: got 0 new tasks
29/11/2008 10:29:46|Milkyway@home|Message from server: No work sent
29/11/2008 10:30:42|Milkyway@home|Sending scheduler request: Requested by user. Requesting 489434 seconds of work, reporting 0 completed tasks
29/11/2008 10:30:47|Milkyway@home|Scheduler request completed: got 0 new tasks
29/11/2008 10:30:47|Milkyway@home|Message from server: No work sent
29/11/2008 10:34:07|Milkyway@home|Sending scheduler request: Requested by user. Requesting 489917 seconds of work, reporting 0 completed tasks
29/11/2008 10:34:12|Milkyway@home|Scheduler request completed: got 0 new tasks
29/11/2008 10:34:12|Milkyway@home|Message from server: No work sent |
Send message Joined: 16 Nov 07 Posts: 23 Credit: 4,774,710 RAC: 0 |
For a couple of days now my (few) comps have been getting only a handful of WUs every now and then. Are there more available with the test app? If so, I'm going to switch over to that one. |
Send message Joined: 2 Jan 08 Posts: 123 Credit: 69,815,770 RAC: 723 |
Well, with over 7,500 available work units I am getting the message 'No work sent', even though I have just 3 work units on one 4-core machine and 5 work units on another 4-core machine (the one with 5 also got the message 'Not requesting new work', which I was requesting and still want). Perhaps they are all 'test' work units, which I am not processing at the moment? |
Send message Joined: 22 Mar 08 Posts: 90 Credit: 501,728 RAC: 0 |
Well, with over 7,500 available work units I am getting the message 'No work sent', even though I have just 3 work units on one 4-core machine and 5 work units on another 4-core machine (the one with 5 also got the message 'Not requesting new work', which I was requesting and still want). Over 9k ready to send now, but the transitioner backlog is at 3 hrs and pendings are going through the roof! A clear conscience is usually the sign of a bad memory |
©2024 Astroinformatics Group