New WU Progress Bar

MindCrime

Joined: 5 Mar 14
Posts: 24
Credit: 500,964,006
RAC: 0
Message 65817 - Posted: 15 Nov 2016, 2:24:08 UTC
Last modified: 15 Nov 2016, 2:26:20 UTC

If you pay attention to the progress bar of the new WUs, you might notice it resets. Don't fret; I have a theory, and I actually like the way it works.

My theory is that the new WUs are broken down into 5 parts. It doesn't seem like they're 5x longer than the last WU setup, but I have a unique configuration, so it's hard to compare.

If you watch a WU run, it climbs up to 20% and resets, then up to 40% and resets, then 60%, then 80%, and at the end of the fifth pass, 100%.

Each time it resets, the next pass climbs proportionally faster. If I run a single WU at a time on a Tahiti GPU (1/4-rate FP64 relative to FP32), I can see the GPU load drop and reset about every 9 seconds, and it stays down for a good 2-3 seconds. I doubled the concurrency and the load never falls below 99%. I believe this will greatly improve performance on cards that otherwise spend a relatively significant share of their time cycling (high-performing FP64 cards, i.e. Titan*, Tesla*, FirePro*, Tahiti, Cayman, and Cypress GPUs).
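
For what it's worth, here's a rough back-of-the-envelope sketch of why doubling the concurrency hides those gaps. It's just a toy Python model using my ~9 s compute / ~2-3 s idle numbers, and it assumes the idle gaps of concurrent tasks don't line up; nothing official.

# Toy model (my own estimate, not anything from the project):
# GPU load when running N of these bundled WUs at once, given
# ~9 s of compute per pass and ~2-3 s of idle between passes.
compute_s = 9.0   # observed compute time per pass
idle_s = 2.5      # observed idle gap between passes (middle of 2-3 s)

idle_frac = idle_s / (compute_s + idle_s)

for n_tasks in (1, 2, 3):
    # the GPU only sits idle when every concurrent task is in its gap
    load = 1.0 - idle_frac ** n_tasks
    print(f"{n_tasks} concurrent WU(s): ~{load:.0%} GPU load")

That works out to roughly 78% load with one task and 95%+ with two, which is in the ballpark of what I'm seeing, though the real scheduling is surely messier. (For anyone wanting to try it, per-GPU concurrency is normally set with the gpu_usage value in an app_config.xml for the project.)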

I like the new progress bar behavior; it's a bit off-putting at first, but the WUs run well and you get a bit of information on how they're running. Can anyone confirm or deny that the new WUs are just 5x the old? I'll be back to comment on credit/performance observations.
ID: 65817
MindCrime

Joined: 5 Mar 14
Posts: 24
Credit: 500,964,006
RAC: 0
Message 65818 - Posted: 15 Nov 2016, 2:34:24 UTC - in response to Message 65817.  

Well, I'm back already. I guess I could have just compared the credit for an old WU against a new one.

New WU / Old WU
133.66/26.73 = 5.00037...

I now presume the new WUs are just 5 old WUs stitched together, run consecutively. I believe this is an improvement, but I also believe it could be better if the WU itself were 5x longer rather than made of numerous iterations. That said, I have no idea about the science/programming going on, so thanks for the updates to the server and the WU changes!
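
For anyone who wants to reproduce the arithmetic, it's just this (plain Python, using the credit values above):

# Credit comparison: new bundled WU vs. old single WU (values from above).
new_credit = 133.66
old_credit = 26.73

print(new_credit / old_credit)  # ~5.00037, i.e. almost exactly 5x
print(new_credit / 5)           # 26.732 credit per bundled part
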
ID: 65818
mmonnin

Joined: 2 Oct 16
Posts: 162
Credit: 1,004,163,109
RAC: 887
Message 65826 - Posted: 15 Nov 2016, 11:01:45 UTC

Yes, there are 5 WUs bundled together, and the progress bar goes back to 0 after each one. Jake is taking a look at making it progress through without going back to 0%.
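
Presumably that just means reporting the fraction done across the whole bundle instead of within the current sub-task, something along these lines (a hypothetical Python sketch with made-up names, not the project's actual code):

# Hypothetical sketch: report progress over the whole 5-part bundle
# so the bar climbs monotonically instead of dropping back to 0%.
N_SUBTASKS = 5  # the new WUs bundle 5 of the old ones

def overall_fraction(completed_subtasks, current_fraction):
    # completed_subtasks: how many of the 5 parts are already done
    # current_fraction:   0..1 progress within the part running now
    return (completed_subtasks + current_fraction) / N_SUBTASKS

print(overall_fraction(2, 0.5))  # halfway through part 3 -> 0.5 overall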

http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=4052
ID: 65826
Wrend

Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 65850 - Posted: 16 Nov 2016, 4:18:52 UTC
Last modified: 16 Nov 2016, 4:48:49 UTC

I like that it goes back to 0; knowing that it is in effect running 5 WUs in 1, it lets me monitor the progress of each part of the WU package.

Currently I'm running 5 of the new 1.43 WUs in parallel on each of my 2 Titan Black cards, 10 in total. Crunching time per new WU is 3:33 (93s) to 3:36 (96s), loading each GPU up to about 78%, so total productivity does seem to have improved for me as well.

Very nice job, it seems, and bundling these is probably easier on both the servers and most hosts, with less downtime and communication overhead overall.
ID: 65850
Wrend

Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 65858 - Posted: 16 Nov 2016, 14:35:59 UTC - in response to Message 65850.  

Sorry, the times in seconds are not correct. I was thinking 1 minute instead of 3 for some reason, so the times in seconds posted should have been 213s to 216s.
ID: 65858
Daedalus

Joined: 30 Dec 09
Posts: 21
Credit: 75,540,465
RAC: 0
Message 65938 - Posted: 21 Nov 2016, 11:59:56 UTC

Anyway, I have far fewer tasks pending validation.
ID: 65938
