Welcome to MilkyWay@home

Finally getting new tasks only seconds after running out. May not be worth the hassle.

Message boards : Number crunching : Finally getting new tasks only seconds after running out. May not be worth the hassle.


Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 69710 - Posted: 14 Apr 2020, 17:30:52 UTC - in response to Message 69709.  
Last modified: 14 Apr 2020, 17:34:23 UTC

I am running Milkyway@home Separation on my GPU....

The "NVIDIA GPU task request deferred for 00:0x:xx" in conjunction with "NVIDIA GPU task request deferral interval for 00:10:00" is getting painfully problematic as it prevents me from downloading new work. Whenever I return tasks to the server this deferral gets reset and my computer gets no new tasks. On my GTX 1660 Ti I have gone to running four tasks simultaneously to increase the time it takes to send back to the server, now at around 12 minutes, but this is still not enough: too many results are still returned and the deferral is yet again postponed. I have even tried settings to grab a larger number of tasks, but instead of helping it ended up messing up my other projects.

I don't want to download any special version of BOINC as this is not my primary project. Maybe I should be running more than four tasks? ...although the four are already causing havoc with CPU loads... In any case, MW's failure to keep my computer continually loaded is a boon to my other projects, which are more than happy to fill a 10-minute gap with hours of work.
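For reference, running several tasks per GPU as described above is normally done with a BOINC app_config.xml in the project directory. A minimal sketch for four concurrent tasks; the app name "milkyway" is an assumption here, so verify it against the <app> entries in your client_state.xml:

```xml
<!-- Hypothetical app_config.xml sketch: four Separation tasks share one GPU.
     Place in the Milkyway project directory, then use
     Options -> Read config files (or restart the client). -->
<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <gpu_usage>0.25</gpu_usage>  <!-- each task claims a quarter of the GPU -->
      <cpu_usage>0.25</cpu_usage>  <!-- fraction of a CPU core reserved per task -->
    </gpu_versions>
  </app>
</app_config>
```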


Yes, I ran Einstein tasks by setting its resources to 0.0. That caused Milkyway to get its full load of 900 work units and allowed Einstein to get a few when Milkyway was taking its 10-minute siesta. However, Einstein ran poorly on my AMD boards and was a waste of resources. Your 1660 Ti will handle Einstein or GPUGrid nicely, unlike my S9000-series boards.


Not all AMD cards suck at Einstein. I have four 280Xs which like either project. I bought them because they produced the most double-precision flops per £, and they're almost the best at single-precision flops per £. If it's FirePro S9000 boards you have, they should be just as happy with Einstein as mine are. Both our cards have a 1:4 ratio of DP:SP.
ID: 69710
Cautilus
Joined: 29 Jul 14
Posts: 19
Credit: 3,451,802,406
RAC: 54
Message 69712 - Posted: 15 Apr 2020, 7:20:21 UTC - in response to Message 69710.  
Last modified: 15 Apr 2020, 7:22:41 UTC


Not all AMD cards suck at Einstein. I have four 280Xs which like either project. I bought them because they produced the most double-precision flops per £, and they're almost the best at single-precision flops per £. If it's FirePro S9000 boards you have, they should be just as happy with Einstein as mine are. Both our cards have a 1:4 ratio of DP:SP.


I agree that the S9100s Joseph has should theoretically be fine for Einstein, but I just want to point out that they're not the same GPU as your 280Xs. The S9100s are based on Hawaii, the core inside the 290/290X, but since they're server cards they have a DP ratio of 1:2.
ID: 69712
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 69713 - Posted: 15 Apr 2020, 12:30:24 UTC - in response to Message 69712.  
Last modified: 15 Apr 2020, 12:41:19 UTC


Not all AMD cards suck at Einstein. I have four 280Xs which like either project. I bought them because they produced the most double-precision flops per £, and they're almost the best at single-precision flops per £. If it's FirePro S9000 boards you have, they should be just as happy with Einstein as mine are. Both our cards have a 1:4 ratio of DP:SP.


I agree that the S9100s Joseph has should theoretically be fine for Einstein, but I just want to point out that they're not the same GPU as your 280Xs. The S9100s are based on Hawaii, the core inside the 290/290X, but since they're server cards they have a DP ratio of 1:2.


A different ratio for the same GPU? I didn't think that was possible. Surely the insides of the GPU have a certain number of processors to run each type? Or can they be reprogrammed to work differently? If so, can they be reprogrammed by the end user?

Actually, mine are Tahiti. Are you thinking of the 290X? Mine are 280X.
And checking on https://www.techpowerup.com/gpu-specs/firepro-s9100.c2636, it's a different version of the Hawaii GPU anyway. So perhaps the insides are indeed different.

And S9100! Wow, they cost a lot, even second hand. Two 280Xs are a fifth of the price of one S9100, and do the same DP and twice the SP.
ID: 69713
Joseph Stateson
Joined: 18 Nov 08
Posts: 291
Credit: 2,461,693,501
RAC: 0
Message 69721 - Posted: 16 Apr 2020, 13:59:09 UTC - in response to Message 69713.  
Last modified: 16 Apr 2020, 14:00:24 UTC


Actually, mine are Tahiti. Are you thinking of the 290X? Mine are 280X.
And checking on https://www.techpowerup.com/gpu-specs/firepro-s9100.c2636, it's a different version of the Hawaii GPU anyway. So perhaps the insides are indeed different.

And S9100! Wow, they cost a lot, even second hand. Two 280Xs are a fifth of the price of one S9100, and do the same DP and twice the SP.


I have a single S9100 that I got used for about $175 as I recall. However, BOINC thinks they are all S9100s due to the way it reports only one GPU model per host.

The 280X is superior to all my "S's": the S9050 (new, $69) and S9000 ($70-90), of which I have a mix. They have only 1792 cores, unlike the 280X's full 2048, and worse SP and DP performance. All of my boards have a single 8-pin power connector and run nowhere near the 225 W TDP even with 5 concurrent tasks. However, Einstein runs hotter and tasks take longer on these boards than on, for example, a low-power (6-pin) GTX 1060.

I can do 5 concurrent Milkyway tasks at about 39-41 seconds each (click the calculate button), but an Einstein task runs 12 minutes on the S9050 and under 10 on the GTX 1060, which runs cooler.

Shut down most of my rigs last month when the Texas weather got hot, but a strange cold front in April allowed me to turn some back on.
ID: 69721
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 69722 - Posted: 16 Apr 2020, 14:29:15 UTC - in response to Message 69721.  


Actually, mine are Tahiti. Are you thinking of the 290X? Mine are 280X.
And checking on https://www.techpowerup.com/gpu-specs/firepro-s9100.c2636, it's a different version of the Hawaii GPU anyway. So perhaps the insides are indeed different.

And S9100! Wow, they cost a lot, even second hand. Two 280Xs are a fifth of the price of one S9100, and do the same DP and twice the SP.


I have a single S9100 that I got used for about $175 as I recall. However, BOINC thinks they are all S9100s due to the way it reports only one GPU model per host.

The 280X is superior to all my "S's": the S9050 (new, $69) and S9000 ($70-90), of which I have a mix. They have only 1792 cores, unlike the 280X's full 2048, and worse SP and DP performance. All of my boards have a single 8-pin power connector and run nowhere near the 225 W TDP even with 5 concurrent tasks. However, Einstein runs hotter and tasks take longer on these boards than on, for example, a low-power (6-pin) GTX 1060.

I can do 5 concurrent Milkyway tasks at about 39-41 seconds each (click the calculate button), but an Einstein task runs 12 minutes on the S9050 and under 10 on the GTX 1060, which runs cooler.

Shut down most of my rigs last month when the Texas weather got hot, but a strange cold front in April allowed me to turn some back on.


Two of my 280Xs are dual 8-pin, two are 8+6-pin. Power isn't a problem: I bought a 1 kW 12 V supply designed for LEDs from China for £30 (they're about £80 just now due to shortages). I run the GPUs from that and use a PC supply for the motherboard etc.

I get a MW task done every 8.5 seconds between four 280Xs running two WUs at a time each. Einstein Gamma is about one every 2.5 minutes. Einstein Gravity is not possible for two reasons: the CPUs are old and can't keep up with their part of the calculations, and the cards only have 3 GB RAM onboard; the new Gravity tasks are 3.6 GB, so I can't fit even one on a card without it using system RAM, which is 3 times slower.
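As a sanity check on those rates: with four cards running two WUs each, eight tasks are in flight at once, so one finishing every 8.5 seconds implies each individual task takes roughly 68 seconds. The arithmetic (all numbers from the post above):

```shell
# Back-of-envelope check: 4 GPUs x 2 concurrent WUs = 8 tasks in flight;
# one completes every 8.5 s, so each task takes about 8 x 8.5 = 68 s.
awk 'BEGIN { gpus = 4; per_gpu = 2; interval = 8.5;
             printf "%.0f s per task\n", gpus * per_gpu * interval }'
```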

I find MW runs hotter than Einstein on these 1:4 boards.

I ignore the weather and run everything all the time. In cold weather they heat the house; in hot weather I open a window. I'm trying to save up for a reversible heat pump, which will be cheaper for heating the house (about a third the cost of gas) and also serve as an AC unit.
ID: 69722
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 69726 - Posted: 17 Apr 2020, 16:51:53 UTC

Just got a PM from the developer Eric Mendelsohn who said he's looking into the simultaneous upload/download problem.
ID: 69726
Keith Myers
Joined: 24 Jan 11
Posts: 696
Credit: 539,995,078
RAC: 86,897
Message 69727 - Posted: 17 Apr 2020, 20:24:10 UTC - in response to Message 69726.  

Just got a PM from the developer Eric Mendelsohn who said he's looking into the simultaneous upload/download problem.

Great news. Hope he makes more progress than the previous staff.
ID: 69727
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 69729 - Posted: 17 Apr 2020, 21:38:01 UTC - in response to Message 69727.  
Last modified: 17 Apr 2020, 21:43:11 UTC

Just got a PM from the developer Eric Mendelsohn who said he's looking into the simultaneous upload/download problem.

Great news. Hope he makes more progress than the previous staff.


If I were to place a bet on it, I'd say it's something done on purpose. Perhaps something like limiting how many tasks each person gets so a wider variety of GPU models are crunching? They also only give you 2.5 hours of work at once. I simply can't believe this could be done accidentally.

By the way, watch out, I'm catching you up on Milkyway.
ID: 69729
Keith Myers
Joined: 24 Jan 11
Posts: 696
Credit: 539,995,078
RAC: 86,897
Message 69731 - Posted: 18 Apr 2020, 0:01:41 UTC - in response to Message 69729.  

When I explained this to Jake, he said he could never figure out which server configuration file was causing the problem. I suggested he contact the other project admins and post in the server code forum. But I never did see any post from him asking for help. So the issue was dropped and remains ignored to this day. Maybe Eric can do better; I hope so.

Then I wouldn't have to kludge up the client to get around the server misconfiguration issue.

Well you are already beating me on host production. You are catching up fast in overall total credit though.
ID: 69731
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 69732 - Posted: 18 Apr 2020, 1:11:26 UTC - in response to Message 69731.  

When I explained this to Jake, he said he could never figure out which server configuration file was causing the problem. I suggested he contact the other project admins and post in the server code forum. But I never did see any post from him asking for help. So the issue was dropped and remains ignored to this day. Maybe Eric can do better; I hope so.

Then I wouldn't have to kludge up the client to get around the server misconfiguration issue.

Well you are already beating me on host production. You are catching up fast in overall total credit though.


From what you said Jake told you, it sounds like it is indeed a mistake rather than something on purpose. Let's hope Eric sorts it. I run my computers on Einstein and MW at once, so it's not that much of a problem, although it does mean they don't adhere to my project weights very well: BOINC tries to ask MW for more work, and when it refuses, it gets Einstein work instead; Einstein has a habit of giving out twice what it's asked for, so MW ends up not getting a chance to process.

Your SETI position is damn impressive, and it looks like you do a lot of GPUGrid. I can't run that as I only have AMD GPUs. Am I correct in thinking you have TEN rather powerful GPUs? I don't think this is a fair fight. I only have four Radeon 280Xs I got for £50 each. I'm going for some more CPU power at the moment, as my Rosetta, LHC, and Universe contributions aren't good enough. I'm building a dual 12-core (24 cores total) Xeon out of second-hand parts.
ID: 69732
Keith Myers
Joined: 24 Jan 11
Posts: 696
Credit: 539,995,078
RAC: 86,897
Message 69737 - Posted: 18 Apr 2020, 22:21:44 UTC - in response to Message 69732.  

I am running a custom client that our GPUUG developers figured out. It sidesteps the issue at Milkyway in an elegant manner with a simple configuration file, something along the lines of what JStateson's client does. I simply delay reporting tasks on a 15-minute schedule, avoiding the 10-minute dry period. I limit the cache to a fixed 600 tasks for Milkyway and a 20/120 CPU/GPU task split for Einstein, and set the GPUGrid cache limit at what the project would normally send: two tasks per GPU. No spoofing anymore on any project, just fixed task counts that I feel comfortable with.
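This isn't that custom client, but a similar 15-minute reporting cadence can be roughly approximated with the stock tools by poking the scheduler on a timer; a hypothetical crontab fragment, assuming boinccmd is installed and the client's RPC is reachable on localhost:

```
# Hypothetical stand-in for the custom client's delayed-reporting schedule:
# force a scheduler contact (report results + request work) every 15 minutes.
*/15 * * * * boinccmd --host localhost --project http://milkyway.cs.rpi.edu/milkyway/ update
```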

Yes, total GPUs currently running are 7 in two hosts, down from 17 GPUs when I was running Seti on five hosts. Oh, I guess I should include the Maxwell GPU in the Jetson Nano I have running the BRP CPU task on the GPU. So eight GPUs in total.
ID: 69737
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 69738 - Posted: 18 Apr 2020, 22:56:54 UTC - in response to Message 69737.  

I am running a custom client that our GPUUG developers figured out. It sidesteps the issue at Milkyway in an elegant manner with a simple configuration file, something along the lines of what JStateson's client does. I simply delay reporting tasks on a 15-minute schedule, avoiding the 10-minute dry period. I limit the cache to a fixed 600 tasks for Milkyway and a 20/120 CPU/GPU task split for Einstein, and set the GPUGrid cache limit at what the project would normally send: two tasks per GPU. No spoofing anymore on any project, just fixed task counts that I feel comfortable with.

Yes, total GPUs currently running are 7 in two hosts, down from 17 GPUs when I was running Seti on five hosts. Oh, I guess I should include the Maxwell GPU in the Jetson Nano I have running the BRP CPU task on the GPU. So eight GPUs in total.


What have you done with the other GPUs?
ID: 69738
Keith Myers
Joined: 24 Jan 11
Posts: 696
Credit: 539,995,078
RAC: 86,897
Message 69739 - Posted: 18 Apr 2020, 23:42:06 UTC - in response to Message 69738.  

What have you done with the other GPUs?

They're still sitting in their hosts gathering dust.
ID: 69739
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 69740 - Posted: 18 Apr 2020, 23:59:53 UTC - in response to Message 69739.  

What have you done with the other GPUs?

They're still sitting in their hosts gathering dust.


That's cruelty and I'm gonna call the RSPCG. Put them to work right now!
ID: 69740
Hurr1cane78
Joined: 7 May 14
Posts: 57
Credit: 201,094,342
RAC: 23,796
Message 69795 - Posted: 10 May 2020, 8:45:08 UTC

Hi all, I made a video on YouTube with instructions for running multiple instances at full load on a Radeon VII:
RADEON VII GIGABYTE// 3 Instances_ Milkyway@home WUs BOINC_ 3_instances
https://www.youtube.com/watch?v=4xKy9wGKmz4
All the best, and welcome to Earth.
ID: 69795
bluestang
Joined: 13 Oct 16
Posts: 112
Credit: 1,174,293,644
RAC: 0
Message 69833 - Posted: 19 May 2020, 0:59:46 UTC

Is there an Ubuntu version of this I can just copy to the folder and restart BOINC? Or do I have to compile it first?
ID: 69833
Keith Myers
Joined: 24 Jan 11
Posts: 696
Credit: 539,995,078
RAC: 86,897
Message 69837 - Posted: 19 May 2020, 18:44:14 UTC - in response to Message 69833.  

Is there an Ubuntu version of this I can just copy to the folder and restart BOINC? Or do I have to compile it first?

Yes, the compiled Linux Ubuntu client binary is downloadable directly from his repository.

https://github.com/JStateson/MilkywayNewWork/blob/master/boinc_ubuntu
Just download it and set execute permissions. Also make sure you download the cc_config.xml to use along with the binary, as it is required by his modified client.

https://github.com/JStateson/MilkywayNewWork
For the Ubuntu version, be sure to set permissions 0751 on the program and 0664 on the XML file.
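The permission step above can be sketched as follows. Stand-in files are created here so the commands run anywhere; on a real install you would first download boinc_ubuntu and cc_config.xml from the repository:

```shell
# Stand-ins for the two downloaded files (replace with the real downloads).
touch boinc_ubuntu cc_config.xml
chmod 0751 boinc_ubuntu    # rwxr-x--x: owner full, group read+execute, others execute only
chmod 0664 cc_config.xml   # rw-rw-r--: owner/group read-write, others read-only
stat -c '%a %n' boinc_ubuntu cc_config.xml
```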

ID: 69837
bluestang
Joined: 13 Oct 16
Posts: 112
Credit: 1,174,293,644
RAC: 0
Message 69838 - Posted: 19 May 2020, 23:34:28 UTC

Thanks!
ID: 69838
Toby Broom
Joined: 13 Jun 09
Posts: 24
Credit: 137,536,729
RAC: 0
Message 71192 - Posted: 30 Sep 2021, 16:48:06 UTC - in response to Message 69283.  

For me this timeout script does not work, as I can finish a WU in less than 92 seconds, so each time it runs it reports the completed tasks and then doesn't get any more.

Also, if I disable the network for 15 minutes and then report, it's the same sort of deal.

It seems like it's only possible to get tasks in batches of 300 with the current setup of the project.
ID: 71192


©2024 Astroinformatics Group