Welcome to MilkyWay@home

Posts by bluestang

1) Message boards : News : Separation Application Shutting Down on Tuesday, Jun 20th (Message 75950)
Posted 25 Jun 2023 by bluestang
Post:
WTF! We processed them...we should be given credit for them. Absolutely horrible way to shut down an app!
It's just credit, it's meaningless, it's not like they stole money from you for goodness sake.


As usual, you're being an ass.

That's not the point and you know it. It's about respecting the volunteers.

And it's not meaningless. The problem is that as I'm getting older, I'm seeing more and more projects disrespect their volunteers. I'm also seeing the same volunteers come to their defense over and over, making excuses for them instead of acknowledging the issue at hand, which leads these projects to take us for granted more and more.
2) Message boards : News : Separation Application Shutting Down on Tuesday, Jun 20th (Message 75931)
Posted 24 Jun 2023 by bluestang
Post:
Will all submitted tasks be validated, and points given for work completed, or will any unvalidated tasks just be left in limbo?


^ This? I have 3003 tasks in "Validation pending" and 1189 tasks in "Validation inconclusive".

WTF! We processed them...we should be given credit for them. Absolutely horrible way to shut down an app!
3) Message boards : News : Separation Project Coming To An End (Message 75498)
Posted 13 Jun 2023 by bluestang
Post:
Hate to say it, but a lot of users will not stay/come back to just run CPU on this project, as they are used to what their GPU can do. So expect a large drop-off of users afterwards.

However, maybe spend some time optimizing the N-Body GPU app so that they will stay/come back? Especially if you can find a way to utilize NVIDIA CUDA more to bring those users in?


You're right, we expect a large drop-off in the number of users.

During our benchmark tests, other GPU N-body codes performed similarly to our GPU code. I don't think it is an issue with the optimization of the GPU code implementation; I think there is just so much overhead in building the particle tree on the GPU that there is no substantial speed-up compared to running on the CPU. It's possible that there are improvements that could be made, but it isn't really a feasible thing to support with our current group and infrastructure.


Is the GPU N-body code on your GitHub page? We have a very competent GPU developer on our team who is great at GPU optimization. We've produced optimized GPU applications for several other projects.

If it's not on your GitHub, would you be willing to share what code you have already so that we can take a look?


As long as it is not Linux only :)
4) Message boards : News : Separation Project Coming To An End (Message 75491)
Posted 13 Jun 2023 by bluestang
Post:
Hate to say it, but a lot of users will not stay/come back to just run CPU on this project, as they are used to what their GPU can do. So expect a large drop-off of users afterwards.

However, maybe spend some time optimizing the N-Body GPU app so that they will stay/come back? Especially if you can find a way to utilize NVIDIA CUDA more to bring those users in?
5) Message boards : Number crunching : Future of Milkyway@Home (Message 75057)
Posted 16 Feb 2023 by bluestang
Post:
*Petri batsignal shines in the distance*


Sure, as long as it isn't Linux only like the Einstein optimized one :)
6) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 75056)
Posted 16 Feb 2023 by bluestang
Post:
I've been running a RX 580 and a HD 7970 GHz edition. I recently bought a couple of second-hand GPUs to muck around with - a Quadro K6000 and a RX 5700XT.

The 7970 is a star, it was doing a task every 45-48 secs, slightly overclocked at 1125MHz. The 580 does a unit in about 100 sec at stock. The 5700 XT was a disappointment, slower than both, and drawing 90%+ load, so I'm not going to try running multiple tasks.

The Quadro - basically the same as a Titan Black, slightly down-clocked - takes 250 sec per task using about 15% GPU load. I've now got it happily running 6 tasks at a time, using 85-90% GPU in the same amount of time, so in the same league as the 7970. Alas, I thought it'd be better. I could probably get 7 units without a significant slowdown, but I'll stick with it as is. Unfortunately I can't increase the core speed, it seems locked at 900 MHz, though I can speed up the vram.

I've changed the 7970 to processing 2 units, and it is now doing 2 in 70-ish seconds, instead of 1 in 48.


For max points you should be running 3 or 4 concurrent tasks on that 7970 if you haven't tried already. On the K6000 you will most likely get errors if you go for more than the 6 concurrent you're doing now.
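
If you want to try that, a minimal sketch of an app_config.xml for the project folder is below. The app name "milkyway" is an assumption on my part, so check client_state.xml for the exact name, and remember BOINC applies this to every GPU in the machine, not just the 7970.

<app_config>
  <app>
    <!-- app name is assumed; confirm it in client_state.xml -->
    <name>milkyway</name>
    <gpu_versions>
      <!-- 0.25 of a GPU per task = 4 tasks at once; use 0.33 for 3 -->
      <gpu_usage>0.25</gpu_usage>
      <cpu_usage>1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

Save it in the MilkyWay project directory, then have the client re-read config files from the Manager (or just restart BOINC) and new tasks should pick it up.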
7) Message boards : News : Validator Outage (Message 71144)
Posted 22 Sep 2021 by bluestang
Post:
Thanks for the updates and quick solution!
8) Message boards : News : New Milkyway Badges Online (Message 70318)
Posted 5 Jan 2021 by bluestang
Post:
Great work...Thanks Tom!
9) Message boards : Number crunching : AMD FirePro S9150 (Message 70165)
Posted 6 Nov 2020 by bluestang
Post:
Well, these Invalids everyone was having are not a driver or GPU issue. It's a BOINC and/or MilkyWay app issue. If I run 5 concurrent WUs on my S9100, then I get Invalids. But if I fire up five instances of BOINC, each running only 1 WU per instance for 5 total WUs on the S9100, then I get no Invalids.
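
For anyone who wants to replicate the multi-instance setup, a rough sketch of the minimum I'd assume is needed: each extra BOINC instance gets its own data directory, and each instance has to allow multiple clients in its cc_config.xml:

<cc_config>
  <options>
    <!-- lets a second (third, ...) client run alongside the first -->
    <allow_multiple_clients>1</allow_multiple_clients>
  </options>
</cc_config>

Each client is then started against its own data directory with its own GUI RPC port so the instances don't step on each other.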
10) Message boards : Number crunching : AMD FirePro S9150 (Message 70164)
Posted 5 Nov 2020 by bluestang
Post:
Finally got my S9100 set up with a cooling solution so it can run, and installed it in a Windows 10 system :)

Reading through this thread...did you guys ever figure out the "Invalids" issue? Is it the driver/GPUs or is it BOINC/MilkyWay that is the issue?

I'm on the Radeon Pro 19.Q2 package with ECC enabled, running 5 WUs. I've had issues the past day trying to get the right drivers and getting the system/BOINC to recognize the S9100, so I'll need to wait a few days for my "Validation inconclusive" tasks to clear the old ones out and see how it's going.

I'm also using Sapphire TriXX v5.2.1 and it sees the S9100, and I can change the Power Limit to 20% to get it at 824MHz. I can also change the GPU and Mem clocks, but haven't yet. The voltage setting is a no-go, locked at 1.106 VDDC according to GPU-Z. It settles in at 73-75C for temps, with VRMs at 59C and 70C according to HWiNFO64.

https://milkyway.cs.rpi.edu/milkyway/results.php?hostid=706999
11) Message boards : Number crunching : Number crunching with AMD S9100 (Message 70163)
Posted 5 Nov 2020 by bluestang
Post:
Finally got my S9100 set up with a cooling solution so it can run, and installed it in a Windows 10 system :)
12) Message boards : Number crunching : Increased GPU speed? (Message 69839)
Posted 20 May 2020 by bluestang
Post:
What CPU was in the old one compared to the new computer you moved them to?
13) Message boards : Number crunching : Finally getting new tasks only seconds after running out. May not be worth the hassle. (Message 69838)
Posted 19 May 2020 by bluestang
Post:
Thanks!
14) Message boards : Number crunching : Finally getting new tasks only seconds after running out. May not be worth the hassle. (Message 69833)
Posted 19 May 2020 by bluestang
Post:
Is there an Ubuntu version of this I can just copy to the folder and restart BOINC? Or do I have to compile it first?
15) Message boards : Number crunching : AMD FirePro S9150 (Message 69766)
Posted 4 May 2020 by bluestang
Post:
Thanks, I should be good on the shim material as I have copper here at work. Do you know what thicknesses might be needed? Also, what did you do for the memory chips and VRMs? I'd rather not have to cement small heatsinks to them.
16) Message boards : Number crunching : AMD FirePro S9150 (Message 69761)
Posted 26 Apr 2020 by bluestang
Post:
I have a G12 coming this week to try on my S9100 with an Arctic Liquid Freezer 120 that I had on my 2600k CPU before I upgraded to a 3900X and new cooler. Curious as to how it's going to work. Wish I knew if 7950 or 280X coolers fit this card.

I'll have to try that 20.Q1 Pro driver as well, since you seem to have it working well. Hopefully I'll get full speeds with it too.
17) Questions and Answers : Windows : Nvidia GPU tasks crashing after 2 seconds (Message 69618)
Posted 23 Mar 2020 by bluestang
Post:
Does your 980M GPU have Double Precision (FP64) compute? If not, it will not work on this project.

Never mind, I see it has ~100 GFLOPS.
18) Message boards : Number crunching : AMD FirePro S9150 (Message 69559)
Posted 21 Feb 2020 by bluestang
Post:
I had it working on Win 7 Pro 64 by telling Windows to update the driver from Device Manager, and it got that July or Dec 2015 driver like you mentioned earlier. That didn't work this time.

I've tried about 6 or more driver versions, including all the ones you've tried, and got nowhere. Ran DDU after every uninstall before installing a different driver too. I'll have a Win 10 Enterprise 64 system up in the next week hopefully, and I'll try again then. Assuming it will play nice with another GPU installed in that machine lol

I've pulled all but one GPU from Milky to help hit a personal goal on PrimeGrid, so I'll be back here in about a week.


Side Note: You have to be one of the most helpful users on this forum, by far, when it comes to getting some of these issues fixed. The modded BOINC is working great for me!
19) Message boards : Number crunching : AMD FirePro S9150 (Message 69546)
Posted 18 Feb 2020 by bluestang
Post:
Arrgh! I have an S9100 installed in a 2008R2 server with dual E5-2670s and can't get BOINC to recognize the GPU no matter what driver I use. I even used a coproc_info.xml file from when I had it working in a Windows 7 machine, and even though that "lets" BOINC see a GPU, WUs error out in 2-3 seconds with a Computation Error. Resetting the project didn't help either.

I'm on 7.14.2 of BOINC. Would an older version help?
20) Message boards : Number crunching : Finally getting new tasks only seconds after running out. May not be worth the hassle. (Message 69539)
Posted 14 Feb 2020 by bluestang
Post:
Just implemented your modded boinc.exe and I have it working well on 3 machines right now (thank you for this!). They all also have the coproc_info.xml hack to show multi-GPU setups, so I should pick up some more PPD.

I'm using this in my cc_config.xml for all of them:
<cc_config>
  <options>
    <start_delay>30</start_delay>
    <report_results_immediately>1</report_results_immediately>
    <max_file_xfers>20</max_file_xfers>
    <max_file_xfers_per_project>20</max_file_xfers_per_project>
    <use_all_gpus>1</use_all_gpus>
    <allow_multiple_clients>1</allow_multiple_clients>
    <mw_low_water_pct>1</mw_low_water_pct>
    <mw_high_water_pct>16</mw_high_water_pct>
    <mw_wait_interval>256</mw_wait_interval>
  </options>
</cc_config>


Staying full with ~900 WUs per machine consistently now :)

Now to get my S9100 up and running next week too!



©2024 Astroinformatics Group