Welcome to MilkyWay@home

Posts by curiously_indifferent

1) Message boards : Number crunching : Can a GPU be throttled? (Message 67846)
Posted 26 Oct 2018 by curiously_indifferent
Post:
When I moved to Mojave, I had to swap out my old GPU in my Mac Pro (2012). The new GPU (Radeon RX 580, 8192 MB) is fast but offensively loud at full power. Is there any way to run the GPU at something less than 100%? I am guessing around 30% power would be tolerable.
2) Message boards : Number crunching : All(?) results are Validate Errors (Message 47923)
Posted 16 Apr 2011 by curiously_indifferent
Post:
One thing you can do immediately, whatever the cause of the problems - and it may, for many reasons, be just this - is bring the memory speed down. It's not needed high at MW; it's just a waste of power and heats the card up for no reason.


I brought the memory speed down to 750 MHz (it seems to be the lowest it will go) and it has taken the memory temperature down a few °C. As you said, it does not have any effect on MW. Thanks Zydor!

That would be quite the slowdown. Increasing the target frequency or increasing the polling interval both have the effect of reducing GPU usage. Try setting --gpu-target-frequency to something like 60, and --gpu-polling-mode to something higher (in milliseconds). I'd guess if you want to get it down that much, maybe try setting the --gpu-target-frequency around 300.


After a lot of testing, I have settled on 300 for the GPU target frequency and 350 for the GPU polling. These inputs definitely throttle the GPU: a file that would normally take just under 5 minutes to complete now takes about 40 minutes.

I may continue to experiment with the inputs, but these are giving me a tolerable noise level. I won't know for a couple of days, but I suspect the current setup has slightly better performance output than the setup I had prior to last weekend. Thanks Matt!
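For anyone else trying this: both flags go in the <cmdline> element of the <app_version> block in app_info.xml. With my settled values, the element looks something like this (just a sketch - your numbers will likely differ):

<cmdline>--gpu-target-frequency 300 --gpu-polling-mode 350</cmdline>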
3) Message boards : Number crunching : All(?) results are Validate Errors (Message 47868)
Posted 14 Apr 2011 by curiously_indifferent
Post:
To be clear, I have read the news posts on the new release and I am unable to throttle the GPU.

If I understand correctly, --gpu-target-frequency <number> should throttle the GPU. I have entered this in the <cmdline>. I do not see any change in the GPU no matter what number I put in (30, for example).

I know I am doing something wrong. Am I putting --gpu-target-frequency in the wrong place? What <number> would throttle the GPU back to, say, 10%?
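In case it helps to spot my mistake, here is a sketch of where I put it - a stripped-down <app_version> fragment, with the other elements omitted for brevity:

<app_version>
  <app_name>milkyway</app_name>
  <cmdline>--gpu-target-frequency 30</cmdline>
</app_version>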


4) Message boards : Number crunching : All(?) results are Validate Errors (Message 47862)
Posted 14 Apr 2011 by curiously_indifferent
Post:
Thanks. I detached/reattached and downloaded a new 0.62 file. I now have a new question: how do I throttle the GPU usage? Previously, I dialed the GPU back to about 10%. The 0.62 file seems to ignore the throttling, which causes my computer to sound like a turbine; the GPU temperature hit 101 °C after 30 seconds of running. I needed to suspend the processing.

5) Message boards : Number crunching : All(?) results are Validate Errors (Message 47856)
Posted 14 Apr 2011 by curiously_indifferent
Post:
I need some guidance on what I need to do to get my GPU to actually be useful to MW again.

My setup:

Vista Home Premium x64 Edition, Service Pack 2
CAL ATI Radeon HD 4700/4800 (RV740/RV770) (512MB) driver: 1.4.1332
Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz

Until the change this past weekend, this setup was solid for well over a year. I upgraded the GPU driver (MW is the only project here that uses the GPU) in the hopes that it would resolve all of the Validate Errors. It has not. Below is a sample error:

Name de_separation_13_3s_free_1_558584_1302647729_0
Workunit 1505084
Created 12 Apr 2011 | 22:35:31 UTC
Sent 12 Apr 2011 | 22:36:54 UTC
Received 14 Apr 2011 | 20:44:58 UTC
Server state Over
Outcome Validate error
Client state Done
Exit status 0 (0x0)
Computer ID 129540
Report deadline 20 Apr 2011 | 22:36:54 UTC
Run time 1,558.91
CPU time 1,586.64
Validate state Invalid
Credit 0.00
Application version MilkyWay@Home
Anonymous platform (ATI GPU)
Stderr output

<core_client_version>6.10.18</core_client_version>
<![CDATA[
<stderr_txt>
Running Milkyway@home ATI GPU application version 0.20b (Win64, CAL 1.4) by Gipsel
ignoring unknown input argument in app_info.xml: -np
ignoring unknown input argument in app_info.xml: 14
ignoring unknown input argument in app_info.xml: -p
ignoring unknown input argument in app_info.xml: 0.4872334339940767000000000
ignoring unknown input argument in app_info.xml: 1.8807146890464890000000000
ignoring unknown input argument in app_info.xml: 0.4343906314590382000000000
ignoring unknown input argument in app_info.xml: 220.3354939711574300000000000
ignoring unknown input argument in app_info.xml: 45.5980476520267150000000000
ignoring unknown input argument in app_info.xml: 5.1723717971970320000000000
ignoring unknown input argument in app_info.xml: -5.5680541993174480000000000
ignoring unknown input argument in app_info.xml: 19.9999499598045320000000000
ignoring unknown input argument in app_info.xml: 0.1605799132292079500000000
ignoring unknown input argument in app_info.xml: 193.7431431834900800000000000
ignoring unknown input argument in app_info.xml: 10.3163052637960280000000000
ignoring unknown input argument in app_info.xml: -4.1338652689211735000000000
ignoring unknown input argument in app_info.xml: 2.7940179261407447000000000
ignoring unknown input argument in app_info.xml: 6.6926061846224060000000000
scaling the wait times with 10
instructed by BOINC client to use device 0
APP: error reading search parameters file (for read): data_file == NULL
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz (8 cores/threads) 2.65998 GHz (470ms)

CAL Runtime: 1.4.1332
Found 1 CAL device

Device 0: ATI Radeon HD4700/4800 (RV740/RV770) 512 MB local RAM (remote 2047 MB cached + 2047 MB uncached)
GPU core clock: 625 MHz, memory clock: 993 MHz
800 shader units organized in 10 SIMDs with 16 VLIW units (5-issue), wavefront size 64 threads
supporting double precision

Starting WU on GPU 0

main integral, 640 iterations
predicted runtime per iteration is 247 ms (33.3333 ms are allowed), dividing each iteration in 8 parts
borders of the domains at 0 200 400 600 800 1000 1200 1400 1600
Calculated about 2.39368e+013 floatingpoint ops on GPU, 2.47165e+008 on FPU. Approximate GPU time 1586.64 seconds.

probability calculation (stars)
Calculated about 3.11921e+009 floatingpoint ops on FPU.

WU completed.
CPU time: 1.32601 seconds, GPU time: 1586.64 seconds, wall clock time: 1587.94 seconds, CPU frequency: 2.66 GHz

</stderr_txt>
]]>

I assume all of my results are errors of some sort, since my RAC has flatlined since the weekend. The reason I write 'assume' is that I am not really sure what is going on - I can't find many of my results for some reason.

From the error, I figure something is wrong with the app_info.xml file - but I am not sure what I need to modify. Any help would be appreciated.
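My best guess from reading the stderr: the search parameters for these workunits are arriving as command-line arguments, roughly of the form below (middle values trimmed here; the full list is in the stderr above):

-np 14 -p 0.4872334339940767 1.8807146890464890 [...] 6.6926061846224060

The 0.20b binary does not recognize -np or -p, ignores every one of them, and then fails with 'error reading search parameters file' because it never received the parameters at all. If that reading is right, the fix is presumably a newer application build rather than a hand-edit of app_info.xml.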

6) Message boards : Number crunching : Need some guidance on app_info.xml (Message 34574)
Posted 16 Dec 2009 by curiously_indifferent
Post:
Thank you! You are correct - I was opening app_info.xml in Safari. (I was right to assume I was doing something stupid.) After deleting the corrupted files and replacing them with new files, MW now downloads (and runs) GPU-based programs.

The GPU blows through these programs. It is loaded between 88% and 92% with one program running. The temperature under this load is between 90 °C and 92 °C, with the fan spinning at about 8500 RPM - with 8 cores running and a 'normal' BOINC program (no GPU), the fan speed is about 2500 RPM. It is a touch loud...

I guess I will find out how good the onboard Thermal Management is.
7) Message boards : Number crunching : Need some guidance on app_info.xml (Message 34564)
Posted 15 Dec 2009 by curiously_indifferent
Post:
Thanks. But I must be doing something else wrong - I can't modify the .xml file to make changes.

Any thoughts on what else I am doing wrong?
8) Message boards : Number crunching : Need some guidance on app_info.xml (Message 34559)
Posted 15 Dec 2009 by curiously_indifferent
Post:
I just attached to Milky Way in order to put my ATI Radeon HD 4850 to use.

Since my CPU is bogged down with other BOINC projects, I intend to run MW on the GPU only. Here is my issue:

My version of BOINC (6.10.18) recognizes the GPU, which is good. In fact, it regularly requests GPU work from CPDN, SETI, Cosmology, WCG and MW. I understand that only MW currently supports ATI GPUs.

I also understand that this support is not native to MW, so an app must be downloaded.

Per this thread: http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=1297

I downloaded the zip file Win64_0.20b_ati and unzipped it into C:\ProgramData\BOINC\projects\milkyway.cs.rpi.edu_milkyway

I read the readme file and then opened up app_info.xml - after opening it, I realized I would not be able to fix it since it did not seem to have any correlation to the readme file. My app_info.xml looks like this:

<app_info>
  <app>
    <name>milkyway</name>
  </app>
  <file_info>
    <name>astronomy_0.20b_ATI_x64_ati.exe</name>
    <executable/>
  </file_info>
  <file_info>
    <name>brook64.dll</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>milkyway</app_name>
    <version_num>20</version_num>
    <flops>1.0e11</flops>
    <avg_ncpus>0.05</avg_ncpus>
    <max_ncpus>1</max_ncpus>
    <coproc>
      <type>ATI</type>
      <count>1</count>
    </coproc>
    <file_ref>
      <file_name>astronomy_0.20b_ATI_x64_ati.exe</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>brook64.dll</file_name>
    </file_ref>
  </app_version>
</app_info>

I am sure I am doing something stupid, so any help to set me straight would be greatly appreciated. Maybe a copy of a working app_info.xml?




©2024 Astroinformatics Group