Welcome to MilkyWay@home

experiences of a newbie - gpu clock, memory clock, temperature

Message boards : Number crunching : experiences of a newbie - gpu clock, memory clock, temperature
FruehwF

Joined: 28 Feb 10
Posts: 120
Credit: 109,840,492
RAC: 0
Message 46699 - Posted: 24 Mar 2011, 15:04:55 UTC

Hello!
I'm very new to crunching with video cards.
I have two HD4850s (from the second-hand market, each for about 35 Euro ;-) ).
I made some experiments with the clocks of the cards (tools: ATI Tray Tools, Open Hardware Monitor).

Results:
Raising or lowering the memory clock has no effect on MilkyWay's crunching speed. But raising the memory clock raises the temperature significantly; lowering it lowers the temperature.

Raising the GPU clock raises the crunching speed of MilkyWay (and also Collatz) WUs, but it also raises the temperature (not surprising).

Conclusion: for crunching, I lowered the memory clock to 400 MHz (originally 993 MHz) and raised the GPU clock to 675 MHz (originally 625 MHz). This leads to 78 °C at 60% fan speed (100% would give a lower temperature, but more noise).

Collatz WUs, which are single precision, lead to 70 °C.

I think I (and you) can save a lot of energy (and money) by underclocking the memory.
400 MHz is fine for GDDR3.
Maybe someone can say something about faster cards with GDDR5 memory.

regards

Franz

PS: When I get a power meter, I will post those results as well.
ID: 46699
Justin
Joined: 8 Jul 10
Posts: 16
Credit: 11,058,624
RAC: 0
Message 46700 - Posted: 24 Mar 2011, 15:32:38 UTC - in response to Message 46699.  

Using both the 5970 and the 6990 with GDDR5 memory, I've always experienced that higher memory clocks do decrease WU crunch time.
ID: 46700
Brickhead
Joined: 20 Mar 08
Posts: 108
Credit: 2,607,924,860
RAC: 0
Message 46705 - Posted: 24 Mar 2011, 20:54:21 UTC

Memory clock has a noticeable effect on tasks that use the memory extensively, such as Collatz. For MW, one can just as well turn it down and save a few watts.
ID: 46705
Zydor
Joined: 24 Feb 09
Posts: 620
Credit: 100,587,625
RAC: 0
Message 46718 - Posted: 25 Mar 2011, 15:55:51 UTC

With MW, turn the memory down as far as you can get it; I have seen cards in the 4XXX series go as low as 190 with no issues at MW. The memory setting has zero effect on the MW application because the data passing between the CPU and GPU on load/unload, and when the GPU asks for help, is very small, so high bandwidth is not required. Keeping the memory setting high at MW only burns more power to no effect, turning the card into a more efficient space heater.

Collatz is the opposite: the data sets it passes from CPU to GPU and back are very big, and the memory setting has more effect than the GPU setting. With Collatz, cards should be set to as high a memory clock as the PC will take without overheating or causing invalids/errors.

Regards
Zy
ID: 46718
Sunny129
Joined: 25 Jan 11
Posts: 271
Credit: 346,072,284
RAC: 0
Message 46877 - Posted: 1 Apr 2011, 6:00:43 UTC - in response to Message 46718.  

With MW, turn the memory down as far as you can get it; I have seen cards in the 4XXX series go as low as 190 with no issues at MW. The memory setting has zero effect on the MW application because the data passing between the CPU and GPU on load/unload, and when the GPU asks for help, is very small, so high bandwidth is not required. Keeping the memory setting high at MW only burns more power to no effect, turning the card into a more efficient space heater.

Collatz is the opposite: the data sets it passes from CPU to GPU and back are very big, and the memory setting has more effect than the GPU setting. With Collatz, cards should be set to as high a memory clock as the PC will take without overheating or causing invalids/errors.

Regards
Zy

I understand what you are saying with regard to MW@H's minimal GPU memory bandwidth requirements. But can you honestly say that it makes absolutely no difference in the amount of time it takes to crunch a MW@H task? The reason I ask is because not only does the GPU memory clock affect WU run times in Justin's experience, but in my own experience I've found that changing the memory clock changes MW@H WU run times. Perhaps Justin wasn't referring to MW@H tasks, but I am referring specifically to them.

I have a 5870 2GB GPU, and with the memory running at the factory 1200 MHz, the various types of MW@H tasks take 70-110 seconds to finish. With the GPU memory downclocked to 600 MHz (half the factory frequency), my range of run times is more like 70-130 seconds. So while the short tasks don't take any longer to finish, the ones that typically took ~110 seconds now take ~130 seconds. It should be noted that the core clock was at the factory 850 MHz for the duration of the test. I don't know how pertinent it is, but perhaps it's worth mentioning that 5 of my 6 CPU cores are busy crunching Einstein@Home tasks while my GPU crunches MW@H tasks; I leave one core free to handle GPU requests.
ID: 46877
Zydor
Joined: 24 Feb 09
Posts: 620
Credit: 100,587,625
RAC: 0
Message 46878 - Posted: 1 Apr 2011, 6:15:31 UTC - in response to Message 46877.  
Last modified: 1 Apr 2011, 6:45:44 UTC

You must have other factors coming into play - the memory bandwidth has no effect at MW because the datasets being sent between the CPU and GPU are very small, and there are not many of them. I've seen 4XXX cards go as low as 150 memory without a blip, although anything below 200 needs to be watched - there's no free lunch.

It's also been verified on the forum many times by the developer.

The situation at Collatz (for example) is completely the opposite. There the project WUs send huge datasets from CPU to GPU and back, which by default take a minimum of 30% of the GPU card's memory, plus an equivalent amount of main system memory to assemble the data for sending to the GPU. That's chunky, and Collatz needs as high a memory clock as it can be given. Depending on the card/user, 1100-1400 memory is common there.

The use of the CPU is very small at MW; there is no need to reserve a full core to "service" GPU requests. For sure, there is a hit of about 2 to 3 seconds per GPU WU when using all cores on CPU WU tasks of any project, but the gain from the additional core far outweighs that 2 to 3 second hit. CPU use per GPU is minuscule, something like 0.05 CPU per GPU - not worth tagging a full CPU core for that.

EDIT:
One factor may be coming into play with you. As an example, take the effect of Proth Prime Sieve WUs on my 1090T. Those WUs in the ATI app at PrimeGrid take up 0.80 CPU per GPU. I have four GPUs running when I crunch there, so 3.2 CPUs are taken up. The net result is I can only run three out of the six cores on CPU WUs, as three are taken servicing the GPUs. If a GPU app needs a full core or more, it will take them without you intervening; you just "lose" it, the GPU app blocks it out.
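The arithmetic above can be sketched quickly. This is a minimal illustration using the per-task CPU fractions quoted in this thread; the helper function itself is mine for illustration, not part of BOINC:

```python
# Rough model: how much CPU the running GPU tasks reserve between them.
# Per-task figures come from the posts above; the function is illustrative.
def cpu_reserved(n_gpu_tasks, cpu_per_gpu_task):
    """Total CPU (in cores) consumed servicing the GPU tasks."""
    return n_gpu_tasks * cpu_per_gpu_task

# MilkyWay: one GPU task at ~0.05 CPU barely touches a core.
print(cpu_reserved(1, 0.05))   # 0.05
# Proth Prime Sieve: four GPU tasks at 0.80 CPU each tie up 3.2 cores,
# leaving roughly three of a 1090T's six cores for CPU WUs.
print(cpu_reserved(4, 0.80))   # 3.2
```

This is only the bookkeeping the BOINC client does when deciding how many CPU tasks to start alongside GPU work; the actual scheduler applies its own rounding.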

A similar effect can happen if the CPU WU being run is highly efficient (95-100% utilisation); in that case some hefty wait states can occur at the GPU while the CPU completes a task (microseconds each time, but it builds up). If the CPU is not a hefty one, or an older type, the effect is exaggerated and can affect the GPU WU more.

Regards
Zy
ID: 46878
Sunny129
Joined: 25 Jan 11
Posts: 271
Credit: 346,072,284
RAC: 0
Message 46887 - Posted: 1 Apr 2011, 14:51:44 UTC - in response to Message 46878.  
Last modified: 1 Apr 2011, 14:52:27 UTC

EDIT:
One factor may be coming into play with you. As an example, take the effect of Proth Prime Sieve WUs on my 1090T. Those WUs in the ATI app at PrimeGrid take up 0.80 CPU per GPU. I have four GPUs running when I crunch there, so 3.2 CPUs are taken up. The net result is I can only run three out of the six cores on CPU WUs, as three are taken servicing the GPUs. If a GPU app needs a full core or more, it will take them without you intervening; you just "lose" it, the GPU app blocks it out.

I don't think this is the source of my problem. That is, I only crunch MW@H on one GPU client, which takes up just 0.05 (or 5%) of a single CPU core. I should also note that none of my 5870's memory is taken up by my display, as that runs separately on my HD 3300 onboard video.

A similar effect can happen if the CPU WU being run is highly efficient (95-100% utilisation); in that case some hefty wait states can occur at the GPU while the CPU completes a task (microseconds each time, but it builds up). If the CPU is not a hefty one, or an older type, the effect is exaggerated and can affect the GPU WU more.

When I run E@H on all 6 CPU cores, they all run at 100% load. But like I said earlier, recently I've been lending only 5 of the 6 cores to E@H to see what a single free core might do for MW@H task run times. I have a Phenom II X6 1090T CPU like you. Ever since I installed the new Catalyst 11.3 drivers and gained the ability to downclock the GPU memory, I've been able to complete my experimentation with the CPU cores. Despite the fact that GPU tasks may have to do some waiting when all CPU cores are at or near 100% load, there is no difference in MW@H task run times whether I'm running E@H on 5 CPU cores or all 6.

I'll have to experiment some more this weekend by dialing the GPU memory clock back up to the default 1200 MHz and monitoring MW@H task run times to see if they go back down to the original range I had before I downclocked the memory to 600 MHz. For all I know, the run times may stay the same and not decrease at all; the increase I experienced may not be a direct result of downclocking the GPU memory. Perhaps I received a bunch of new WUs that take a bit longer to complete at around the same time I decided to downclock my GPU memory yesterday, making the memory frequency seem like the culprit. I'll post up when I have some more experimental results...

BTW, thanks for the tips and for giving me some direction. I appreciate it.
ID: 46887
Zydor
Joined: 24 Feb 09
Posts: 620
Credit: 100,587,625
RAC: 0
Message 46888 - Posted: 1 Apr 2011, 17:10:16 UTC - in response to Message 46887.  
Last modified: 1 Apr 2011, 17:11:26 UTC

........... been able to complete my experimentation with the CPU cores. Despite the fact that GPU tasks may have to do some waiting when all CPU cores are at or near 100% load, there is no difference in MW@H task run times whether I'm running E@H on 5 CPU cores or all 6...


Which is good - keep all six running. Running CPU as well as GPU work, you are pretty well guaranteed all cores will show 100% utilisation, as whatever "spare capacity" there is on the cores will be taken up by requests from the GPU. If the CPU apps have light(ish) utilisation by themselves, the spare capacity will be greater, and little difference will *probably* be seen. Get a heavyweight CPU app like, say, Aqua, and it will take up virtually all the CPU you have; then you will notice a slight slowdown on the GPU.

On my 1090T I am seeing a slight slowdown of a couple of seconds or so on MW GPU WUs, but then again I am running four GPUs on the box, so there is more work from the GPUs for the 1090T to deal with.

It just depends how much core capacity a CPU app takes. Most GPU apps only need around 0.04 CPU - the Proth Prime Sieve GPU (AMD) app taking 0.80 CPU is very much the exception.

Sounds very much like it's going well running all 6 cores on a CPU app concurrently with the GPU - great, let her rip :)

Regards
Zy
ID: 46888
Sunny129
Joined: 25 Jan 11
Posts: 271
Credit: 346,072,284
RAC: 0
Message 46889 - Posted: 1 Apr 2011, 17:28:27 UTC - in response to Message 46888.  

Thanks for the reassurance with regard to utilizing all 6 CPU cores. At this point, I'm just curious as to what might be causing the slight increase in MW@H task run times. Again, perhaps I got a bunch of WUs that take slightly longer to crunch at about the same time I downclocked my GPU memory, making the change in memory frequency seem like the culprit. I won't know for sure until I play around with the GPU memory frequency some more and confirm it. I'll keep you posted...
ID: 46889
Sunny129
Joined: 25 Jan 11
Posts: 271
Credit: 346,072,284
RAC: 0
Message 46893 - Posted: 1 Apr 2011, 23:46:37 UTC

Well, I believe I've been duped by a statistical artifact. I brought my GPU memory clock back up to 1200 MHz, and while it did seem like tasks generally crunched faster, there were still a few ~130 s tasks in the mix. So I brought the GPU memory clock back down to 600 MHz, and again, while it did seem like tasks were generally crunching slower, there were still some tasks finishing in as little as ~76 s, just as there were at 1200 MHz. So I guess it was a figment of my imagination, lol. It's settled then - I'll run my memory at 600 MHz, pick up a few °C, and reduce my GPU fan from 60% to 40%. It used to sound like a small vacuum, haha, but now it sounds like an actual video card, which is much more pleasant to say the least...
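For what it's worth, overlapping run-time ranges like these can be checked with a quick significance test before concluding anything. A minimal sketch using only the Python standard library, with invented sample numbers (not actual measurements from this thread):

```python
# Sketch: Welch's t-test to check whether two sets of WU run times really
# differ, or whether the apparent gap is a statistical artifact.
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic; |t| well below ~2 suggests no convincing difference."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Hypothetical run times in seconds at the two memory clocks.
runtimes_1200mhz = [76, 82, 95, 110, 128, 79, 101]
runtimes_600mhz  = [77, 85, 99, 113, 130, 80, 104]

print(round(welch_t(runtimes_1200mhz, runtimes_600mhz), 2))
```

With samples this noisy the statistic comes out far below 2, i.e. the two clock settings are indistinguishable from these data alone - which matches the "statistical artifact" conclusion above.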
ID: 46893
Zydor
Joined: 24 Feb 09
Posts: 620
Credit: 100,587,625
RAC: 0
Message 46894 - Posted: 1 Apr 2011, 23:55:43 UTC - in response to Message 46893.  

Good stuff... I'd sleep on it for a couple of days now, let things settle, and let what's happened so far sink in. You'll then be better able to approach any more tweaks with a fresh mind - always a good thing after a tweak session. Looks like you are getting close to a final solution that's right for you - nicely done :)

Regards
Zy
ID: 46894


©2024 Astroinformatics Group