Welcome to MilkyWay@home

upgraded my gpu how to max it out?

Message boards : Number crunching : upgraded my gpu how to max it out?
Cliff

Joined: 26 Nov 09
Posts: 33
Credit: 62,675,234
RAC: 0
Message 63339 - Posted: 8 Apr 2015, 23:55:25 UTC

I was running GTX 570s in SLI and upgraded to a GTX 980; I'll go SLI again when I have the money for another card. Now, how do I max out this monster? I have it set up to work on 3 SETI WUs at a time, but when BOINC switches over to MilkyWay it drops to working on only one WU on my GPU. How can I bump it to, say, 2 WUs at a time? Not a software guy, so keep it simple please.
Thanks
ID: 63339
Profile mikey
Joined: 8 May 09
Posts: 3319
Credit: 520,298,906
RAC: 20,256
Message 63340 - Posted: 9 Apr 2015, 10:44:45 UTC - in response to Message 63339.  

I was running GTX 570s in SLI and upgraded to a GTX 980; I'll go SLI again when I have the money for another card. Now, how do I max out this monster? I have it set up to work on 3 SETI WUs at a time, but when BOINC switches over to MilkyWay it drops to working on only one WU on my GPU. How can I bump it to, say, 2 WUs at a time? Not a software guy, so keep it simple please.
Thanks


SLI does NOT help when crunching; it only matters for gaming and other software written to split work across linked GPUs.

You need an app_info.xml file, very similar to your SETI one. The key part is the <coproc> section, which goes inside the <app_version> block; a <count> of 0.5 means each task claims half the GPU, so two run at once:

<coproc>
  <type>NVIDIA</type>
  <count>0.5</count>
</coproc>

I do not remember whether you really need the <app> lines if you only crunch one kind of unit. Just in case, here is a very old one, for ATI cards, that you can modify to work with the newer units:

<app_info>
  <app>
    <name>milkyway</name>
  </app>
  <file_info>
    <name>milkyway_separation_1.02_windows_x86_64__opencl_amd_ati.exe</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>milkyway</app_name>
    <version_num>102</version_num>
    <flops>1.0e11</flops>
    <avg_ncpus>0.05</avg_ncpus>
    <max_ncpus>1</max_ncpus>
    <plan_class>ati14ati</plan_class>
    <coproc>
      <type>ATI</type>
      <count>0.5</count>
    </coproc>
    <cmdline>--gpu-target-frequency 10 --gpu-disable-checkpointing</cmdline>
    <file_ref>
      <file_name>milkyway_separation_1.02_windows_x86_64__opencl_amd_ati.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>

It goes into your MilkyWay project directory, so it doesn't affect your other projects.
ID: 63340
Cliff

Joined: 26 Nov 09
Posts: 33
Credit: 62,675,234
RAC: 0
Message 63342 - Posted: 10 Apr 2015, 0:00:40 UTC - in response to Message 63340.  

When I was running SLI, it let me have a WU working on each of the GPUs, so I could put out two at a time.
ID: 63342
swiftmallard
Joined: 18 Jul 09
Posts: 300
Credit: 303,562,776
RAC: 0
Message 63343 - Posted: 10 Apr 2015, 1:21:41 UTC - in response to Message 63342.  

When I was running SLI, it let me have a WU working on each of the GPUs, so I could put out two at a time.

That is not the same as crunching two WU per card.
ID: 63343
Profile mikey
Joined: 8 May 09
Posts: 3319
Credit: 520,298,906
RAC: 20,256
Message 63345 - Posted: 10 Apr 2015, 12:20:56 UTC - in response to Message 63343.  

When I was running SLI, it let me have a WU working on each of the GPUs, so I could put out two at a time.


That is not the same as crunching two WU per card.


Exactly; you were running one workunit per card. A simple cc_config.xml file will do that too:

<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>

An SLI cable lets games use both GPUs to process the same data simultaneously, making the game faster. But BOINC does not do GPU workunits that way: each workunit runs on only one GPU, no matter whether you have one GPU or seven of them. BOINC is not currently designed to share data across GPUs the way games and Photoshop can, and I have no idea if that is even being worked on.
ID: 63345
Profile Wrend
Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 63379 - Posted: 15 Apr 2015, 6:31:59 UTC
Last modified: 15 Apr 2015, 7:30:50 UTC

Many games also don't make use of SLIed cards. I SLI my EVGA Titan Black Superclocked cards for use with some games.

Basically, SLIing mirrors your graphics memory per GPU across all of your cards, so it holds the same info on all of them. If you have more than enough graphics memory (which my cards do, having 6GB each), then this isn't an issue and you can run multiple work units per GPU while they're SLIed.

That being said, I haven't been running MilkyWay@Home GPU work units for quite a while, though I did do a test run not long ago. I primarily use my GPUs to crunch for GPUGrid (two long-run work units per GPU, four total), since it makes better use of my GPUs (see my signature below), loading them up more and keeping them at a higher clock rate. To get anywhere near that with MilkyWay@Home, I had to load up more than five work units per GPU (more than ten total) simultaneously; because of the way I have the scheduler set up, and perhaps due to my internet connection speed, it wasn't feasible to download work units fast enough to keep up with how quickly they were completed.

After some goals in other projects are reached, I do plan on prioritizing MilkyWay@Home GPU work units, as well as prioritizing my GPUs through the NVIDIA Control Panel for double precision computation. When that time comes, I'll look more into how to get more GPU work units downloaded from MilkyWay@Home at the same time.

Here are the config files I used when testing this out. In the second file, the "gpu usage" number determines how many GPU tasks run per GPU. "1" is one per GPU, "0.5" is two per GPU, "0.2" is five per GPU, and so on. (The app_config.xml file is rather old, so I'm not sure that the rest of the syntax in it is correct, though it did still work as intended as it is.)

C:\ProgramData\BOINC\cc_config.xml

<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>


C:\ProgramData\BOINC\projects\milkyway.cs.rpi.edu_milkyway\app_config.xml

<app_config>
  <app>
    <name>milkyway</name>
    <max_concurrent>0</max_concurrent>
    <gpu_versions>
      <gpu_usage>0.2</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
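To put the "gpu usage" arithmetic another way: as I understand the scheduler (this is my reading, not official BOINC documentation), the client keeps starting tasks until their usage fractions fill the GPU, so you get floor(1 / gpu_usage) tasks per GPU. A quick sketch:

```python
def tasks_per_gpu(gpu_usage):
    """Concurrent tasks the client should run on one GPU, assuming it
    packs tasks until their gpu_usage fractions sum to 1.0 (my
    assumption, not official documentation)."""
    # Add a tiny epsilon so float rounding (e.g. 1 / 0.2) can't undercount.
    return int(1.0 / gpu_usage + 1e-9)

for usage in (1.0, 0.5, 0.2):
    print(usage, "->", tasks_per_gpu(usage), "tasks per GPU")
```

That reproduces the numbers above: 1 gives one task per GPU, 0.5 gives two, 0.2 gives five.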

ID: 63379
Profile Wrend
Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 63395 - Posted: 17 Apr 2015, 16:05:22 UTC
Last modified: 17 Apr 2015, 16:44:11 UTC

OK, below is the new app_config.xml file I'm testing out for MilkyWay@Home while prioritizing double precision computing on my cards. I'm currently running 10 GPU tasks per card, 20 total.

5 tasks per card doesn't seem to load up the GPUs enough, while 10 tasks per card fully loads/overloads them. Any thoughts on how I might be able to run maybe 6 or 7 tasks per GPU? Changing the hundredths-place digit doesn't seem to have any effect on how many tasks are run.
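As I understand the scheduler (my assumption, not official docs), the client runs floor(1 / gpu_usage) tasks per GPU, so any value in (1/7, 1/6] should give six tasks and any value in (1/8, 1/7] should give seven, e.g. 0.16 and 0.14. A small sketch of picking a two-decimal value:

```python
import math

def gpu_usage_for(n_tasks):
    """A two-decimal gpu_usage that should yield n_tasks per GPU,
    assuming the client runs floor(1/gpu_usage) tasks (my assumption,
    not official documentation)."""
    # Truncate 1/n DOWN to two decimals so the value never exceeds 1/n;
    # rounding up (e.g. 1/6 -> 0.17) would drop you back to fewer tasks.
    return math.floor(100 / n_tasks) / 100

for n in (5, 6, 7):
    print(n, "tasks per GPU ->", gpu_usage_for(n))
```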

Anyway, Cliff, I'd probably try running 5 MilkyWay@Home tasks per GPU on your 980(s) and see how that goes, though I'm not sure of the 980's double precision performance.

<app_config>

  <app>
    <name>milkyway</name>
    <max_concurrent>0</max_concurrent>
    <gpu_versions>
      <gpu_usage>0.1</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>

  <app>
    <name>milkyway_separation__modified_fit</name>
    <max_concurrent>0</max_concurrent>
    <gpu_versions>
      <gpu_usage>0.1</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>

</app_config>

ID: 63395


©2024 Astroinformatics Group