Welcome to MilkyWay@home

CUDA Application for 32 bit Windows

Anthony Waters

Joined: 16 Jun 09
Posts: 85
Credit: 172,476
RAC: 0
Message 29742 - Posted: 26 Aug 2009, 22:07:43 UTC
Last modified: 26 Aug 2009, 22:50:21 UTC

The CUDA application for 32 bit Windows has been deployed on BOINC. In terms of hardware, an NVIDIA GPU supporting Compute Capability 1.3 is required. The following GPUs are known to have Compute Capability 1.3 support: GeForce GTX 295, 285, 280, 260; Tesla S1070, C1060; Quadro Plex 2200 D2; Quadro FX 5800, 4800. The GPU also needs at least 256 MB of video RAM, and NVIDIA driver 190.xx or higher must be installed. If these prerequisites are not met, the CUDA application will not be downloaded through BOINC.
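For anyone curious how that prerequisite check can be done, the CUDA runtime API exposes everything needed; here is a minimal sketch (an illustration only, not the project's actual detection code):

/* Sketch: check each device for compute capability 1.3 and 256 MB of VRAM. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        int cc_ok  = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        int mem_ok = prop.totalGlobalMem >= 256u * 1024 * 1024;
        printf("Device %d: %s, compute capability %d.%d, %lu MB -> %s\n",
               i, prop.name, prop.major, prop.minor,
               (unsigned long)(prop.totalGlobalMem / (1024 * 1024)),
               (cc_ok && mem_ok) ? "eligible" : "not eligible");
    }
    return 0;
}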

A new preference has been added to the Milkyway@Home preferences page that specifies whether the project is allowed to use your GPU for computation; it has to be enabled in order to receive work for the CUDA application. This preference does not affect the anonymous plan class used by the ATI application.

While the CUDA application is running there will be a slowdown in the responsiveness of the operating system's user interface; more specifically, the explorer.exe process will use a lot of CPU. Because the GPU is busy performing computations it has little free time to update the user interface, so the CPU has to do more of that work. There is a preference in BOINC to only use the GPU when the computer is idle.

If computation errors are being generated by the CUDA application, please reference the results and the error message so the problem can be diagnosed. With the tremendous variety of products, drivers, and operating systems, it is highly unlikely that everything will go smoothly these first few weeks; the Milkyway@Home team sincerely appreciates your patience in reaching this milestone and we look forward to your feedback.

This has been successfully tested on Windows XP x64 Professional Edition, BOINC 6.6.36 (32 bit), NVIDIA drivers 190.62 (64 bit), and an NVIDIA GeForce GTX 285 that was generously donated by NVIDIA to the Milkyway@Home research team.
ID: 29742
zpm

Joined: 27 Feb 09
Posts: 41
Credit: 123,828
RAC: 0
Message 29745 - Posted: 26 Aug 2009, 22:16:55 UTC - in response to Message 29742.  
Last modified: 26 Aug 2009, 22:29:41 UTC

ugh, 3 minutes RT and no progress... how long does it take before it starts to process?

Vista SP2, BOINC 6.10.0, driver 190.62

edit: make that 5 minutes.

ok, 7:45 to complete, but not updating the BOINC progress bar is no way to start the app off...

fyi, I'm not seeing any processor usage at all under explorer or the Milkyway CUDA app.

Checkpointing????

I recommend Secunia PSI: http://secunia.com/vulnerability_scanning/personal/
ID: 29745
ztmike

Joined: 4 Jun 09
Posts: 45
Credit: 447,355
RAC: 0
Message 29746 - Posted: 26 Aug 2009, 22:45:16 UTC

The CUDA application for 32 bit Windows has been deployed on BOINC. In terms of hardware a NVIDIA GPU supporting CUDA 1.3 is required. The following GPUs: GeForce GTX 295, 285, 280, 260, Tesla S1070, C1060, Quadro Plex 2200 D2, Quadro FX 5800, 4800 are known to have CUDA 1.3 support. The GPUs also need to have 256 MB of Video RAM


Um..the 8800GT 512 meets that.
ID: 29746
zpm

Joined: 27 Feb 09
Posts: 41
Credit: 123,828
RAC: 0
Message 29747 - Posted: 26 Aug 2009, 22:51:18 UTC - in response to Message 29746.  
Last modified: 26 Aug 2009, 22:51:57 UTC

The CUDA application for 32 bit Windows has been deployed on BOINC. In terms of hardware a NVIDIA GPU supporting CUDA 1.3 is required. The following GPUs: GeForce GTX 295, 285, 280, 260, Tesla S1070, C1060, Quadro Plex 2200 D2, Quadro FX 5800, 4800 are known to have CUDA 1.3 support. The GPUs also need to have 256 MB of Video RAM


Um..the 8800GT 512 meets that.



1.3, that's the key.

Show me where it says in the messages that the 8800 GT is 1.3.
ID: 29747
ztmike

Joined: 4 Jun 09
Posts: 45
Credit: 447,355
RAC: 0
Message 29748 - Posted: 26 Aug 2009, 22:52:04 UTC - in response to Message 29747.  

The CUDA application for 32 bit Windows has been deployed on BOINC. In terms of hardware a NVIDIA GPU supporting CUDA 1.3 is required. The following GPUs: GeForce GTX 295, 285, 280, 260, Tesla S1070, C1060, Quadro Plex 2200 D2, Quadro FX 5800, 4800 are known to have CUDA 1.3 support. The GPUs also need to have 256 MB of Video RAM


Um..the 8800GT 512 meets that.



1.3 thats the key.


According to NVIDIA's driver download page, it has it.
ID: 29748
Anthony Waters

Joined: 16 Jun 09
Posts: 85
Credit: 172,476
RAC: 0
Message 29749 - Posted: 26 Aug 2009, 22:52:14 UTC

My fault; it is referred to as compute capability 1.3, which is hardware specific, and only the GT200 series of GPUs support it. Also, I'm in the process of updating the application to support the progress bar; checkpointing will not be supported.
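For those wondering what the progress bar support involves: in BOINC it is essentially a matter of calling boinc_fraction_done() periodically from the work loop. A rough sketch, assuming a made-up do_one_piece() unit of work (this is not the actual application code):

/* Sketch: drive the BOINC progress bar from the application's main loop. */
#include "boinc_api.h"

extern void do_one_piece(int piece);   /* hypothetical unit of GPU work */

int main(void)
{
    boinc_init();
    const int total_pieces = 100;
    for (int piece = 0; piece < total_pieces; ++piece) {
        do_one_piece(piece);
        boinc_fraction_done((double)(piece + 1) / total_pieces);  /* 0.0 .. 1.0 */
    }
    boinc_finish(0);   /* report the result as finished */
    return 0;
}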
ID: 29749
ztmike

Joined: 4 Jun 09
Posts: 45
Credit: 447,355
RAC: 0
Message 29750 - Posted: 26 Aug 2009, 22:54:53 UTC - in response to Message 29749.  

My fault; it is referred to as compute capability 1.3, which is hardware specific, and only the GT200 series of GPUs support it. Also, I'm in the process of updating the application to support the progress bar; checkpointing will not be supported.


Not sure what the difference is, but you guys just singled out a lot of NVIDIA people.
ID: 29750
Liuqyn

Joined: 29 Aug 07
Posts: 9
Credit: 110,408,156
RAC: 0
Message 29751 - Posted: 26 Aug 2009, 23:05:29 UTC - in response to Message 29750.  

They need the double-precision support of the GT200 cards; the ATI cards are limited for the same reason.
ID: 29751
Anthony Waters

Joined: 16 Jun 09
Posts: 85
Credit: 172,476
RAC: 0
Message 29752 - Posted: 26 Aug 2009, 23:06:29 UTC - in response to Message 29750.  

Compute capability refers to the hardware-specific features that a GPU supports; currently there are three versions: 1.0, 1.1, and 1.3. Version 1.3 is required because it is the only one that supports double-precision calculations, as opposed to single precision. In this specific scientific application double-precision support is very important, so there was no choice but to require compute capability 1.3, and hence any GT200 series GPU from NVIDIA. Please see the NVIDIA CUDA 2.3 Programming Guide, Appendix A.1 [1], for more information.

[1] http://developer.download.nvidia.com/compute/cuda/2_3/toolkit/docs/NVIDIA_CUDA_Programming_Guide_2.3.pdf
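To make the double-precision point concrete, here is a tiny sketch (not the project's kernel): the arithmetic below only runs in true double precision on compute capability 1.3 hardware, and with the CUDA 2.3 toolkit the file has to be built with -arch=sm_13, otherwise nvcc demotes the doubles to single precision with a warning.

/* Sketch: a trivial double-precision kernel that requires sm_13 hardware.
   Build with:  nvcc -arch=sm_13 scale.cu */
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void scale(double *x, double a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] = a * x[i];   /* double-precision multiply: needs CC 1.3 */
}

int main(void)
{
    const int n = 1024;
    double h[n], *d;
    for (int i = 0; i < n; ++i) h[i] = 1.0 / (i + 1);

    cudaMalloc((void **)&d, n * sizeof(double));
    cudaMemcpy(d, h, n * sizeof(double), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, 2.0, n);
    cudaMemcpy(h, d, n * sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(d);

    printf("h[3] = %.17g (expect 0.5)\n", h[3]);
    return 0;
}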
ID: 29752
zpm

Joined: 27 Feb 09
Posts: 41
Credit: 123,828
RAC: 0
Message 29753 - Posted: 26 Aug 2009, 23:18:36 UTC - in response to Message 29749.  

My fault; it is referred to as compute capability 1.3, which is hardware specific, and only the GT200 series of GPUs support it. Also, I'm in the process of updating the application to support the progress bar; checkpointing will not be supported.



Thank you for adding the progress bar; it's a sound call that checkpointing won't be a part of the app at this time.
ID: 29753
Profile [AF>HFR>RR] ThierryH

Joined: 2 Jan 08
Posts: 23
Credit: 495,882,464
RAC: 0
Message 29754 - Posted: 26 Aug 2009, 23:20:39 UTC

Seems very slow compared to the ATI application :(
But it's working well.
ID: 29754
satanetv

Joined: 30 Mar 09
Posts: 1
Credit: 9,594,054
RAC: 0
Message 29755 - Posted: 26 Aug 2009, 23:26:13 UTC - in response to Message 29754.  

How long ??
ID: 29755
Profile [AF>HFR>RR] ThierryH

Joined: 2 Jan 08
Posts: 23
Credit: 495,882,464
RAC: 0
Message 29756 - Posted: 26 Aug 2009, 23:30:41 UTC

8 min 30 s on a GTX 295 @ 640/1400 and 15 min on a GTX 260.
ID: 29756
Liuqyn

Joined: 29 Aug 07
Posts: 9
Credit: 110,408,156
RAC: 0
Message 29757 - Posted: 26 Aug 2009, 23:32:47 UTC - in response to Message 29742.  
Last modified: 26 Aug 2009, 23:33:21 UTC

A new preference has been added to the Milkyway@Home preferences page that specifies whether the project is allowed to use your GPU for computation; it has to be enabled in order to receive work for the CUDA application.



How about a new preference for not running CPU work, for those of us who want to run GPU only?
ID: 29757
Anthony Waters

Joined: 16 Jun 09
Posts: 85
Credit: 172,476
RAC: 0
Message 29758 - Posted: 26 Aug 2009, 23:39:42 UTC - in response to Message 29756.  

To be honest, the CUDA application is not running at its full potential; the computation is sliced into pieces so that it does not occupy the whole GPU, in order to reduce the slowdown of the user interface. There are two ways the application can be executed:

1. Using all of the GPU's power: work completes faster, but the OS is pretty much unusable because it takes ~2 s for the user interface to update, meaning it can only run when the computer is idle.
2. Using a partial amount of the GPU's power: work takes longer, and updates to the OS user interface are still slow but hardly noticeable in most situations, meaning it can run while the computer is in use.

I chose to go with #2, but which one makes more sense? I'll look into the feasibility of having this as a project specific preference if there is a 50/50 divide between the two choices.
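Roughly, option #2 amounts to something like the sketch below: one big launch is split into many small launches with a synchronization in between, so the display driver gets breathing room. The kernel, names, and chunk size are made up for illustration; this is not the actual Milkyway@Home code.

/* Sketch: slicing the GPU work into pieces to keep the UI responsive. */
#include <cuda_runtime.h>

__global__ void integrate_piece(double *out, int offset, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        out[offset + i] = (double)(offset + i);   /* placeholder for real work */
}

void run_sliced(double *d_out, int total, int chunk)
{
    for (int offset = 0; offset < total; offset += chunk) {
        int count = (total - offset < chunk) ? (total - offset) : chunk;
        integrate_piece<<<(count + 255) / 256, 256>>>(d_out, offset, count);
        cudaThreadSynchronize();   /* wait for this piece; the GUI can update
                                      between pieces */
    }
}

int main(void)
{
    const int total = 1 << 20;
    double *d_out;
    cudaMalloc((void **)&d_out, total * sizeof(double));
    run_sliced(d_out, total, 1 << 16);   /* smaller chunks => more responsive UI */
    cudaFree(d_out);
    return 0;
}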
ID: 29758
Ross*

Joined: 17 May 09
Posts: 22
Credit: 161,135,083
RAC: 0
Message 29759 - Posted: 26 Aug 2009, 23:39:59 UTC - in response to Message 29756.  

I note it's 32 bit Windows but it has been tested on XP x64. What about Vista 64 Ultimate with two 295s?
Ross
ID: 29759
Anthony Waters

Joined: 16 Jun 09
Posts: 85
Credit: 172,476
RAC: 0
Message 29760 - Posted: 26 Aug 2009, 23:45:07 UTC - in response to Message 29759.  

I note it's 32 bit Windows but it has been tested on XP x64. What about Vista 64 Ultimate with two 295s?
Ross


It should work correctly; I'm not sure how BOINC will react to the multi-GPU setup. If there are compute errors, please let us know.
ID: 29760
Profile [AF>HFR>RR] ThierryH

Joined: 2 Jan 08
Posts: 23
Credit: 495,882,464
RAC: 0
Message 29761 - Posted: 26 Aug 2009, 23:52:28 UTC - in response to Message 29758.  

To be honest, the CUDA application is not running at its full potential; the computation is sliced into pieces so that it does not occupy the whole GPU, in order to reduce the slowdown of the user interface. There are two ways the application can be executed:

1. Using all of the GPU's power: work completes faster, but the OS is pretty much unusable because it takes ~2 s for the user interface to update, meaning it can only run when the computer is idle.
2. Using a partial amount of the GPU's power: work takes longer, and updates to the OS user interface are still slow but hardly noticeable in most situations, meaning it can run while the computer is in use.

I chose to go with #2, but which one makes more sense? I'll look into the feasibility of having this as a project specific preference if there is a 50/50 divide between the two choices.


As you said, #2 still slows down the computer. The best way is perhaps to uncheck "Use GPU while computer is in use" in the BOINC Manager preferences. If users must do that, #1 is better.
For me, the GPUs are on crunch boxes with no screen, so the #1 solution is the best, but that's just my personal case.
ID: 29761
Anthony Waters

Joined: 16 Jun 09
Posts: 85
Credit: 172,476
RAC: 0
Message 29762 - Posted: 26 Aug 2009, 23:52:29 UTC - in response to Message 29757.  

A new preference has been added to the Milkyway@Home preferences page that specifies whether the project is allowed to use your GPU for computation; it has to be enabled in order to receive work for the CUDA application.


How about a new preference for not running CPU work, for those of us who want to run GPU only?


I've added it to the project specific preferences page, and I'm almost 100% positive I've made it so that the default is to use the CPU.
ID: 29762
Profile [AF>HFR>RR] ThierryH

Joined: 2 Jan 08
Posts: 23
Credit: 495,882,464
RAC: 0
Message 29763 - Posted: 26 Aug 2009, 23:54:34 UTC - in response to Message 29760.  

It should work correctly; I'm not sure how BOINC will react to the multi-GPU setup. If there are compute errors, please let us know.


It's working well. I have two GTX 295s (4 GPUs) on a WinXP x64 box. No problem.
ID: 29763