Message boards : Number crunching : CUDA Application for 32 bit Windows
Send message Joined: 16 Jun 09 Posts: 85 Credit: 172,476 RAC: 0 |
The CUDA application for 32 bit Windows has been deployed on BOINC. In terms of hardware, an NVIDIA GPU supporting compute capability 1.3 is required. The following GPUs are known to support it: GeForce GTX 295, 285, 280, 260, Tesla S1070, C1060, Quadro Plex 2200 D2, and Quadro FX 5800, 4800. The GPU also needs at least 256 MB of video RAM, and NVIDIA driver 190.xx or higher must be installed. If these prerequisites are not met, the CUDA application will not be downloaded through BOINC.

A new preference has been added to the Milkyway@Home preferences page that specifies whether the project is allowed to use your GPU for computation; it has to be enabled in order to receive work from the CUDA application. This preference does not affect the anonymous plan class used by the ATI application.

While the CUDA application is running there will be a slowdown in the responsiveness of the operating system's user interface; more specifically, the explorer.exe process will use a lot of CPU. Because the GPU is busy with computation it has little free time to update the user interface, so the CPU has to do more of that work. There is a preference in BOINC to only use the GPU when the computer is idle.

If computation errors are being generated by the CUDA application, please reference the results and the error message so the problem can be diagnosed. With the tremendous variety in products, drivers, and operating systems it is highly unlikely that everything will go smoothly these first few weeks; the Milkyway@Home team sincerely appreciates your patience in reaching this milestone and we look forward to your feedback.

This has been successfully tested on Windows XP x64 Professional Edition, BOINC 6.6.36 (32 bit), NVIDIA driver 190.62 (64 bit), and an NVIDIA GeForce GTX 285 that was generously donated by NVIDIA to the Milkyway@Home research team.
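For readers who want to check their own card before enabling the preference, here is a minimal sketch that uses the CUDA runtime API to list each device's compute capability and memory against the announced prerequisites. It is an illustration only, not the detection logic BOINC or the project actually uses.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative check of the announced prerequisites: compute capability
// >= 1.3 and at least 256 MB of video RAM. Build with: nvcc check.cu
int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU detected.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        bool ccOk  = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        bool memOk = prop.totalGlobalMem >= 256UL * 1024UL * 1024UL;
        std::printf("Device %d: %s, compute capability %d.%d, %lu MB -> %s\n",
                    i, prop.name, prop.major, prop.minor,
                    (unsigned long)(prop.totalGlobalMem / (1024 * 1024)),
                    (ccOk && memOk) ? "meets the requirements"
                                    : "does not meet the requirements");
    }
    return 0;
}
```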
Send message Joined: 27 Feb 09 Posts: 41 Credit: 123,828 RAC: 0 |
Ugh, 3 minutes run time and no progress... how long does it take before it starts to process? Vista SP2, BOINC 6.10.0, driver 190.62.

Edit: make that 5 minutes. OK, 7:45 to complete, but no updating of the BOINC progress bar is not a great way to start the app off... FYI, I'm not seeing any processor usage at all under explorer or the Milkyway CUDA app. Checkpointing????

I recommend Secunia PSI: http://secunia.com/vulnerability_scanning/personal/
Send message Joined: 4 Jun 09 Posts: 45 Credit: 447,355 RAC: 0 |
The CUDA application for 32 bit Windows has been deployed on BOINC. In terms of hardware a NVIDIA GPU supporting CUDA 1.3 is required. The following GPUs: GeForce GTX 295, 285, 280, 260, Tesla S1070, C1060, Quadro Plex 2200 D2, Quadro FX 5800, 4800 are known to have CUDA 1.3 support. The GPUs also need to have 256 MB of Video RAM

Um... the 8800GT 512 meets that.
Send message Joined: 27 Feb 09 Posts: 41 Credit: 123,828 RAC: 0 |
The CUDA application for 32 bit Windows has been deployed on BOINC. In terms of hardware a NVIDIA GPU supporting CUDA 1.3 is required. The following GPUs: GeForce GTX 295, 285, 280, 260, Tesla S1070, C1060, Quadro Plex 2200 D2, Quadro FX 5800, 4800 are known to have CUDA 1.3 support. The GPUs also need to have 256 MB of Video RAM

1.3, that's the key. Show me where it says in the messages that the 8800 GT is 1.3.
Send message Joined: 4 Jun 09 Posts: 45 Credit: 447,355 RAC: 0 |
The CUDA application for 32 bit Windows has been deployed on BOINC. In terms of hardware a NVIDIA GPU supporting CUDA 1.3 is required. The following GPUs: GeForce GTX 295, 285, 280, 260, Tesla S1070, C1060, Quadro Plex 2200 D2, Quadro FX 5800, 4800 are known to have CUDA 1.3 support. The GPUs also need to have 256 MB of Video RAM

According to NVIDIA's driver download page, it has it.
Send message Joined: 16 Jun 09 Posts: 85 Credit: 172,476 RAC: 0 |
My fault; it is referred to as compute capability 1.3, which is hardware specific, and only the GT200 series of GPUs supports it. Also, I'm in the process of updating the application to support the progress bar; checkpointing will not be supported.
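For context, progress reporting in a BOINC application is typically done through the BOINC API's boinc_fraction_done() call; the sketch below is illustrative only and not the project's actual code (the function and parameter names are made up).

```cpp
#include "boinc_api.h"  // BOINC runtime library, provides boinc_fraction_done()

// Illustrative only: report how far the computation has progressed so the
// BOINC manager can draw its progress bar. Values range from 0.0 to 1.0.
// The names here (report_progress, slices_done, total_slices) are hypothetical.
void report_progress(int slices_done, int total_slices) {
    boinc_fraction_done((double)slices_done / (double)total_slices);
}
```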
Send message Joined: 4 Jun 09 Posts: 45 Credit: 447,355 RAC: 0 |
My fault; it is referred to as compute capability 1.3, which is hardware specific, and only the GT200 series of GPUs supports it. Also, I'm in the process of updating the application to support the progress bar; checkpointing will not be supported.

Not sure what the difference is, but you guys just singled out a lot of NVIDIA people.
Send message Joined: 29 Aug 07 Posts: 9 Credit: 110,408,156 RAC: 0 |
They need the double precision of the GT200 cards; the ATI cards are limited for the same reason.
Send message Joined: 16 Jun 09 Posts: 85 Credit: 172,476 RAC: 0 |
Compute capability refers to the hardware-specific features that a GPU supports; currently there are three versions: 1.0, 1.1, and 1.3. Version 1.3 is required because it is the only one that supports double precision calculations, as opposed to single precision. In this specific scientific application double precision support is very important, so there was no choice but to require compute capability 1.3, and hence a GT200 series GPU from NVIDIA. Please see the NVIDIA CUDA 2.3 Programming Guide, Appendix A.1 [1], for more information.

[1] http://developer.download.nvidia.com/compute/cuda/2_3/toolkit/docs/NVIDIA_CUDA_Programming_Guide_2.3.pdf
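As a rough illustration of why this matters (not the project's actual code): a CUDA kernel that accumulates in double precision has to be compiled for sm_13, since earlier targets have no double precision units and nvcc demotes double to float for them (with a warning), which would undermine the accuracy the science application needs. The kernel and variable names below are made up.

```cpp
#include <cuda_runtime.h>

// Hypothetical example, not project code. Compile with: nvcc -arch=sm_13 dp.cu
// On targets below sm_13, nvcc demotes double to float, so this kernel only
// does true double precision work on compute capability 1.3 hardware.
__global__ void sum_of_squares(const double* x, double* result, int n) {
    double acc = 0.0;                 // accumulated in full double precision
    for (int i = 0; i < n; ++i)
        acc += x[i] * x[i];
    *result = acc;                    // single thread for clarity, not speed
}
```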
Send message Joined: 27 Feb 09 Posts: 41 Credit: 123,828 RAC: 0 |
My fault; it is referred to as compute capability 1.3, which is hardware specific, and only the GT200 series of GPUs supports it. Also, I'm in the process of updating the application to support the progress bar; checkpointing will not be supported.

Thank you for adding the progress bar. Understood that checkpointing won't be part of the app at this time.
Send message Joined: 2 Jan 08 Posts: 23 Credit: 495,882,464 RAC: 0 |
Seems very slow compared to the ATI application :( But it's working well.
Send message Joined: 30 Mar 09 Posts: 1 Credit: 9,594,054 RAC: 0 |
How long??
Send message Joined: 2 Jan 08 Posts: 23 Credit: 495,882,464 RAC: 0 |
8 min 30 s on a GTX 295 @ 640/1400 and 15 min on a GTX 260.
Send message Joined: 29 Aug 07 Posts: 9 Credit: 110,408,156 RAC: 0 |
A new preference has been added to the Milkyway@Home preferences page that specifies whether the project is allowed to use your GPU for computation; it has to be enabled in order to receive work from the CUDA application.

How about a new preference for not running CPU work, for those of us who want to run GPU only?
Send message Joined: 16 Jun 09 Posts: 85 Credit: 172,476 RAC: 0 |
To be honest, the CUDA application is not running at its full potential: the computation is sliced into pieces so that it does not occupy the whole GPU, in order to reduce the slowdown of the user interface. There are two ways the application can be executed:

1. Using all of the GPU's power: work completes faster, but the OS is pretty much unusable because it takes ~2 s for the user interface to update, meaning it can only run when the computer is idle.
2. Using a partial amount of the GPU's power: work takes longer, and updates to the OS user interface are still slowed, but hardly noticeably in most situations, meaning it can run while the computer is in use.

I chose to go with #2, but which one makes more sense? I'll look into the feasibility of having this as a project-specific preference if there is a 50/50 divide between the two choices.
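As a rough sketch of what the "slicing" in option #2 describes (assumed structure, not the project's actual code): the work is issued as many short kernel launches with a synchronization point after each, so the display driver gets a chance to update between slices. The kernel and variable names here are hypothetical.

```cpp
#include <cuda_runtime.h>

// Hypothetical kernel, assumed to be defined elsewhere; the real
// application's kernels are not shown here.
__global__ void integrate_slice(const double* params, double* results,
                                int slice, int n_slices);

// Sketch of option #2: many short launches instead of one long one, with a
// synchronization point after each so the GPU is periodically free to
// service the desktop. Fewer, longer slices would behave like option #1.
void run_sliced(const double* d_params, double* d_results,
                int n_slices, dim3 grid, dim3 block) {
    for (int s = 0; s < n_slices; ++s) {
        integrate_slice<<<grid, block>>>(d_params, d_results, s, n_slices);
        cudaThreadSynchronize();   // wait for the slice; the UI can update in between
    }
}
```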
Send message Joined: 17 May 09 Posts: 22 Credit: 161,135,083 RAC: 0 |
I note it's a 32 bit Windows app but it's been tested on XP x64. What about Vista 64 Ultimate with 2 GTX 295s? Ross
Send message Joined: 16 Jun 09 Posts: 85 Credit: 172,476 RAC: 0 |
I note it's a 32 bit Windows app but it's been tested on XP x64. What about Vista 64 Ultimate with 2 GTX 295s?

It should work correctly; I'm not sure how BOINC will react to the multi-GPU setup. If there are compute errors please let us know.
Send message Joined: 2 Jan 08 Posts: 23 Credit: 495,882,464 RAC: 0 |
To be honest, the CUDA application is not running at its full potential: the computation is sliced into pieces so that it does not occupy the whole GPU, in order to reduce the slowdown of the user interface. There are two ways the application can be executed...

As you said, #2 still slows down the computer. Perhaps the best approach is to uncheck "Use GPU while computer is in use" in the BOINC Manager preferences. If users must do that anyway, #1 is better. For me, the GPUs are in crunch boxes with no screens, so #1 is the best solution, but that's just my personal case.
Send message Joined: 16 Jun 09 Posts: 85 Credit: 172,476 RAC: 0 |
A new preference has been added to the Milkyway@Home preferences page that specifies whether the project is allowed to use your GPU for computation; it has to be enabled in order to receive work from the CUDA application.

I've added it to the project-specific preferences page, and I'm almost 100% positive I've made it so that the default is to use the CPU.
Send message Joined: 2 Jan 08 Posts: 23 Credit: 495,882,464 RAC: 0 |
It should work correctly; I'm not sure how BOINC will react to the multi-GPU setup. If there are compute errors please let us know.

It's working well. I have 2 GTX 295s (4 GPUs) on a WinXP64 box. No problem.