Message boards : Number crunching : Where's CUDA?
Joined: 14 Feb 09 Posts: 999 Credit: 74,932,619 RAC: 0
Actually, a lot of us are running 3 units on our GPUs. My little 4830 is averaging around 32,000; it might be a little higher, but I also game on that computer.
Joined: 6 Mar 09 Posts: 51 Credit: 492,109,133 RAC: 0
It would be nice to have at least one BOINC project that accepts lower-end GPU cards. AQUA and SETI do, but AQUA lags and its credit system is terrible, and SETI almost never has tasks. So please, MilkyWay, go CUDA!
Joined: 1 Sep 08 Posts: 520 Credit: 302,528,196 RAC: 276
Well, the thing is, the BOINC client developers are, shall we say, a bit more CUDA friendly (or CUDA centric, or ATI-GPU antagonistic). That means only projects willing to apply third-party application optimization have a chance to support ATI GPUs, and at the moment the count of projects in that basket is ONE -- MilkyWay.

When CUDA support shows up here (perhaps a month after Dave gets back from one of his periodic camping trips at the end of the summer), then I suspect, since third-party optimization efforts are encouraged here, we may well see an optimized CUDA application that supports higher credit numbers. Very few projects encourage third-party optimization, for various reasons, so you won't see the higher credit-per-GPU numbers there.

Add to that a different design approach between ATI/AMD and Nvidia regarding core counts, and if an ATI GPU is supported, higher numbers may always be..... wait for it ...... in the cards.

Whell,,, it sucks.....
Joined: 6 Aug 08 Posts: 12 Credit: 25,607,279 RAC: 33
My 8400GS card used to crunch GPUGrid WUs, but they typically errored out with Windows telling me the drivers had failed. For a long time SETI would even crash Windows XP and Vista with all of the CUDA-capable drivers. Online searches showed many people had this problem (not with BOINC, but with video players in combination with audio drivers), so I think it is more of an nVidia issue in my case. I recently started using the GPU for SETI again and it seems to be more stable: I don't get BSODs now, but I do still get the driver-failure message from time to time.
Joined: 26 Jul 08 Posts: 627 Credit: 94,940,203 RAC: 0
Whell,,, it sucks.....

Let's compare the double precision shader power of those cards:

GTX260-216: 27 (216/8) double precision capable ALUs, 1 MADD (2 flops) per clock, 1242 MHz
27 * 2 * 1.242 = 67 GFlop/s per card, * 3 cards = 201 GFlop/s in double precision

HD4870: 160 (800/5) double precision capable ALUs, 1 MADD (2 flops) per clock, 750 MHz
160 * 2 * 0.75 = 240 GFlop/s per card, * 2 cards = 480 GFlop/s in double precision

480 / 201 * 52,000 credits = 124,179 credits

So if you want the same 52k per day you get over at GPUGrid with your GTX260 cards here at MW, you should be aware that the ATIs would still get about as much as they do now (some variation due to different implementations may apply, but I guess the general direction is clear). Discussion closed.

PS: If the MW GPU project gets launched, it will (may?) accept single precision, which changes the game a bit (Nvidia gets competitive).
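For anyone who wants to redo this back-of-the-envelope arithmetic for other cards, here is a minimal C sketch using only the figures quoted above (DP-capable ALU count, 2 flops per MADD per clock, clock speed). The card and credit numbers are simply the post's examples; the printed credit figure differs slightly from 124,179 because the post rounds 67.068 GFlop/s down to 67.

#include <stdio.h>

/* Theoretical peak: DP ALUs * flops per clock (1 MADD = 2) * clock in GHz */
static double peak_dp_gflops(int dp_alus, int flops_per_clock, double clock_ghz)
{
    return dp_alus * flops_per_clock * clock_ghz;
}

int main(void)
{
    double gtx260_216 = peak_dp_gflops(216 / 8, 2, 1.242); /* ~67 GFlop/s */
    double hd4870     = peak_dp_gflops(800 / 5, 2, 0.750); /* 240 GFlop/s */

    double nvidia_rig = 3.0 * gtx260_216; /* three GTX260-216 cards */
    double ati_rig    = 2.0 * hd4870;     /* two HD4870 cards       */

    printf("3x GTX260-216: %.0f GFlop/s DP\n", nvidia_rig);  /* ~201 */
    printf("2x HD4870:     %.0f GFlop/s DP\n", ati_rig);     /* 480  */
    printf("scaled credit: %.0f\n", ati_rig / nvidia_rig * 52000.0);
    return 0;
}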
Joined: 9 Nov 08 Posts: 41 Credit: 92,786,635 RAC: 0
OK :) Thanks for the reply..... It's sad then for GPUGrid.... that they don't have a CUDA-optimized application.... BTW, when will CUDA be available in MilkyWay?

A proud member of the Polish National Team. COME VISIT US at the Polish National Team FORUM
Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0
No real need, as the ATI GPUs are turning over so much work and the servers are coping OK now (not in the past).

Go away, I was asleep
Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
No real need, as the ATI GPUs are turning over so much work and the servers are coping OK now (not in the past).

I guess it depends on whether those that run MW want to turn out more results quicker or not. I would think a second server would be needed first. Travis said in the past they didn't want to keep increasing the WU size (as it was big enough), but it happened. So who knows.

Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it.
Joined: 12 Apr 09 Posts: 1 Credit: 501,239 RAC: 0
Yeah... come on... I definitely want to try CUDA on MW... I've got an Nvidia card itching for a go...
Joined: 16 Jun 09 Posts: 85 Credit: 172,476 RAC: 0
Hello,

This is Anthony; I'm working on the CUDA application with Travis. There are a few things that need to be resolved in order to get the CUDA app released, namely:

1. Checking the accuracy of the double precision math
2. Integration with BOINC
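As a rough sketch of what item 1 above involves, the following C snippet compares a CPU reference likelihood against a GPU result and estimates how many decimal digits agree. The helper function and the two sample values are hypothetical and for illustration only, not the project's actual validation code.

#include <float.h>
#include <math.h>
#include <stdio.h>

/* Hypothetical helper: rough count of decimal digits on which a CPU
   reference value and a GPU result agree, based on the relative error. */
static int matching_digits(double reference, double test)
{
    double diff = fabs(reference - test);
    if (diff == 0.0)
        return 17;                       /* agree to full double precision */
    double rel = diff / fmax(fabs(reference), DBL_MIN);
    return (int)floor(-log10(rel));
}

int main(void)
{
    /* made-up likelihood values, for illustration only */
    double cpu_likelihood = -2.988765432109876;
    double gpu_likelihood = -2.988765432110321;

    printf("CPU and GPU likelihoods agree to ~%d decimal digits\n",
           matching_digits(cpu_likelihood, gpu_likelihood));
    return 0;
}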
Joined: 6 Apr 08 Posts: 2018 Credit: 100,142,856 RAC: 0
Hello,

Hi Anthony. It's good to hear that CUDA is still being worked on for MilkyWay. Will any release of MW CUDA be with this project, or with "Milkyway@Home for GPUs" waiting in the wings?
Joined: 27 Feb 09 Posts: 45 Credit: 305,963 RAC: 0
Thanks for the update. So now we know the name of the person Travis has locked away in the dungeon.

Mars rules this confectionery war!
Joined: 16 Jun 09 Posts: 85 Credit: 172,476 RAC: 0
The GPU app will go up as a beta application on the regular Milkyway@Home site when it is ready. The final likelihood is accurate to about 12 decimal places; while we would like it higher, 12 is good enough. The next step is therefore setting up the server; since that requires a major upgrade, it will take a couple of weeks, and the app will most likely be released with 0.19.
Joined: 26 Jul 08 Posts: 627 Credit: 94,940,203 RAC: 0
The final likelihood is accurate to about 12 decimal places; while we would like it higher, 12 is good enough. The next step is therefore setting up the server; since that requires a major upgrade, it will take a couple of weeks, and the app will most likely be released with 0.19.

I would suggest you name it version 0.20 to avoid some confusion, as the Windows versions already have number 0.19 ;) And could you put the likelihood code in the code release directory so I can update the ATI application?

By the way, deviations after 12 digits are what I would expect from different summation (reduction) methods. I would not say right away that the GPUs are doing worse than the CPUs, as the GPUs normally use a pairwise summation and the CPUs (at least the old code) a simple summation, which results in a bit more rounding error. If you look at the results for the MW_GPU_project application I posted in the code discussion forum, my version of the single precision GPU application actually delivered more accurate results than a single precision CPU version does (one can check this by comparing against the double precision implementation).
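To make the summation point concrete, here is a small C sketch contrasting a simple left-to-right sum with a pairwise (tree) reduction of the kind a GPU typically performs: the rounding error of the former grows roughly with the number of terms, while that of the latter grows only with the depth of the tree. The data and the base-case size are arbitrary choices for illustration, not taken from the MilkyWay code.

#include <stdio.h>
#include <stdlib.h>

/* Simple left-to-right summation, as in a straightforward CPU loop. */
static double simple_sum(const double *x, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; ++i)
        s += x[i];
    return s;
}

/* Pairwise (tree) summation, similar in spirit to a GPU reduction:
   halve the array, sum each half, then add the two partial sums. */
static double pairwise_sum(const double *x, size_t n)
{
    if (n <= 8)                       /* arbitrary small base case */
        return simple_sum(x, n);
    size_t half = n / 2;
    return pairwise_sum(x, half) + pairwise_sum(x + half, n - half);
}

int main(void)
{
    /* made-up data: many terms of decreasing size, so rounding accumulates */
    const size_t n = 1u << 20;
    double *x = malloc(n * sizeof *x);
    if (!x)
        return 1;
    for (size_t i = 0; i < n; ++i)
        x[i] = 1.0 / (double)(i + 1);

    printf("simple   sum: %.17g\n", simple_sum(x, n));
    printf("pairwise sum: %.17g\n", pairwise_sum(x, n));
    free(x);
    return 0;
}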
Joined: 30 May 09 Posts: 9 Credit: 105,674 RAC: 0
BTW, which one is the "single precision CPU version"? I posted some results against double precision here.
Joined: 4 Jun 09 Posts: 45 Credit: 447,355 RAC: 0
What operating systems will the CUDA app support, out of the gate and in the future?
Joined: 16 Jun 09 Posts: 85 Credit: 172,476 RAC: 0
Out of the gate:
Linux 32/64-bit
Windows 32-bit
Mac x86 32-bit

Later on (maybe):
Windows 64-bit (it turns out there are some issues with using Visual C++ Express Edition to build 64-bit binaries)
Joined: 14 Feb 09 Posts: 999 Credit: 74,932,619 RAC: 0
Out of the gate:

Most Macs with Nvidia GPUs are going to be capable of 64-bit processing.