Welcome to MilkyWay@home

Where's CUDA?



Message boards : Number crunching : Where's CUDA?

arkayn
Joined: 14 Feb 09
Posts: 999
Credit: 74,932,619
RAC: 0
Message 27361 - Posted: 9 Jul 2009, 7:29:15 UTC - in response to Message 27359.  

Actually, a lot of us are running 3 units on our GPUs.

My little 4830 is averaging around 32,000. It might be a bit higher, but I also game on that computer.
ID: 27361

Bymark
Joined: 6 Mar 09
Posts: 51
Credit: 467,657,756
RAC: 219,257
Message 27394 - Posted: 9 Jul 2009, 18:42:09 UTC - in response to Message 27361.  
Last modified: 9 Jul 2009, 18:42:48 UTC

It would be nice to have one BOINC project that accepts lower-end GPU cards. AQUA and SETI do, but AQUA lags and its credit system is terrible, and SETI almost never has tasks. So please, MilkyWay, go CUDA!
ID: 27394

BarryAZ
Joined: 1 Sep 08
Posts: 519
Credit: 282,772,150
RAC: 1,098
Message 27414 - Posted: 9 Jul 2009, 20:14:06 UTC - in response to Message 27359.  

Well, the thing is, the BOINC client developers are, shall we say, a bit more CUDA friendly (or CUDA centric, or ATI-GPU antagonistic). That means only projects willing to accept third-party application optimization have a chance to support ATI GPUs. At the moment, the count of projects in that basket is ONE -- MilkyWay.

When CUDA support shows up here (perhaps a month after Dave gets back from one of his periodic camping trips at the end of the summer), then I suspect, since third-party optimization efforts are encouraged here, we may well see an optimized CUDA application which supports higher credit numbers.

Very few projects encourage third-party optimization, for various reasons, so you won't see the higher credit-per-GPU numbers there. Add to that a different design approach between ATI/AMD and Nvidia regarding core counts, and if an ATI GPU is supported, higher numbers may always be..... wait for it ...... in the cards.


Well, it sucks...

In GPUGRID with 3x GTX260-216 you may get 52,000 credits per day.

In MilkyWay with 2x HD4870 you may get 110,000 credits per day...

So where is the balance?
In this project you may run 2 WUs per GPU. In GPUGrid you may run only 1 WU per GPU.


ID: 27414

Coleslaw
Joined: 6 Aug 08
Posts: 12
Credit: 17,333,184
RAC: 41,653
Message 27416 - Posted: 9 Jul 2009, 20:30:33 UTC

My 8400GS card used to push GPU Grid WUs, but it typically errored out on my Windows system with a message that the drivers had failed. For a long time SETI would even crash Windows XP and Vista with all the CUDA-capable drivers. Online searches showed many people had this problem (not with BOINC, but with video players in combination with audio drivers), so I think it is more of an Nvidia thing in my case. I recently started SETI GPU usage again and it seems more stable. I don't get BSODs now, but I do still get the driver-failure message from time to time.
ID: 27416

Cluster Physik
Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 27417 - Posted: 9 Jul 2009, 20:44:09 UTC - in response to Message 27359.  

Well, it sucks...

In GPUGRID with 3x GTX260-216 you may get 52,000 credits per day.

In MilkyWay with 2x HD4870 you may get 110,000 credits per day...

So where is the balance?

Let's compare the double precision shader power of those cards:

GTX260-216
27 (216/8) double precision capable ALUs, 1 MADD (2 flops) per clock, 1242MHz
27 * 2 * 1.242 = 67 GFlop/s * 3 cards = 201 GFlop/s in double precision

HD4870
160 (800/5) double precision capable ALUs, 1 MADD (2 flops) per clock, 750 MHz
160 * 2 * 0.75 = 240 GFlops * 2 cards = 480 GFlop/s in double precision

480 / 201 * 52,000 credits = 124,179 credits

So if you want the same 52k per day here at MW that you get over at GPUGrid with your GTX260 cards, you should be aware that the ATIs would still get about as much as they do now (some variation due to different implementations may apply, but I guess the general direction is clear).
Discussion closed.

PS:
If the MW GPU project gets launched, it will (may?) accept single precision, which changes the game a bit (Nvidia gets competitive).
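The back-of-the-envelope comparison above can be reproduced in a few lines (a standalone sketch using only the card specs quoted in this post; the 52,000 credits/day figure is the GPUGrid rate reported earlier in the thread):

```python
# Peak double-precision throughput comparison from the post above.

def dp_gflops(dp_alus, flops_per_clock, clock_ghz):
    """Peak double-precision GFLOP/s for one card."""
    return dp_alus * flops_per_clock * clock_ghz

# GTX260-216: 216 shaders / 8 = 27 DP-capable ALUs, 1 MADD = 2 flops, 1.242 GHz
gtx260 = dp_gflops(216 // 8, 2, 1.242)      # ~67 GFLOP/s per card
nvidia_rig = 3 * gtx260                     # ~201 GFLOP/s for 3 cards

# HD4870: 800 shaders / 5 = 160 DP-capable ALUs, 1 MADD = 2 flops, 0.75 GHz
hd4870 = dp_gflops(800 // 5, 2, 0.75)       # 240 GFLOP/s per card
ati_rig = 2 * hd4870                        # 480 GFLOP/s for 2 cards

# Scale GPUGrid's 52,000 credits/day by the DP-throughput ratio
# (rounding the Nvidia total to 201 GFLOP/s first, as the post does).
scaled_credits = ati_rig / round(nvidia_rig) * 52000
print(round(gtx260), round(ati_rig), round(scaled_credits))  # 67 480 124179
```

Both rigs are assumed to run at peak double-precision rate; real workloads fall short of peak, but the ratio between the two rigs is what matters here.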
ID: 27417

TomaszPawel
Joined: 9 Nov 08
Posts: 41
Credit: 92,786,635
RAC: 0
Message 27778 - Posted: 15 Jul 2009, 7:49:29 UTC - in response to Message 27417.  

OK :) Thanks for the reply...

It's a shame then for GPUGRID that they don't have a CUDA-optimized application...

BTW, when will CUDA be available in MilkyWay?
A proud member of the Polish National Team

COME VISIT US at Polish National Team FORUM

ID: 27778

John Clark
Joined: 4 Oct 08
Posts: 1734
Credit: 64,228,409
RAC: 0
Message 27790 - Posted: 15 Jul 2009, 13:09:16 UTC

No real need, as the ATI GPUs are turning over so much work and the servers are coping OK now (they weren't in the past).
Go away, I was asleep


ID: 27790

GalaxyIce
Joined: 6 Apr 08
Posts: 2018
Credit: 100,142,856
RAC: 0
Message 27792 - Posted: 15 Jul 2009, 13:23:14 UTC - in response to Message 27790.  
Last modified: 15 Jul 2009, 13:24:14 UTC

No real need, as the ATI GPUs are turning over so much work and the servers are coping OK now (they weren't in the past).

Maybe no real need, but there must be some of us (including me) who have Nvidia and would like to try out CUDA in MW.

ID: 27792

banditwolf
Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 27795 - Posted: 15 Jul 2009, 13:47:35 UTC - in response to Message 27792.  

No real need, as the ATI GPUs are turning over so much work and the servers are coping OK now (they weren't in the past).

Maybe no real need, but there must be some of us (including me) who have Nvidia and would like to try out CUDA in MW.


I guess it comes down to whether those that run MW want to turn out more results quicker or not. I would think a second server would be needed first.

Travis said in the past they didn't want to keep increasing the WU size (as it was big enough), but it happened. So who knows.
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
ID: 27795

MAGPIE
Joined: 12 Apr 09
Posts: 1
Credit: 501,239
RAC: 0
Message 27796 - Posted: 15 Jul 2009, 14:06:48 UTC

Yeah... come on... I definitely want to try CUDA on MW... got an Nvidia card itching for a go...
ID: 27796

Anthony Waters
Joined: 16 Jun 09
Posts: 85
Credit: 172,476
RAC: 0
Message 27915 - Posted: 18 Jul 2009, 1:55:17 UTC

Hello,

This is Anthony, I'm working on the CUDA application with Travis.

There are a few things that need to be resolved in order to get the CUDA app released, namely

1. Checking the accuracy of the double precision math
2. Integration with BOINC
ID: 27915

BarryAZ
Joined: 1 Sep 08
Posts: 519
Credit: 282,772,150
RAC: 1,098
Message 27930 - Posted: 18 Jul 2009, 7:00:46 UTC - in response to Message 27792.  

I agree -- I'd like not to have to rely on GPUGrid as the only attractive CUDA project (SETI and AQUA run CUDA, but both are problematic in their own ways).

Maybe no real need, but there must be some (including me) who have nVidea and would like to try out CUDA in MW.


ID: 27930

GalaxyIce
Joined: 6 Apr 08
Posts: 2018
Credit: 100,142,856
RAC: 0
Message 27936 - Posted: 18 Jul 2009, 9:41:04 UTC - in response to Message 27915.  

Hello,

This is Anthony, I'm working on the CUDA application with Travis.

There are a few things that need to be resolved in order to get the CUDA app released, namely

1. Checking the accuracy of the double precision math
2. Integration with BOINC

Hi Anthony. It's good to hear that CUDA is still being worked on for MilkyWay. Will any release of MW CUDA be with this project, or is "Milkyway@Home for GPUs" waiting in the wings?


ID: 27936

SATAN
Joined: 27 Feb 09
Posts: 45
Credit: 305,963
RAC: 0
Message 28009 - Posted: 19 Jul 2009, 16:21:11 UTC

Thanks for the update. So now we know the name of the person Travis has locked away in the dungeon.
Mars rules this confectionery war!
ID: 28009

Anthony Waters
Joined: 16 Jun 09
Posts: 85
Credit: 172,476
RAC: 0
Message 28329 - Posted: 24 Jul 2009, 21:58:58 UTC

The GPU app will go up as a beta application on the regular Milkyway@Home site when it is ready.

The final likelihood is accurate to about 12 decimal places; while we would like it higher, 12 is good enough. The next step is setting up the server; since that requires a major upgrade, it will take a couple of weeks, and it will most likely be released with 0.19.
ID: 28329

Cluster Physik
Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 28346 - Posted: 25 Jul 2009, 10:17:20 UTC - in response to Message 28329.  

The final likelihood is accurate to about 12 decimal places; while we would like it higher, 12 is good enough. The next step is setting up the server; since that requires a major upgrade, it will take a couple of weeks, and it will most likely be released with 0.19.

I would suggest you name it version 0.20 to avoid some confusion, as the Windows versions are already at 0.19 ;)

And could you put the likelihood code in the code release directory so I can update the ATI application?

By the way, deviations after 12 digits are what I would expect from different summation (reduction) methods. I would not say right away that the GPUs are doing worse than CPUs, as the GPUs normally use a pairwise summation while the CPUs (at least the old code) use a simple summation, which results in a bit more rounding error. If you look at the results for the MW_GPU project application I posted in the code discussion forum, my version of the single precision GPU application actually delivered more accurate results than a single precision CPU version does (one can check this by comparison against the double precision implementation).
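The summation effect described here is easy to demonstrate (a standalone sketch, not the actual MilkyWay code): summing many single-precision values with a simple left-to-right loop accumulates more rounding error than a pairwise (tree) reduction, when both are checked against a double-precision reference.

```python
import numpy as np

def naive_sum(values):
    """Simple left-to-right accumulation in single precision (old CPU-code style)."""
    acc = np.float32(0.0)
    for v in values:
        acc = np.float32(acc + v)
    return acc

def pairwise_sum(values):
    """Recursive pairwise reduction in single precision (GPU-style tree sum)."""
    n = len(values)
    if n == 1:
        return values[0]
    mid = n // 2
    return np.float32(pairwise_sum(values[:mid]) + pairwise_sum(values[mid:]))

rng = np.random.default_rng(0)
data = rng.random(1 << 16).astype(np.float32)   # 65,536 random values in [0, 1)

reference = data.astype(np.float64).sum()       # double-precision reference
err_naive = abs(float(naive_sum(data)) - reference)
err_pair = abs(float(pairwise_sum(data)) - reference)

# The pairwise reduction typically lands much closer to the double result.
print(err_naive, err_pair)
```

The naive loop's error grows roughly linearly with the number of terms, while the tree reduction's grows only with the tree depth (log n), which is why the GPU result can beat a single-precision CPU result despite both using the same precision.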
ID: 28346

dobrichev
Joined: 30 May 09
Posts: 9
Credit: 105,674
RAC: 0
Message 28364 - Posted: 25 Jul 2009, 21:55:44 UTC - in response to Message 28346.  

BTW, which one is the "single precision CPU version"?

I posted some results against double precision here
ID: 28364

ztmike
Joined: 4 Jun 09
Posts: 45
Credit: 447,355
RAC: 0
Message 28472 - Posted: 28 Jul 2009, 2:28:07 UTC

What operating systems will the CUDA app support out of the gate and in the future?
ID: 28472

Anthony Waters
Joined: 16 Jun 09
Posts: 85
Credit: 172,476
RAC: 0
Message 28502 - Posted: 29 Jul 2009, 0:51:52 UTC - in response to Message 28472.  

Out of the gate:
linux 32/64 bit
windows 32 bit
mac x86 32 bit

Later on (Maybe)
windows 64 bit (turns out there are some issues with using the Visual C++ Express Edition to build 64 bit binaries)
ID: 28502

arkayn
Joined: 14 Feb 09
Posts: 999
Credit: 74,932,619
RAC: 0
Message 28506 - Posted: 29 Jul 2009, 2:35:13 UTC - in response to Message 28502.  

Out of the gate:
linux 32/64 bit
windows 32 bit
mac x86 32 bit

Later on (Maybe)
windows 64 bit (turns out there are some issues with using the Visual C++ Express Edition to build 64 bit binaries)


Most Macs with Nvidia GPUs are going to be capable of 64-bit processing.
ID: 28506

©2019 Astroinformatics Group