Welcome to MilkyWay@home

So what's DA gonna do about CUDA

James Nunley

Joined: 29 Nov 07
Posts: 39
Credit: 74,300,629
RAC: 0
Message 7912 - Posted: 22 Dec 2008, 4:47:02 UTC

DA came a'bitchin' about the amount of credit that was being handed out here.

Now that CUDA has been released over at SETI@home, I see some machines getting 60+ credits for 200 seconds or less of processing time, which works out to about 25,000 credits a day. I wonder what he is going to do about that?
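For the record, the arithmetic behind that daily figure is just credits per task over seconds per task, scaled up to a day (a back-of-the-envelope sketch in Python, assuming the card is fed work around the clock):

    # Back-of-the-envelope check of the figures quoted above; assumes the
    # GPU is kept busy 24 hours a day with identical tasks.
    credits_per_task = 60
    seconds_per_task = 200
    seconds_per_day = 24 * 60 * 60

    credits_per_day = credits_per_task / seconds_per_task * seconds_per_day
    print(f"{credits_per_day:,.0f} credits/day")  # about 25,920 credits/day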
Gavin Shaw

Joined: 16 Jan 08
Posts: 98
Credit: 1,371,299
RAC: 0
Message 7914 - Posted: 22 Dec 2008, 5:06:10 UTC

But CUDA uses the GPU to do the work, right?

And a GPU is different from a CPU, so it would not be fair, let alone right, to compare the two versions to each other. Maybe what he could do is compare CUDA at project A to CUDA at project B and make them pay roughly the same.

Note: this post takes no position either way (for or against) on the issue of project parity. That debate can be had by others. I have my view, but I'll keep it to myself for now. :)

Never surrender and never give up. In the darkest hour there is always hope.

Vid Vidmar*

Joined: 29 Aug 07
Posts: 81
Credit: 60,360,858
RAC: 0
Message 7916 - Posted: 22 Dec 2008, 10:30:09 UTC - in response to Message 7912.  

DA came a'bitchin' about the amount of credit that was being handed out here.

Now that CUDA has been released over at SETI@home, I see some machines getting 60+ credits for 200 seconds or less of processing time, which works out to about 25,000 credits a day. I wonder what he is going to do about that?

Those times you see are CPU times, not GPU times. The CUDA app still uses a small fraction of the CPU (2-10% on average), and that is the time being reported. There was talk about measuring GPU time as well, I guess in the next app version.
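To illustrate the difference (a minimal Python sketch; the sleep is just a stand-in for a kernel launch that leaves the host CPU idle):

    import time

    def run_gpu_task():
        # Stand-in for launching CUDA work and waiting on the device:
        # the host CPU mostly sleeps/polls while the GPU does the work.
        time.sleep(2.0)

    cpu_start = time.process_time()    # CPU time used by this process
    wall_start = time.perf_counter()   # real elapsed (wall-clock) time

    run_gpu_task()

    print(f"CPU time:  {time.process_time() - cpu_start:.2f} s")   # ~0.00 s
    print(f"Wall time: {time.perf_counter() - wall_start:.2f} s")  # ~2.00 s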

BR,
Emanuel

Joined: 18 Nov 07
Posts: 280
Credit: 2,442,757
RAC: 0
Message 7917 - Posted: 22 Dec 2008, 10:32:49 UTC
Last modified: 22 Dec 2008, 10:35:56 UTC

If credit is a direct consequence of work accomplished, then all that can be done is to choose the reference machine appropriately. If CUDA is the norm, then the reference machine should use CUDA, and computers without it will get significantly less credit on projects well suited to it. If it is not, then CUDA-enabled computers will get significantly more than others.

The only thing projects could conceivably do about the great disparity this creates is to scale credit non-linearly with the computation time needed; but that also negates the numerical advantage of using newer, more expensive machines, which could be bad for the science. That is, if you had to choose between one computer with 100% performance for 100% credit, or two computers with 40% + 40% performance but 60% + 60% credit, which would you pick? (Assuming you care about credits - but people with that level of commitment generally do, from what I've seen.)
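To put illustrative numbers on that trade-off (a hypothetical Python sketch; the sublinear exponent is invented purely to reproduce the 60% + 60% example above):

    # Hypothetical sublinear credit curve: credit = performance ** 0.56.
    # The exponent is made up for illustration, not any project's policy.
    def credit(performance, exponent=0.56):
        return performance ** exponent

    one_fast = credit(1.00)        # one machine at 100% performance
    two_slow = 2 * credit(0.40)    # two machines at 40% performance each

    print(f"1 x 100%: performance 1.00, credit {one_fast:.2f}")   # 1.00
    print(f"2 x  40%: performance 0.80, credit {two_slow:.2f}")   # ~1.20
    # The slower pair does less science (0.80 < 1.00) yet earns more
    # credit (1.20 > 1.00) - exactly the incentive problem described.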
The Gas Giant

Joined: 24 Dec 07
Posts: 1947
Credit: 240,884,648
RAC: 0
Message 7927 - Posted: 22 Dec 2008, 22:18:25 UTC

Don't forget that a GPU is a massively parallel processing system; the number of FLOPS it produces far exceeds that of a CPU. The task for DA is to ensure that a WU crunched on a CPU gets the same credit as one crunched on a GPU, and FLOP counting achieves this. What I don't want to see, and severely disagree with, is credit per CPU or GPU second being reduced over time.
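A minimal sketch of the FLOP-counting idea (the numbers are hypothetical; the point is that credit depends only on the operations in the WU, not on which device crunched it or how long it took):

    # Hypothetical FLOP-counting credit: a WU is worth its operation count
    # divided by a fixed FLOPs-per-credit constant, so a CPU and a GPU
    # finishing the same WU are granted the same credit.
    FLOPS_PER_CREDIT = 3.0e12   # made-up conversion factor for illustration

    def credit_for_wu(wu_flops):
        return wu_flops / FLOPS_PER_CREDIT

    wu_flops = 1.5e14           # operations in one hypothetical work unit
    print(credit_for_wu(wu_flops))  # 50.0 credits, on a CPU or a GPU alike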

If, thanks to Moore's Law, the credits being produced are getting too large, then I can't see why they don't just pick a date, divide the granted credit by 10 or 100 across the board on that date, and move on, rather than slowly reducing the amount of credit granted over time.

It's not just SETI that's the issue right now. GPUGRID gives me about 3,200 credits a day on my 9600GT. I don't think SETI CUDA gives me that sort of credit, but that may come down to the greater experience GPUGRID has in optimising their code for CUDA.

Live long and BOINC!

All that is necessary for the triumph of evil is that good men do nothing.

If You're Not Outraged, You're Not Paying Attention.
Kevint

Joined: 22 Nov 07
Posts: 285
Credit: 1,076,786,368
RAC: 0
Message 7928 - Posted: 23 Dec 2008, 1:39:32 UTC
Last modified: 23 Dec 2008, 1:40:39 UTC

When GPU processing becomes the norm, and that will be more than a couple of years down the road, then there should be a discussion about the differences.

But as it is, in order to process work on a GPU, you would have to either spend extra for a machine with GPU capabilities or upgrade.

For me, that would be a combination of both, and not a cheap avenue to go down.

In the past, when I have configured and built a machine for crunching, video was the least of my expenses: $30-$40 for a cheapo video card.
Now I am thinking that instead of upgrading all my old Pentium Ds to newer quads, I will get a nice GPU card instead. But then I think: well, crap. A Q6600 or a Qxxxx is only $400 more with a nice GPU card. So now I am into $700-800 per box for upgrading.

This gets expensive, and future upgrades will have to wait, maybe until GPU becomes the "norm" and ALL BOINC projects write code for GPUs.

Right now, I am not sure which projects support GPU processing. SETI and GPUGrid... I don't know about any others ATM.
Emanuel

Joined: 18 Nov 07
Posts: 280
Credit: 2,442,757
RAC: 0
Message 7931 - Posted: 23 Dec 2008, 8:41:58 UTC
Last modified: 23 Dec 2008, 8:42:50 UTC

Don't forget that special-purpose GPUs are becoming more generalised while general-purpose CPUs are becoming more parallel. We may eventually see the differences between the two disappear as the industry reaches an acceptable compromise - indeed, there are already plans for putting several different types of processor on a single die, and with optical communication starting to become feasible on motherboards, the bandwidth limitations of coprocessors on separate chips may disappear. I think CUDA or an equivalent framework will stay on as a functional basis for parallel programming, but the underlying hardware will become more varied.

If possible, WUs should be able to capitalize on computers that can process many WUs at a time, by treating each PC as a miniature supercomputer handling a subset of a problem. This is probably different for every project, but distributed computing should ultimately be very well suited to parallelization, by definition.
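A rough sketch of that idea in Python, treating one PC as a small cluster of workers that each chew on a slice of a larger problem (the slicing and the dummy computation are purely illustrative):

    from multiprocessing import Pool

    def crunch(slice_of_problem):
        # Stand-in for the real per-subset computation of a project.
        return sum(x * x for x in slice_of_problem)

    if __name__ == "__main__":
        problem = list(range(1_000_000))
        n_workers = 4  # e.g. one per core (or per GPU stream in a CUDA port)
        slices = [problem[i::n_workers] for i in range(n_workers)]

        with Pool(n_workers) as pool:
            partial_results = pool.map(crunch, slices)  # one subset per worker

        print(sum(partial_results))  # recombine the per-slice results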
Paul D. Buck

Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 8010 - Posted: 25 Dec 2008, 14:19:50 UTC

A lot of people thought that Hyper-Threading was a waste when Intel started doing it. But I have had several HT-capable machines over the years, and in HT mode they did run each task a little more slowly; the overall throughput, however, was indeed higher. Even more interesting is that, after a hiatus, the new i7 processors have HT again. So I have a 4-core machine with 8 virtual processors.
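The throughput argument in numbers (illustrative figures only, not measurements from my boxes):

    # Illustrative Hyper-Threading arithmetic: each task runs a bit slower
    # in HT mode, but twice as many run at once, so total throughput rises.
    # The 0.65 per-thread speed is a made-up figure, not a benchmark.
    cores = 4
    per_thread_speed_ht = 0.65          # relative to one task per core

    throughput_no_ht = cores * 1.0                   # 4.0 tasks' worth
    throughput_ht = cores * 2 * per_thread_speed_ht  # 5.2 tasks' worth

    print(throughput_no_ht, throughput_ht)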

So, yes, we are seeing an increase in the complexity of the processing elements... but dedicated processors, some purpose-built (like a GPU), can serve better at certain tasks.

And as far as the credit complaints go... well, as I posted recently in NC on SaH, it is his fault for not fixing the cross-project credit problems we had identified in beta testing of BOINC. So, hoist by his own petard...

Had we addressed the issues back when there were only 5 projects, well, we would not be in this boat now, would we... :)

What were they? Oh yes: SaH, CPDN, Predictor@Home, LHC and Einstein... heck, even Rosetta came later...
