Message boards :
Number crunching :
So whats DA gonna do about CUDA
Joined: 29 Nov 07 Posts: 39 Credit: 74,300,629 RAC: 0
DA came a'bitchin' about the amount of credit that was being handed out here. Now that CUDA has been released over at SETI@home, I see some machines getting 60+ credits for 200 or fewer seconds of processing time, which works out to about 25,000 credits a day. I wonder what he is going to do about that?
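To make the arithmetic in that post concrete, here is a quick sketch in Python. The per-task figures are the ones quoted above; the assumption that the host turns tasks around at that rate all day long is mine, purely for illustration:

```python
SECONDS_PER_DAY = 86_400

# Figures quoted in the post: ~60 credits per ~200 s of run time.
credits_per_task = 60
seconds_per_task = 200

# If a host kept that pace up around the clock (an assumption):
tasks_per_day = SECONDS_PER_DAY / seconds_per_task    # 432 tasks/day
credits_per_day = tasks_per_day * credits_per_task    # 25,920 credits/day

print(f"{credits_per_day:,.0f} credits/day")
```

That lands at just under 26,000 a day, so the "about 25,000" estimate checks out.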
Joined: 16 Jan 08 Posts: 98 Credit: 1,371,299 RAC: 0
But CUDA uses the GPU to do work, right? And a GPU is different from a CPU, so it would not be fair, let alone right, to compare the two versions to each other. Maybe comparing CUDA at project A to CUDA at project B, and making them pay roughly the same, is what he could do. Note: this post contains no support either way (for or against) on the issue of project parity. That debate can be had by others. I have my view, but I'll keep it to myself for now. :) Never surrender and never give up. In the darkest hour there is always hope.
Joined: 29 Aug 07 Posts: 81 Credit: 60,360,858 RAC: 0
DA came a'bitchin' about the amount of credit that was being handed out here. Those times you see are CPU times, not GPU times. The CUDA app still uses a small fraction of the CPU (2-10% on average), and that is the time you see reported. There was talk about timing GPU time as well, I guess in the next app version. BR,
Joined: 18 Nov 07 Posts: 280 Credit: 2,442,757 RAC: 0
If credit is a direct consequence of work accomplished, then all that can be done is to choose the reference machine appropriately. If CUDA is the norm, then the reference machine should use CUDA, and computers without it will get significantly less credit on projects well suited to it. If it is not, then CUDA-enabled computers will get significantly more than others. The only thing projects could conceivably do about the great disparity this creates is to scale the credits non-linearly with the needed computation time; but this also negates the numerical advantage of using newer, more expensive machines, which could be bad for the science. That is, if you had to choose one computer with 100% performance for 100% credit, or two computers with 40% + 40% performance but 60% + 60% credit, which would you pick? (Assuming you care about credits - but people with that amount of commitment generally do, from what I've seen.)
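The perverse incentive described in that trade-off can be sketched with a hypothetical sublinear credit curve. The exponent 0.7 below is chosen purely for illustration - no actual project uses this formula - but it reproduces the flavour of the 40%+40% performance / 60%+60% credit example:

```python
def credit(performance: float, exponent: float = 0.7) -> float:
    """Hypothetical sublinear credit curve, normalised so that
    100% performance earns 100% credit. Illustrative only."""
    return 100.0 * (performance / 100.0) ** exponent

# One fast machine: 100% performance for 100% credit.
one_perf, one_credit = 100.0, credit(100.0)

# Two slower machines: 80% combined performance...
two_perf = 40.0 + 40.0
two_credit = credit(40.0) + credit(40.0)   # ...but ~53% credit each

# More credit for less science - the incentive problem in a nutshell.
assert two_credit > one_credit and two_perf < one_perf
```

With this curve the two slow machines earn about 105 units of credit for 80% of the performance, so a credit-motivated cruncher would pick the worse option for the science.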
Joined: 24 Dec 07 Posts: 1947 Credit: 240,884,648 RAC: 0
Don't forget that a GPU is a massively parallel processing system; the number of FLOPS it produces far exceeds that of a CPU. The task for DA is to ensure that a WU crunched on a CPU gets the same credit as one crunched on a GPU, and by using FLOP counting this is achieved. What I don't want to see, and severely disagree with, is credit per CPU or GPU second being reduced over time. Due to Moore's Law, the credits being produced keep getting larger; if they are getting too large, I can't see why they don't just pick a date, divide the granted credit by 10 or 100 across the board on that date, and move on, rather than slowly reducing the amount of credit granted over time. It's not just SETI that's the issue right now: GPUGRID gives me about 3,200 credits a day on my 9600GT. I don't think SETI CUDA gives me that sort of credit, but that may come down to the greater experience GPUGRID has with optimising their code for CUDA. Live long and BOINC! All that is necessary for the triumph of evil is that good men do nothing. If You're Not Outraged, You're Not Paying Attention.
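A rough sketch of what FLOP counting buys you: if credit is derived from the operations a workunit actually needs, the device that ran it drops out of the formula entirely. The 200-credits-per-GFLOPS-day scale below follows BOINC's published cobblestone definition, but the workunit size and device speeds are made-up numbers for illustration:

```python
GFLOPS_DAY = 1e9 * 86_400        # FLOPs done in one day at 1 GFLOPS
CREDITS_PER_GFLOPS_DAY = 200     # BOINC's cobblestone scale

def credit_for_wu(flops: float) -> float:
    """Credit from counted floating-point operations.
    Note: no device speed appears anywhere in this formula."""
    return flops * CREDITS_PER_GFLOPS_DAY / GFLOPS_DAY

wu_flops = 4.32e13               # a hypothetical workunit

# Run times differ wildly between devices...
cpu_seconds = wu_flops / 5e9     # ~2.4 hours on a 5 GFLOPS CPU
gpu_seconds = wu_flops / 100e9   # ~7 minutes on a 100 GFLOPS GPU

# ...but the credit granted is identical: 100.0 either way.
print(credit_for_wu(wu_flops))
```

That is the parity the post is after: the GPU host still earns more per day, but only because it completes more workunits, not because each workunit pays more.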
Joined: 22 Nov 07 Posts: 285 Credit: 1,076,786,368 RAC: 0
When GPU processing becomes the norm - and that will be more than a couple of years down the road - then there should be discussion about the differences. But as it is, in order to process work with a GPU, you would have to either spend the extra for a machine with GPU capabilities or upgrade. For me, that would be a combination of both, and not a cheap avenue to proceed down. In the past, when I have configured and built a machine for crunching, video was the least of my expenses: $30-$40 for a cheapo video card. Now I am thinking that instead of upgrading all my old Pent-Ds to newer quads, I will get a nice GPU card instead. But then I think - well, crap. A Q6600 or a Qxxxx is only $400 more with a nice GPU card, so now I am into $700-800 per box for upgrading. This gets expensive, and future upgrades will have to wait, maybe until GPU becomes the "norm" and ALL BOINC projects write code for GPUs. Right now, I am not sure which projects support GPU processing: SETI and GPUGRID, but I don't know about any others ATM.
Joined: 18 Nov 07 Posts: 280 Credit: 2,442,757 RAC: 0
Don't forget that special-purpose GPUs are becoming more generalised while general-purpose CPUs are becoming more parallel. We may eventually see the differences between the two disappear as the industry reaches an acceptable compromise - indeed, there are already plans for putting several different types of processor on a single die, and with optical communication starting to become feasible on motherboards, the bandwidth limitations of coprocessors in separate chips may disappear. I think CUDA, or an equivalent framework, will stay as a functional basis for parallel programming - but the underlying hardware will become more varied. If possible, WUs should capitalize on computers being able to process many WUs at a time, by treating each PC as a miniature supercomputer handling a subset of a problem. This is probably different for every project, but distributed computing should ultimately be very well suited to parallelization, by definition.
Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0
A lot of people thought that Hyper-Threading was a waste when Intel started doing it. But I have had several HT-capable machines, and in HT mode they did run individual tasks a little more slowly - yet the throughput was indeed higher. Even more interesting is that, after a hiatus, the new i7 processors have HT again, so I have a 4-core with 8 virtual processors. So yes, we are seeing an increase in the complexity of the processing elements, but dedicated processors, some purpose-built (like a GPU), can serve better at certain tasks. And as far as the credit complaints go... well, as I posted recently in NC on SaH, it is his own fault for not fixing the cross-project credit problems which we had identified in beta testing of BOINC. So, hoist by his own petard... Had we addressed the issues back when there were only 5 projects, we would not be in this boat now, would we... :) What were they? Oh yes: SaH, CPDN, Predictor@Home, LHC and Einstein... heck, even Rosetta came later...
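The Hyper-Threading trade-off described above - each task slower, but overall throughput higher - comes down to simple arithmetic. The timings below are invented for illustration, not measurements from any real machine:

```python
# Hypothetical timings for BOINC tasks on one physical core:
plain_seconds = 100.0   # HT off: one task at a time
ht_seconds = 130.0      # HT on: each task ~30% slower...

# ...but two tasks run concurrently per core, so:
plain_throughput = 1 / plain_seconds   # 0.0100 tasks/s
ht_throughput = 2 / ht_seconds         # ~0.0154 tasks/s

# Slower individual tasks, higher aggregate throughput.
assert ht_throughput > plain_throughput
```

As long as the per-task slowdown stays under 2x, running two tasks per core comes out ahead - which is exactly the pattern the poster saw on HT-capable machines.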
©2024 Astroinformatics Group