Posts by therealjcool

21) Message boards : Number crunching : Some feedback on Milkyway GPU crunching with various GPUs (Message 32101)
Posted 7 Oct 2009 by therealjcool
Post:

Have you tried crunching the WUs on the CPUs? The GPUs take exactly the same WUs and just churn out the results much faster. So you have to decide whether you want to give the CPU guys roughly the same amount of credits as on other projects; that is what leads to the high pay for GPUs you see now. If one divided the credits per WU by two, for instance, the pay on CPUs would be significantly lower than on other projects.

The Milkyway GPU applications scale very well with the processing power these GPUs have relative to CPUs; one simply gets near perfect efficiency, in stark contrast to some other projects which have tried GPU applications. Even compared to the (well optimized) GPUGrid application, for instance, the ATI cards achieve a higher rate of calculated operations in double precision here than a GTX285 manages at GPUGrid in single precision, even though current GPUs pay a severe penalty for double precision (actually a much higher one on nvidia cards, which is the reason a GTX280 trails even an old and rusty HD3850).


Hi,

I think you missed my point - it's not that the GPU units are faster than the CPU units. I was simply stating that, compared to other (bigger) projects like World Community Grid, the PPD Milkyway generates is simply way out of line, for both the CPU and the GPU WUs. GPUGrid has a similar "problem", but it's not nearly as grave (my overclocked GTX 260-216 on GPUGrid does around 15k PPD). It's just bad for comparison, that's all. I take it the unmistakable irony in Ice's post above this one means I'm not alone in this feeling ;)
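For what it's worth, the double precision argument in the quote does check out on paper. Here is a minimal back-of-the-envelope sketch; the peak SP figures and the 1/5 vs. 1/8 DP fractions are commonly cited approximations I'm assuming, not measured numbers:

    # Peak double-precision throughput, back of the envelope.
    # Assumptions: RV670/RV770 run DP at roughly 1/5 of peak SP rate,
    # GT200 at roughly 1/8 (one DP unit per 8 SP cores, MAD-only peak).
    cards = {
        "HD3850 (RV670)": (429, 1 / 5),    # ~429 GFLOPS peak SP
        "HD4870 (RV770)": (1200, 1 / 5),   # ~1200 GFLOPS peak SP
        "GTX 280 (GT200)": (622, 1 / 8),   # ~622 GFLOPS peak SP (MAD only)
    }
    for name, (sp_gflops, dp_fraction) in cards.items():
        print(f"{name}: ~{sp_gflops * dp_fraction:.0f} GFLOPS peak DP")
    # -> roughly 86, 240 and 78 GFLOPS: once everything has to run in
    #    double precision, even the old HD3850 edges out a GTX280.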

Got my 5870 lying right here, by the way - now to find a vacant PCIe slot... :D

22) Message boards : Number crunching : Some feedback on Milkyway GPU crunching with various GPUs (Message 32076)
Posted 7 Oct 2009 by therealjcool
Post:
Hello everybody,

I just installed Milkyway on a few of my rigs and wanted to let you know how it's going for me (everyone can use some positive feedback for a change, can't they? ;) )

First of all, let me say I am delighted to have found a project that actually WORKS on ATI GPUs without all the stupid issues, requirements and workarounds usually needed (see FAH). All I had to do to get them working was:

1. Update my BOINC to the latest 6.10.x beta client
2. Attach to Milkyway@home (obviously) and set my preferences to use the GPU while the PCs are in use, but NOT the CPUs (my CPUs are all taken by World Community Grid)
3. Close down BOINC and all BOINC services
4. Download the 0.20 ATI Win64 client from here and put it in the project data directory
5. Copy the 3 aticalxxxx.dll files in the system32 directory and rename the copies to amdcalxxxx.dll (so that both the atical and amdcal files are in there, 6 files in total - see the sketch after this list)
6. Start up BOINC again and let it rip ;)
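If you'd rather script step 5 than copy by hand, here is a minimal sketch of the idea. The three CAL DLL names and the System32 location are what I found on my Catalyst 9.x installs - treat them as assumptions, check your own system, and run it elevated since System32 is protected:

    # Make amdcal*.dll copies of the atical*.dll CAL runtimes in System32,
    # so both name sets exist side by side (6 files in total).
    # DLL names and path are assumptions from a typical Catalyst 9.x install.
    import shutil
    from pathlib import Path

    system32 = Path(r"C:\Windows\System32")

    for name in ("aticalrt.dll", "aticalcl.dll", "aticaldd.dll"):
        src = system32 / name
        dst = system32 / name.replace("atical", "amdcal")
        if src.exists() and not dst.exists():
            shutil.copy2(src, dst)
            print(f"copied {src.name} -> {dst.name}")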

So far, I have tried this on:
- HD4870X2 running Server 08 x64, Catalyst 9.9
- HD4870 1GB running Vista HP 64, Catalyst 9.9
- HD4850 512MB running Server 08 x64, Catalyst 9.6

Works on all machines with incredible performance (47s per WU per GPU on the 4870s, 54s on the 4850) :)

No errors so far, seems to work great.

What is truly awesome: some machines, for example the one with the HD4850, are not connected to any peripherals. I control them over LAN using UltraVNC, which poses a problem for most GPGPU apps. Not so for Milkyway - it works just fine without any screens attached.

One issue I had with the 4870X2: it jumps back and forth between 2D and 3D clocks every time a new WU is started, causing the screen to flicker and flash. Also, the 2nd GPU doesn't come out of 2D clocks.

Solution: simply create a CCC profile with identical 2D and 3D clock rates and voltages (there are guides around the net on how to do that). Activate that profile before you start BOINC/Milkyway and you'll be fine - no screen flickering, and 2 WUs every 47 seconds :D

Now, I also tried this on an Nvidia rig running 2x GTX 280s in SLI. Here it works even better (foolproof, really): I just attached to Milkyway, it automatically downloaded all the apps and DLLs and whatnot, and then it started running just fine on both GPUs. I have to say the performance is way better on the AMDs, though - the GTX 280s take around 3 minutes per WU, which is rather slow compared to the ATIs.
OS is Vista 64 Ultimate, driver is 190.62. SLI was enabled, no CUDA Toolkit or anything else installed.

Critique:

Well, there has to be some, right?!

And well, there is... small points, but valid ones IMO. Here are 3 points that could be improved, in my opinion:

1. Make ATI cards work automatically, just like NV GPUs. While it is relatively easy for someone like me, who has suffered through FAH's manual install routines, following the steps necessary to get an ATI GPU crunching may still be too much of a hassle and cause many people to abandon the idea of joining the project.
As I said, without the manual copying and the Catalyst workaround, Milkyway wouldn't work on any of the ATI GPUs.

2. Seriously, make the WUs bigger. 47s on a HD4870 is WAY too short. It clogs my BOINC logs, and performance is lost to the frequent switching between WUs - it takes a second or so for the next WU to start (see the sketch after this list). Just imagine what these WUs will be like on the upcoming HD5000 series.

3. Credit system. The PPD Milkyway grants is extremely inflated and WAY over the top. One HD4870 can do around 100k BOINC PPD on this project. For comparison, I have around 12 state-of-the-art rigs running on World Community Grid, half of them dual or quad socket, all overclocked. Not counting Hyperthreading, they surpass the 200 GHz mark combined. That entire farm does around 30k BOINC PPD ;)
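To put numbers on points 2 and 3 - a quick sketch using my own figures from above; the one-second gap is my estimate, and the per-WU credit is implied from my observations, not an official figure:

    # Back-of-the-envelope for points 2 and 3, from my observed numbers.
    wu_seconds = 47        # crunch time per WU on a HD4870
    gap_seconds = 1        # estimated dead time before the next WU starts
    hd4870_ppd = 100_000   # observed BOINC PPD for one HD4870 here
    wcg_farm_ppd = 30_000  # observed PPD for the whole 12-rig WCG farm

    wus_per_day = 86400 / (wu_seconds + gap_seconds)
    overhead = gap_seconds / (wu_seconds + gap_seconds)
    print(f"~{wus_per_day:.0f} WUs/day per GPU, ~{overhead:.1%} lost to switching")
    print(f"implied credit per WU: ~{hd4870_ppd / wus_per_day:.0f}")
    print(f"one HD4870 vs the entire WCG farm: {hd4870_ppd / wcg_farm_ppd:.1f}x")
    # -> ~1800 WUs/day, ~2% throughput lost to switching, ~56 credits per
    #    WU, and a single GPU out-earning the whole farm by a factor of ~3.3.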

Finally, I will be getting a HD5870 shortly. Interested to see what it can do... probably 20s WUs, LOL!




