There's something VERY wrong with the "Top GPU models" list
Joined: 6 Oct 16 Posts: 11 Credit: 6,623,395 RAC: 0
Here is the list link: https://milkyway.cs.rpi.edu/milkyway/gpu_list.php
In what universe is a 560 Ti more productive than a 970? What is going on?
Joined: 7 Apr 10 Posts: 2 Credit: 127,675,955 RAC: 0
The universe where the 560 Ti (163.97 GFLOPS) has more double precision power than the 970 (109 GFLOPS).
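(Peak figures like these follow from the usual back-of-the-envelope formula: shader count × shader clock × 2 FLOPs per cycle, scaled by the card's FP64:FP32 ratio. Here is a minimal sketch of that arithmetic; the core counts, clocks, and ratios below are assumptions taken from public spec sheets, not values stated in this thread.)

```python
# Back-of-the-envelope peak throughput: cores * clock (GHz) * 2 FLOPs/cycle (FMA),
# multiplied by the card's FP64:FP32 ratio. The spec numbers are assumptions from
# public data sheets, not measurements from this project.
def peak_fp64_gflops(cores: int, shader_clock_ghz: float, fp64_ratio: float) -> float:
    return cores * shader_clock_ghz * 2 * fp64_ratio

# GTX 560 Ti 448 Cores (GF110): 448 cores @ ~1.464 GHz shader clock, FP64 = 1/8 of FP32
print(round(peak_fp64_gflops(448, 1.464, 1 / 8), 2))    # ~163.97
# GTX 970 (GM204): 1664 cores @ ~1.05 GHz base clock, FP64 = 1/32 of FP32
print(round(peak_fp64_gflops(1664, 1.050, 1 / 32), 1))  # ~109.2
```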
Joined: 30 Dec 14 Posts: 34 Credit: 909,998,366 RAC: 0
Is it possible to tell whether a computer is running SLI or not? NVIDIA's SLI FAQ addresses "How much of a performance increase will I see with SLI technology?" for two-card setups, and the GTX 560 specifications show that the GTX 560 Ti can be run in two-way SLI and the GTX 560 Ti Limited Edition in three-way SLI. This could explain the observed performance.
Joined: 6 Oct 16 Posts: 11 Credit: 6,623,395 RAC: 0
"The universe where the 560 Ti (163.97 GFLOPS) has more double precision power than the 970 (109 GFLOPS)." Wouldn't that make its performance roughly 50% better, then, rather than marginally better as the table shows? Edit: never mind, it's probably due to the higher clock of the 970.
Joined: 8 Aug 17 Posts: 1 Credit: 536,277 RAC: 0
New boy here! Can somebody explain to me why the GTX 980 is not on the list for the Mac? I got a warning when I first joined saying that I couldn't use a CUDA driver later than v6; I am currently on v8.0.83. The card works fine as a processor for video apps etc., and I understand its limitations for double precision. I have a Mac Pro 4,1 hacked to 5,1 and running Sierra. It has a GT 640 to drive the monitors, so the GTX 980 is used purely for extra processing. CUDA-Z is working fine, so I can give numbers if anyone is interested. I have just found the BOINC Manager Event Log, and it says the Nvidia GPU is being used, so the CUDA driver is OK?
Joined: 20 Aug 11 Posts: 2 Credit: 296,418,931 RAC: 6,654
Is there something wrong with the GPU list? The RX 480 (roughly 360 GFLOP/s double precision) sits at number one, even though it has less double-precision power than the 6970 (over 680 GFLOP/s double precision).
Joined: 9 Apr 13 Posts: 9 Credit: 123,709,570 RAC: 0
Something is wrong with that list for sure. Tahiti (HD 7970, HD 7950, R9 280X) dominates MilkyWay@home; just look at Assimilator's thread on benchmarks. Even older AMD cards are quite respectable, like the HD 6970 (Cayman) and 5870 (Cypress). There just isn't much double precision in modern high-performance cards unless you want to spend a fortune, and even then.... That list just doesn't make sense for this project.
Joined: 8 May 09 Posts: 3339 Credit: 524,010,781 RAC: 0
"Is it possible for one to tell if a computer is running SLI or not?" SLI does not work in BOINC; it only applies to gaming and other programs. Even if someone does have an SLI bridge between cards, BOINC still sees them as individual cards, and their crunching power is no higher. The individual is probably overclocking their GPUs; that does show up as higher performance in BOINC. There are some very nice closed-loop cooling systems that work very well, and with them you can overclock your GPUs to higher-than-normal speeds with few problems. If the person has used a bridge to SLI the cards, then overclocking is the likely next step to get more performance when gaming, etc.
Joined: 8 May 09 Posts: 3339 Credit: 524,010,781 RAC: 0
"Something is wrong with that list for sure. Tahiti (HD 7970, HD 7950, R9 280X) dominates MilkyWay@home; just look at Assimilator's thread on benchmarks. Even older AMD cards are quite respectable, like the HD 6970 (Cayman) and 5870 (Cypress)." I think it's because the app was written a while back and the new cards aren't doing things the same way anymore. My 5870 is doing them in about 143 seconds, my 1080 Ti in 122 seconds, and my 480 in 132 seconds. My 1060 is doing them in 246 seconds and my 980 in 256 seconds. I don't overclock anything; the cards run as they come out of the box.
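(To put those run times on a common scale, here is a quick conversion to work units per hour, using only the numbers quoted in the post above.)

```python
# Convert the per-workunit run times quoted above into throughput (WU/hour),
# relative to the HD 5870. This is purely a restatement of the numbers in the post.
times_s = {"HD 5870": 143, "GTX 1080 Ti": 122, "RX 480": 132, "GTX 1060": 246, "GTX 980": 256}
baseline = 3600 / times_s["HD 5870"]
for card, t in sorted(times_s.items(), key=lambda kv: kv[1]):
    rate = 3600 / t
    print(f"{card:12s} {rate:5.1f} WU/h  ({rate / baseline:.2f}x the HD 5870)")
```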
Joined: 25 Feb 13 Posts: 580 Credit: 94,200,158 RAC: 0
Hey Everyone,

Just for a little insight as to why newer cards may not seem to be a huge improvement over older ones: about a year ago we switched to bundled work units, which are about 5 times the size of the old work units. We also currently run our application with double precision calculations to improve the fitting of our models. In older GPUs you might find that 1/4 or 1/8 of the cores could do double precision calculations. Newer GPUs have significantly more cores in general, but the ratio of double to single precision cores has dropped to 1/24 or 1/32. This means the number of double precision cores has not scaled to the same degree single precision cores have, and as such you will not see huge performance increases with newer cards.

That being said, I have been testing our application with single precision calculations to see if we could have any use for them in the future. The results look promising, and hopefully in the first couple of months of next year you might see a test project popping up to check whether it gets the same results as our double precision application.

Jake
Joined: 8 May 09 Posts: 3339 Credit: 524,010,781 RAC: 0
Even if the single precision tasks took longer, if they came out with good results that would let some of the newer GPUs run them faster, I would think.