There's something VERY wrong with the "Top GPU models" list

Erico

Joined: 6 Oct 16
Posts: 11
Credit: 6,623,395
RAC: 0
Message 66557 - Posted: 9 Aug 2017, 12:33:42 UTC
Last modified: 9 Aug 2017, 12:34:40 UTC

Here is the list link: https://milkyway.cs.rpi.edu/milkyway/gpu_list.php

In what universe is a 560 Ti more productive than a 970? What is going on?
ID: 66557
mineroad

Joined: 7 Apr 10
Posts: 2
Credit: 127,675,955
RAC: 0
Message 66558 - Posted: 9 Aug 2017, 17:19:23 UTC - in response to Message 66557.  

The universe where the 560 Ti (163.97 GFLOPS) has more double-precision power than the 970 (109 GFLOPS).
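(Editor's note: for anyone wondering where figures like these come from, here is a minimal sketch of the usual back-of-the-envelope calculation. The core counts, clocks, and FP64 ratios below are commonly quoted spec-sheet values, not numbers from this thread, and published sources disagree on the exact 560 Ti figure.)

    # Theoretical throughput: GFLOPS = cores * clock_GHz * 2 (FMA) * FP64_ratio
    def dp_gflops(cores, clock_ghz, fp64_ratio):
        return cores * clock_ghz * 2 * fp64_ratio

    # GTX 970: 1664 Maxwell cores at ~1.05 GHz, FP64 capped at 1/32 of FP32
    print(round(dp_gflops(1664, 1.05, 1 / 32), 1))   # 109.2 -> the 109 above

    # GTX 560 Ti: 384 Fermi cores at ~1.645 GHz shader clock, FP64 at 1/12
    print(round(dp_gflops(384, 1.645, 1 / 12), 1))   # 105.3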
ID: 66558
Vester

Joined: 30 Dec 14
Posts: 34
Credit: 909,988,687
RAC: 918
Message 66559 - Posted: 10 Aug 2017, 12:33:29 UTC

Is it possible for one to tell if a computer is running SLI or not?

On SLI with two cards, per NVIDIA:
How much of a performance increase will I see with SLI technology?
The amount of performance improvement will depend on the application and its ability to scale. Several of today's hottest games see a full 2x increase in performance when using SLI technology with two graphics cards. 3-way NVIDIA SLI technology enables up to 2.8x performance increase over a single GPU. In general, applications running at higher resolutions with higher image quality settings will benefit most.


The GTX 560 specifications show that the GTX 560 Ti can be run in two-way SLI and the GTX 560 Ti Limited Edition can be run in three-way SLI.

This could explain the observed performance.
ID: 66559
Erico

Joined: 6 Oct 16
Posts: 11
Credit: 6,623,395
RAC: 0
Message 66560 - Posted: 10 Aug 2017, 12:43:58 UTC - in response to Message 66558.  
Last modified: 10 Aug 2017, 12:58:21 UTC

The universe where the 560 Ti (163.97 GFLOPS) has more double-precision power than the 970 (109 GFLOPS).


Wouldn't that make its performance roughly 50% better, then, rather than marginally better as the table shows?

Edit: never mind, it's probably due to the higher clock of the 970.
ID: 66560
PerryM

Joined: 8 Aug 17
Posts: 1
Credit: 536,277
RAC: 0
Message 66561 - Posted: 17 Aug 2017, 9:53:25 UTC
Last modified: 17 Aug 2017, 10:50:10 UTC

New boy here!
Can somebody explain to me why the GTX 980 is not on the list for the Mac?
I got a warning when I first joined saying that I couldn't use a CUDA driver later than v6 - I am currently on v8.0.83
It works fine as a processor for video apps, etc., and I understand its limitations for double precision.
I have a Mac Pro 4,1 hacked to 5,1 and running Sierra. It has a GT 640 to drive the monitors, so the GTX 980 is used just for extra processing. CUDA-Z works fine, so I can give numbers if anyone is interested.
I have just found the BOINC Manager event log, which says the NVIDIA GPU is being used, so is the CUDA driver OK?
ID: 66561
Sputnik

Joined: 20 Aug 11
Posts: 2
Credit: 295,627,496
RAC: 4,054
Message 66574 - Posted: 27 Aug 2017, 7:44:16 UTC

Is there something wrong with the GPU list? The RX 480 (around 360 GFLOP/s double precision) sits at number one despite having less double-precision power than the 6970, which has over 680 GFLOP/s DP.
ID: 66574
JoeM

Joined: 9 Apr 13
Posts: 9
Credit: 123,709,570
RAC: 0
Message 66875 - Posted: 22 Dec 2017, 2:06:51 UTC

Something is wrong with that list, for sure. Tahiti (HD 7970, HD 7950, R9 280X) dominates MilkyWay@home; just look at Assimilator's thread on benchmarks. Even older AMD cards are quite respectable, like the HD 6970 (Cayman) and HD 5870 (Cypress).

There's just not much double precision in modern high-performance cards unless you want to spend a fortune, and even then....

That list just doesn't make sense for this project.
ID: 66875
mikey

Joined: 8 May 09
Posts: 3315
Credit: 519,939,961
RAC: 22,685
Message 66876 - Posted: 22 Dec 2017, 13:17:48 UTC - in response to Message 66559.  

Is it possible for one to tell if a computer is running SLI or not?

On SLI with two cards, per NVIDIA:
How much of a performance increase will I see with SLI technology?
The amount of performance improvement will depend on the application and its ability to scale. Several of today's hottest games see a full 2x increase in performance when using SLI technology with two graphics cards. 3-way NVIDIA SLI technology enables up to 2.8x performance increase over a single GPU. In general, applications running at higher resolutions with higher image quality settings will benefit most.


The GTX 560 specifications show that the GTX 560 Ti can be run in two-way SLI and the GTX 560 Ti Limited Edition can be run in three-way SLI.

This could explain the observed performance.


In BOINC, SLI does not work; it only helps gaming and other such programs. Even if someone has an SLI bridge between the cards, BOINC still sees them as individual cards, and their crunching power is no higher. That person is probably overclocking their GPUs, which does show up as higher performance in BOINC. There are some very nice closed-loop cooling systems that work very well; with one of those you can overclock your GPUs beyond their normal speeds with few problems. If someone has bridged their cards for SLI, then overclocking them is the likely next step to get more performance when gaming, etc.
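(Editor's note: a minimal way to see mikey's point for yourself, assuming an NVIDIA system with nvidia-smi installed and on the PATH. The driver reports each physical card separately, bridged or not, which is exactly the per-device view BOINC gets.)

    # List GPUs as the driver exposes them; SLI does not merge them into one.
    import subprocess

    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    for line in result.stdout.splitlines():
        print(line)   # one line per physical GPU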
ID: 66876
mikey

Joined: 8 May 09
Posts: 3315
Credit: 519,939,961
RAC: 22,685
Message 66877 - Posted: 22 Dec 2017, 13:27:48 UTC - in response to Message 66875.  

Something is wrong with that list, for sure. Tahiti (HD 7970, HD 7950, R9 280X) dominates MilkyWay@home; just look at Assimilator's thread on benchmarks. Even older AMD cards are quite respectable, like the HD 6970 (Cayman) and HD 5870 (Cypress).

There's just not much double precision in modern high-performance cards unless you want to spend a fortune, and even then....

That list just doesn't make sense for this project.


I think it's because the app was written a while back and the new cards aren't doing things the same way anymore. My 5870 does them in about 143 seconds, my 1080 Ti in 122 seconds, my 480 in 132 seconds, my 1060 in 246 seconds, and my 980 in 256 seconds. I don't overclock anything; the cards run as they come out of the box.
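(Editor's note: a quick sanity check on two of those timings, using commonly quoted theoretical FP64 figures that are assumptions, not numbers from the post. The 1080 Ti finishes slightly faster despite a lower theoretical FP64 rate, which suggests the bundled tasks are not purely FP64-bound.)

    # Observed seconds per bundled task (from the post) vs. theoretical FP64
    # GFLOPS (assumed spec-sheet values: HD 5870 = 1/5 FP32, 1080 Ti = 1/32).
    observed_s = {"HD 5870": 143, "GTX 1080 Ti": 122}
    fp64_gflops = {"HD 5870": 544, "GTX 1080 Ti": 354}

    for card in observed_s:
        print(f"{card}: {observed_s[card]} s/task, {fp64_gflops[card]} FP64 GFLOPS")
    # The card with ~35% less theoretical FP64 is ~15% faster here, so
    # clocks, memory, and per-bundle overhead clearly matter too.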
ID: 66877
Jake Weiss
Volunteer moderator
Project developer
Project tester
Project scientist

Joined: 25 Feb 13
Posts: 580
Credit: 94,200,158
RAC: 0
Message 66880 - Posted: 23 Dec 2017, 15:57:40 UTC

Hey Everyone,

Just to give a little insight into why newer cards may not seem to be a huge improvement over older ones: about a year ago we switched to bundled work units, which are about 5 times the size of the old work units.

We also currently run our application with double-precision calculations to improve the fitting of our models. On older GPUs you might find that 1/4 or 1/8 of the cores could do double-precision calculations. Newer GPUs have significantly more cores in general, but the ratio of double- to single-precision cores has dropped to 1/24 or 1/32. This means the number of double-precision cores has not scaled to the same degree single-precision cores have, so you will not see huge performance increases with newer cards.
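(Editor's note: a worked example of the scaling Jake describes, using commonly quoted spec numbers that are assumptions rather than figures from the post.)

    # FP32 throughput and FP64 ratio across two GeForce generations.
    cards = {
        "GTX 580 (Fermi, 2010)":   (1581, 1 / 8),    # (FP32 GFLOPS, FP64 ratio)
        "GTX 970 (Maxwell, 2014)": (3494, 1 / 32),
    }
    for name, (fp32, ratio) in cards.items():
        print(f"{name}: FP32 {fp32} GFLOPS, FP64 {fp32 * ratio:.0f} GFLOPS")
    # FP32 more than doubled (1581 -> 3494) while FP64 actually fell
    # (198 -> 109): the reason a much newer card can lose on this project.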

That being said, I have been testing our application with single-precision calculations to see whether we could have any use for them in the future. The results look promising, and hopefully in the first couple of months of next year you might see a test project popping up to check whether it gets the same results as our double-precision application.
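(Editor's note: a tiny illustration, not project code, of what is at stake in the double- vs. single-precision choice: contributions smaller than about one part in ten million simply vanish from a float32 accumulator.)

    import numpy as np

    total32 = np.float32(1.0)
    total64 = np.float64(1.0)
    term = 1e-8                                     # a small contribution

    print(total32 + np.float32(term) == total32)    # True: the term is lost
    print(total64 + term == total64)                # False: float64 keeps it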

Jake
ID: 66880
mikey

Joined: 8 May 09
Posts: 3315
Credit: 519,939,961
RAC: 22,685
Message 66883 - Posted: 24 Dec 2017, 15:20:19 UTC - in response to Message 66880.  

Hey Everyone,

Just to give a little insight into why newer cards may not seem to be a huge improvement over older ones: about a year ago we switched to bundled work units, which are about 5 times the size of the old work units.

We also currently run our application with double-precision calculations to improve the fitting of our models. On older GPUs you might find that 1/4 or 1/8 of the cores could do double-precision calculations. Newer GPUs have significantly more cores in general, but the ratio of double- to single-precision cores has dropped to 1/24 or 1/32. This means the number of double-precision cores has not scaled to the same degree single-precision cores have, so you will not see huge performance increases with newer cards.

That being said, I have been testing our application with single-precision calculations to see whether we could have any use for them in the future. The results look promising, and hopefully in the first couple of months of next year you might see a test project popping up to check whether it gets the same results as our double-precision application.

Jake


Even if the tasks took longer, if they came out with good results, that would let some of the new GPUs run them faster, I would think.
ID: 66883
