Rx570 vs. gtx 1080, 1080ti, 2080

wolfman1360

Joined: 17 Feb 17
Posts: 21
Credit: 8,511,880
RAC: 0
Message 69063 - Posted: 19 Sep 2019, 6:19:08 UTC

Hello everyone.
I've been an on-and-off contributor to this project for a while now. Right now, Einstein is taking priority despite my resource shares saying otherwise, but that's a BOINC problem more than anything. Regardless, when I was getting work from here, the two projects seemed to be a very nice fit for each other, though maybe I will soon set Einstein to 0% resource share.
Right now I've got this RX 570 set to run two concurrent WUs. My app_config looks like this, and I seem to be completing them within 2-2.5 minutes.
<app_config>
   <app>
      <name>milkyway</name>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>
         <cpu_usage>0.25</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
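(As I understand the BOINC scheduler, the task count just follows from gpu_usage: tasks per GPU ≈ 1 / gpu_usage, so 1 / 0.5 = 2 concurrent WUs here, and 0.33 would give 3. Correct me if I have that wrong.)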
Does that look alright for this card? I'm still very new at assigning CPU cores to GPU work. The GPU load stays pegged at what I think is a constant 100%, with only occasional dips, though I'm not entirely sure whether I should be paying attention to that, memory used, power draw, or something else entirely.
The processor is a Ryzen 1800X, which is crunching Asteroids right now, and I have BOINC set to use 87% of the CPUs, since Einstein likes to use one core per WU and I just use the website preferences as a guide.
Does any of this need to be changed at all for better optimization?

Now for the interesting question. I know that this project favors AMD cards quite heavily. What kinds of runtimes can I expect from a GTX 1080, 1080 Ti, or 2080? How many concurrent WUs do folks recommend on those cards, and how many CPU cores should be reserved for the GPU? Is there anything else I should keep in mind?
Thanks a ton!
mmonnin

Joined: 2 Oct 16
Posts: 167
Credit: 1,005,839,047
RAC: 48,035
Message 69066 - Posted: 19 Sep 2019, 11:07:20 UTC

AMD RX cards perform MUCH better at E@H than MW@H. That is where I ran my RX580.

This project favors cards with high FP64 compute power. So AMD 78xx/R9, NV Titan Black, AMD Radeon VII, NV Titan V. Most of the top PCs run one of those 4 generations of GPUs.
https://milkyway.cs.rpi.edu/milkyway/top_hosts.php

I have most of your cards (the RX, 1080 and Ti) but will never run them at MW@H, as they have low FP64 compute power. But if you want, run as many simultaneous tasks as it takes to keep utilization pegged. The <cpu_usage> value in <gpu_versions> just reserves that much CPU by NOT running CPU tasks on it; it in no way affects the actual CPU usage of the GPU app. If you put <cpu_usage>4</cpu_usage>, BOINC will run 4 fewer CPU tasks and leave those 4 CPUs for the GPU, even if the GPU task only uses 0.1 CPU threads in Task Manager.
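Purely as an illustration (the numbers just mirror that example, not a recommendation for any particular card), that would look something like:
<app_config>
   <app>
      <name>milkyway</name>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>4.0</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
With gpu_usage 1.0 that is one task per GPU, and the 4.0 just makes BOINC leave 4 CPU threads idle for it; the GPU app will still use whatever CPU it actually uses.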
Jim1348

Joined: 9 Jul 17
Posts: 100
Credit: 16,967,906
RAC: 0
Message 69067 - Posted: 19 Sep 2019, 12:21:52 UTC - in response to Message 69066.  
Last modified: 19 Sep 2019, 12:23:24 UTC

I concur with all of that. Use the Nvidia cards elsewhere. And Windows does better than Linux (or Darwin), which is a bit unusual.

I am very happy with my RX 570 on Win7 64-bit, averaging around 100 seconds per work unit. That is a bit surprising, since it does not have great double-precision specs.
https://milkyway.cs.rpi.edu/milkyway/results.php?hostid=803662

PS - You can get faster cards, but it won't do much good, since they usually have problems getting enough work for some reason. I am not sure whether that is fixed or not.
Keith Myers

Joined: 24 Jan 11
Posts: 708
Credit: 543,286,322
RAC: 140,284
Message 69073 - Posted: 19 Sep 2019, 18:27:24 UTC - in response to Message 69063.  
Last modified: 19 Sep 2019, 18:29:34 UTC

I seem to do the work units in around 100 - 140 seconds for both my 1080 and 2080. And around 90 seconds for my 1080 Ti.
Keith Myers

Joined: 24 Jan 11
Posts: 708
Credit: 543,286,322
RAC: 140,284
Message 69074 - Posted: 19 Sep 2019, 18:34:42 UTC - in response to Message 69067.  

And Windows does better than Linux (or Darwin), which is a bit unusual.

I disagree. Here is a task both run on a 1080 Ti. Mine on Linux and my wingman on Windows.
https://milkyway.cs.rpi.edu/milkyway/workunit.php?wuid=1803261603
My card did the task 10 seconds faster.
gambatesa

Joined: 23 Feb 18
Posts: 26
Credit: 4,744,416,145
RAC: 0
Message 69075 - Posted: 19 Sep 2019, 20:04:15 UTC - in response to Message 69073.  

I seem to do the work units in around 100 - 140 seconds for both my 1080 and 2080. And around 90 seconds for my 1080 Ti.


Milkyway needs a strong FP64 GPU. I don't have any Nvidia, only AMD, to compare: a Radeon VII can complete 4 WUs in 40-45 secs (10-11 secs each) and a Radeon 7970/280X can complete 3 WUs in 100 secs (33 secs each); a 7950/280 needs 3-4 secs more.
Want your kids to stay off drugs? Get them building crunching PCs and they'll never have enough money for drugs.
Jim1348

Joined: 9 Jul 17
Posts: 100
Credit: 16,967,906
RAC: 0
Message 69077 - Posted: 19 Sep 2019, 20:54:42 UTC - in response to Message 69074.  
Last modified: 19 Sep 2019, 20:55:44 UTC

And Windows does better than Linux (or Darwin), which is a bit unusual.

I disagree. Here is a task both run on a 1080 Ti. Mine on Linux and my wingman on Windows.
https://milkyway.cs.rpi.edu/milkyway/workunit.php?wuid=1803261603
My card did the task 10 seconds faster.

Are you running two cards? It looks like you have an RTX 2080.
But I have looked mainly at the N-body tasks (many of them), and that was my conclusion there. Perhaps it does not hold true for the GPUs, and the difference in OS may not be so important.
wolfman1360

Joined: 17 Feb 17
Posts: 21
Credit: 8,511,880
RAC: 0
Message 69080 - Posted: 19 Sep 2019, 22:43:44 UTC - in response to Message 69073.  

I seem to do the work units in around 100 - 140 seconds for both my 1080 and 2080. And around 90 seconds for my 1080 Ti.

Thank you. This is exactly what I was looking for. How many workunits do you run at once on all 3 of those cards?
I know there are cards far better suited to this project that can wipe the floor with mine. Maybe I'll grab an r9 280x down the road. I also know that every little bit helps and I'm certainly not looking at getting into the top anything. I simply don't have the finances or physical space for that. I just wanted to make sure I could maximize output with what I do have.
Keith Myers

Joined: 24 Jan 11
Posts: 708
Credit: 543,286,322
RAC: 140,284
Message 69082 - Posted: 20 Sep 2019, 0:05:21 UTC - in response to Message 69077.  

And Windows does better than Linux (or Darwin), which is a bit unusual.

I disagree. Here is a task both run on a 1080 Ti. Mine on Linux and my wingman on Windows.
https://milkyway.cs.rpi.edu/milkyway/workunit.php?wuid=1803261603
My card did the task 10 seconds faster.

Are you running two cards? It looks like you have an RTX 2080.
But I have looked mainly at the N-body tasks (many of them), and that was my conclusion there. Perhaps it does not hold true for the GPUs, and the difference in OS may not be so important.

I run either 3 or 4 cards on each host. The one I referenced has two RTX 2070s, one GTX 1080 Ti and one RTX 2080.
Keith Myers

Joined: 24 Jan 11
Posts: 708
Credit: 543,286,322
RAC: 140,284
Message 69083 - Posted: 20 Sep 2019, 0:08:27 UTC - in response to Message 69080.  

I seem to do the work units in around 100 - 140 seconds for both my 1080 and 2080. And around 90 seconds for my 1080 Ti.

Thank you. This is exactly what I was looking for. How many workunits do you run at once on all 3 of those cards?
I know there are cards far better suited to this project that can wipe the floor with mine. Maybe I'll grab an r9 280x down the road. I also know that every little bit helps and I'm certainly not looking at getting into the top anything. I simply don't have the finances or physical space for that. I just wanted to make sure I could maximize output with what I do have.

I only run single tasks on each card. My primary project is Seti, with the special app that requires running only singles on each card. So all my projects run singles.
Keith Myers

Joined: 24 Jan 11
Posts: 708
Credit: 543,286,322
RAC: 140,284
Message 69084 - Posted: 20 Sep 2019, 0:22:15 UTC - in response to Message 69075.  

I seem to do the work units in around 100 - 140 seconds for both my 1080 and 2080. And around 90 seconds for my 1080 Ti.


Milkyway needs a strong FP64 GPU. I don't have any Nvidia, only AMD, to compare: a Radeon VII can complete 4 WUs in 40-45 secs (10-11 secs each) and a Radeon 7970/280X can complete 3 WUs in 100 secs (33 secs each); a 7950/280 needs 3-4 secs more.

Yes, I won't argue that the consumer Nvidia cards are deliberately FP64-crippled compared to the prosumer or research cards so they don't cannibalize the sales of Teslas and Quadros. If your primary project is MilkyWay, then an ATI/AMD card makes the most sense. I have just stayed away from ATI/AMD because of the challenge of installing the drivers and maintaining them. The Nvidia drivers just install and run with no issues ever. The ATI/AMD drivers are a complete fiasco, as all the constant posts about issues in the forums attest. That is what I seem to spend most of my time on in the forums: trying to help users with ATI/AMD cards that won't run compute.

I know that AMD cards are cheaper than Nvidia cards, but I sometimes wish I could just tell someone: dump the AMD card and get an Nvidia card, and you will be up and running instantly without the headache of the AMD drivers. All I am saying is that the perceived FP64 deficit on Nvidia is not that great in the end, discounting a Radeon VII of course.

And if your primary project is Seti, then it is a no-brainer to get Nvidia cards, since the applications available for Nvidia on Linux are so much better and faster than anything on Windows or on AMD cards.
Jim1348

Joined: 9 Jul 17
Posts: 100
Credit: 16,967,906
RAC: 0
Message 69092 - Posted: 20 Sep 2019, 14:36:03 UTC - in response to Message 69084.  
Last modified: 20 Sep 2019, 14:52:25 UTC

I have just stayed away from ATI/AMD because of the challenge of installing the drivers and maintaining them. The Nvidia drivers just install and run with no issues ever. The ATI/AMD drivers are a complete fiasco, as all the constant posts about issues in the forums attest.

True enough for the latest versions. But the RX 500 series is easy enough as I understand it, though mine is on Windows. And the power efficiency of AMD is much better here; I use about 90 watts (GPU-Z) on my RX 570 for my 100 seconds; I expect it would be more like 140 watts on the Nvidia cards.

I have a bunch of Nvidias, but they are used elsewhere (on Ubuntu).
wolfman1360

Joined: 17 Feb 17
Posts: 21
Credit: 8,511,880
RAC: 0
Message 69114 - Posted: 23 Sep 2019, 17:57:05 UTC - in response to Message 69092.  

I have just stayed away from ATI/AMD because of the challenge of installing the drivers and maintaining them. The Nvidia drivers just install and run with no issues ever. The ATI/AMD drivers are a complete fiasco, as all the constant posts about issues in the forums attest.

True enough for the latest versions. But the RX 500 series is easy enough as I understand it, though mine is on Windows. And the power efficiency of AMD is much better here; I use about 90 watts (GPU-Z) on my RX 570 for my 100 seconds; I expect it would be more like 140 watts on the Nvidia cards.

I have a bunch of Nvidias, but they are used elsewhere (on Ubuntu).

Perhaps the latest AMD drivers are having issues somewhere (on Seti in particular, I know), but I have never had a driver issue on any of my RX-series AMD cards.
For this project in particular, though, the FP64 performance of the Nvidia cards, at least on the consumer side of things, is pretty terrible. RTX 2080: 314.6 GFLOPS. RX 570: 318.5 GFLOPS. So for around $100 you can get better performance on this project than from an $1,100 GPU, and use much less power to boot. Those numbers seem to correlate with real-world performance on this project from what I can tell, too.
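For what it's worth, those FP64 figures line up with the published FP32 ratings and the usual FP64 ratios (1:16 on Polaris, 1:32 on Turing), if I have those right:
RX 570:   ~5.1 TFLOPS FP32 / 16 ≈ 318 GFLOPS FP64
RTX 2080: ~10.1 TFLOPS FP32 / 32 ≈ 315 GFLOPS FP64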
Are there other projects that heavily utilize fp64 as well?
Keith Myers

Joined: 24 Jan 11
Posts: 708
Credit: 543,286,322
RAC: 140,284
Message 69115 - Posted: 23 Sep 2019, 18:59:29 UTC - in response to Message 69114.  

Are there other projects that heavily utilize fp64 as well?

MilkyWay is the only project I am aware of with a requirement for FP64. Pretty much every other project only needs single precision.
mmonnin

Joined: 2 Oct 16
Posts: 167
Credit: 1,005,839,047
RAC: 48,035
Message 69116 - Posted: 24 Sep 2019, 0:06:31 UTC

PrimeGrid GFN tasks are FP64 as well. The last NV consumer card with high FP64 was the Titan Black, I believe. Now NV leaves that to the pro cards like Tesla.
sir sant

Joined: 7 Dec 09
Posts: 11
Credit: 637,858,888
RAC: 0
Message 69152 - Posted: 2 Oct 2019, 19:22:43 UTC - in response to Message 69114.  

I have just stayed away from ATI/AMD because of the challenge of installing the drivers and maintaining them. The Nvidia drivers just install and run with no issues ever. The ATI/AMD drivers are a complete fiasco, as all the constant posts about issues in the forums attest.

True enough for the latest versions. But the RX 500 series is easy enough as I understand it, though mine is on Windows. And the power efficiency of AMD is much better here; I use about 90 watts (GPU-Z) on my RX 570 for my 100 seconds; I expect it would be more like 140 watts on the Nvidia cards.

I have a bunch of Nvidias, but they are used elsewhere (on Ubuntu).

Perhaps the latest AMD drivers are having issues somewhere (on Seti in particular, I know), but I have never had a driver issue on any of my RX-series AMD cards.
For this project in particular, though, the FP64 performance of the Nvidia cards, at least on the consumer side of things, is pretty terrible. RTX 2080: 314.6 GFLOPS. RX 570: 318.5 GFLOPS. So for around $100 you can get better performance on this project than from an $1,100 GPU, and use much less power to boot. Those numbers seem to correlate with real-world performance on this project from what I can tell, too.
Are there other projects that heavily utilize fp64 as well?


The AMD 7970/R9 280X has about 1,000 GFLOPS of FP64, and used ones are cheap even on eBay:
https://www.ebay.com/b/AMD-Radeon-R9-280X-Computer-Graphics-Cards/27386/bn_8951879
Just buy one with good cooling. Performance per watt is still very good despite the 240 W TDP.
That is because other gaming cards have had their DP performance cut down, since games don't really need it.

My good old 7970 GHz Edition is able to pull 500,000 points per day without overclocking (3 WUs at a time).
The Radeon VII and Titan V have better performance and performance per watt, but for a hobby they are too expensive IMO.

About AMD drivers: they are what they are, but they have matured a great deal.
I've been a mixed AMD and Nvidia user (GPUs in the same PC) since the days of the almighty 1 GB ATI 7870 CF + GF 8800.

The old rule for ATI/AMD drivers was: when new GPUs come out, wait a year for the drivers to mature and then buy. That wasn't the case for the 7990 dual-GPU board, because after a year people started selling them off for their lack of real-world gaming performance. Synthetic benchmarks scaled well (I was a 3-way 7970 CF user back then), but gaming performance sucked.

Later drivers got better, but it took far too long for AMD to get there. People were disappointed.
And of course AMD then quietly killed off its CF support through the drivers, so for Doom 2016 I bought a single GTX 1060 3 GB, and I really haven't gone back to SLI or CF since then.
