Welcome to MilkyWay@home

Message boards : Number crunching : Ampere
Chooka
Joined: 13 Dec 12
Posts: 78
Credit: 1,341,034,248
RAC: 2,448,356
Message 70133 - Posted: 30 Sep 2020, 9:47:44 UTC

I figured this would be a good starting point for anyone with a 3000 series NVIDIA card who wants to share their experience, or people considering these for Milkyway@Home.

Keith Myers
Joined: 24 Jan 11
Posts: 451
Credit: 348,467,274
RAC: 421,630
Message 70134 - Posted: 30 Sep 2020, 19:46:04 UTC
Last modified: 30 Sep 2020, 19:54:12 UTC

Will follow this thread. Don't expect too many Ampere cards here, because they are gimped once again by Nvidia on FP64 compute: half the FP64 ratio (1:64) compared to the previous-generation Turing (1:32).
But whether the other architectural improvements ameliorate the reduction in FP64 is the question.
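The ratio arithmetic is easy to sketch. A minimal estimate of theoretical FP64 throughput from a card's FP32 peak and its FP64 ratio (the FP32 figures below are approximate published specs, used only for illustration):

```python
def fp64_tflops(fp32_peak_tflops: float, ratio: int) -> float:
    """Theoretical FP64 throughput for a card with a 1:N FP64 ratio."""
    return fp32_peak_tflops / ratio

# Approximate FP32 peaks (illustrative): RTX 2080 Ti ~13.4 TFLOPS at 1:32,
# RTX 3080 ~29.8 TFLOPS at 1:64.
turing_fp64 = fp64_tflops(13.4, 32)  # ~0.42 TFLOPS
ampere_fp64 = fp64_tflops(29.8, 64)  # ~0.47 TFLOPS
```

So on paper the bigger FP32 engine roughly cancels out the halved ratio; whether that holds in a real Separation workload is the open question.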

[Edit] There actually was a 3080 already crunching here on the 19th, but all of its tasks have already been cleared from the database. I don't remember anything remarkable about the runtimes.
https://milkyway.cs.rpi.edu/milkyway/results.php?hostid=864899
https://milkyway.cs.rpi.edu/milkyway/hosts_user.php?userid=34850
Chooka
Joined: 13 Dec 12
Posts: 78
Credit: 1,341,034,248
RAC: 2,448,356
Message 70135 - Posted: 1 Oct 2020, 8:03:57 UTC
Last modified: 1 Oct 2020, 8:11:29 UTC

Hi Keith,
I agree that they might not be great at Milkyway but I was curious anyway.
I'm interested to see what AMD bring to the table in October but I'm not expecting another Radeon VII type card.

Keith Myers
Joined: 24 Jan 11
Posts: 451
Credit: 348,467,274
RAC: 421,630
Message 70136 - Posted: 1 Oct 2020, 19:26:54 UTC

I agree; I don't think we will see RVII performance either. I am curious why they went with such a narrow memory bus. I know they don't have access to GDDR6X, since that memory is exclusive to Nvidia from Micron, and HBM is just too expensive for consumer cards. But I would have thought the RX 6900 XT would have had at minimum a 384-bit bus, or even 512-bit.

I wonder if the memory infrastructure will hamper the architectural improvements from RDNA2?
Chooka
Joined: 13 Dec 12
Posts: 78
Credit: 1,341,034,248
RAC: 2,448,356
Message 70141 - Posted: 11 Oct 2020, 5:16:18 UTC

This could be the answer Keith?
Infinity Cache :)

https://www.techradar.com/au/news/amds-infinity-cache-could-be-big-navis-secret-weapon-to-beat-nvidias-rtx-3000-gpus

Keith Myers
Joined: 24 Jan 11
Posts: 451
Credit: 348,467,274
RAC: 421,630
Message 70142 - Posted: 11 Oct 2020, 6:19:31 UTC

Somehow I don't think so. The extra cache would take a lot more transistors and silicon real estate, not to mention a further increase in the power budget. The purported die size is smaller than the 3080's, so I don't see how they could have added 128 MB of L2 cache on the same process node and still come in significantly smaller than the 3080 die.

Will have to wait and see if this Infinity Cache is for Big Navi or for Navi 22.
subsonic
Joined: 22 Feb 10
Posts: 3
Credit: 91,840,468
RAC: 0
Message 70285 - Posted: 28 Dec 2020, 17:07:03 UTC

Here we can see a crunching NVIDIA GeForce RTX 3080:
https://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=856552
Keith Myers
Joined: 24 Jan 11
Posts: 451
Credit: 348,467,274
RAC: 421,630
Message 70286 - Posted: 28 Dec 2020, 18:14:42 UTC - in response to Message 70285.  

Here we can see a crunching NVIDIA GeForce RTX 3080:
https://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=856552

Which is pretty pathetic, actually, assuming 1X tasks per card.
My old GTX 1080 Ti does a Separation task in ~90 seconds or less.
The 1:64 FP64 pipeline on the Ampere cards hurts MW processing even more than it did on the Turing cards.
mikey
Joined: 8 May 09
Posts: 2543
Credit: 462,666,679
RAC: 64
Message 70290 - Posted: 29 Dec 2020, 11:09:28 UTC - in response to Message 70286.  

Here we can see a crunching NVIDIA GeForce RTX 3080:
https://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=856552


Which is pretty pathetic actually assuming 1X tasks per card.
My old GTX 1080Ti does Separation in ~90 seconds or less.
The 1/64 DP FP pipeline on the Ampere cards hurt MW processing even more than the Turing cards.


Crunch3r is back crunching at Collatz; he has a 10 GB 3080 and posted some numbers:

https://boinc.thesonntags.com/collatz/forum_thread.php?id=202
Keith Myers
Joined: 24 Jan 11
Posts: 451
Credit: 348,467,274
RAC: 421,630
Message 70291 - Posted: 29 Dec 2020, 17:07:06 UTC - in response to Message 70290.  

Here we can see a crunching NVIDIA GeForce RTX 3080:
https://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=856552


Which is pretty pathetic actually assuming 1X tasks per card.
My old GTX 1080Ti does Separation in ~90 seconds or less.
The 1/64 DP FP pipeline on the Ampere cards hurt MW processing even more than the Turing cards.


Crunch3r is back crunching at Collatz and has a 10gb 3080 and posted some numbers:

https://boinc.thesonntags.com/collatz/forum_thread.php?id=202

Interesting thread. I didn't know anything about Collatz. Surprised it only uses integer ops, no floating point.

So not much of a comparison can be drawn to workloads at projects that use FP32 or FP64 operations.
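For reference, the Collatz iteration itself really is pure integer work. A minimal sketch (the project's actual GPU kernels are far more optimized; this just shows why no floating-point units are involved):

```python
def collatz_steps(n: int) -> int:
    """Steps for n to reach 1 under the Collatz map -- integer ops only."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# e.g. collatz_steps(27) -> 111
```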
ProDigit
Joined: 13 Nov 19
Posts: 9
Credit: 25,901,723
RAC: 0
Message 70293 - Posted: 30 Dec 2020, 1:14:40 UTC - in response to Message 70286.  

Here we can see a crunching NVIDIA GeForce RTX 3080:
https://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=856552

Which is pretty pathetic actually assuming 1X tasks per card.
My old GTX 1080Ti does Separation in ~90 seconds or less.
The 1/64 DP FP pipeline on the Ampere cards hurt MW processing even more than the Turing cards.

One thing you'd have to take into consideration:
the 3080 and 3090 have significantly more shaders, so the boost frequency is lower.
If any RTX card is utilized at less than 50%, the shader frequency drops further, to ~1350 MHz.
If that's the case, you can expect a 75% increase in performance just by letting the shaders run at their rated frequency. More if yours is a third-party card, since Nvidia's own factory cards are great for small cases (they essentially push 50-66% of the heat outside the case) but run less efficiently (hotter) than third-party GPUs with triple-fan heat sinks.

Add to that 2, 3, or 4 WUs per GPU (which will most likely push the GPU to its max frequency), and I think you could potentially see a 200-300% improvement (unless, for some reason, the GPU still draws less than 150 W).
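Running multiple WUs per GPU in BOINC is done with an app_config.xml in the project's directory. A sketch for two tasks per GPU; note the app name below is an assumption, so check the actual name in your client_state.xml or the event log before using it:

```xml
<app_config>
  <app>
    <name>milkyway</name>         <!-- assumed app name; verify in client_state.xml -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- 0.5 GPU per task => 2 tasks per GPU -->
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving it, use the client's "Read config files" option (or restart BOINC) for it to take effect.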
Keith Myers
Joined: 24 Jan 11
Posts: 451
Credit: 348,467,274
RAC: 421,630
Message 70296 - Posted: 30 Dec 2020, 19:32:52 UTC - in response to Message 70293.  

We could test your theory . . . IF the project admins at GPUGrid would release an Ampere-compatible acemd3 application.

That app has no trouble keeping a GPU busy at 98% utilization running just one task. It maxes out power consumption too; it will use the card's full TDP unless you power-limit it.

Teammate has two RTX 3070's just waiting on a compatible app.
0815-ICE
Joined: 4 Dec 20
Posts: 1
Credit: 9,906,513
RAC: 0
Message 70298 - Posted: 30 Dec 2020, 23:12:29 UTC

Hi there,

I'm still very new to MilkyWay@home and hardly know my way around. Are there any settings I should consider so that BOINC runs well? I see in Task Manager that my processor is at 100%, but my GPU is hardly used. I'm using an RTX 3090 and my computer ID is 873925.

Thank you for any tips.
Keith Myers
Joined: 24 Jan 11
Posts: 451
Credit: 348,467,274
RAC: 421,630
Message 70299 - Posted: 31 Dec 2020, 17:32:52 UTC - in response to Message 70298.  

A common mistake I see: you're reading Task Manager incorrectly. If I remember Windows users' comments on similar posts correctly, you have to change the view on the GPU page of Task Manager from Graphics to Compute in the graph menus. Your card is actually running at almost 100% utilization.


©2021 Astroinformatics Group