Welcome to MilkyWay@home

4x HD7990 in one PC?



Message boards : Number crunching : 4x HD7990 in one PC?
Sutaru Tsureku
Joined: 30 Apr 09
Posts: 99
Credit: 25,677,758
RAC: 12,715
Message 61000 - Posted: 7 Feb 2014, 15:07:17 UTC

I would like to build a new machine.

4x AMD Radeon HD7990 (dual-GPU) graphics cards with an ASUS Z9PE-D8 WS motherboard and 2x Intel Xeon E5-2630v2 CPUs.

My question: Is this possible?

One HD7990 uses up to 375W fully loaded (up to 395W in web tests):
2x 8-pin PCIe power connectors: 2x 150W = 300W,
plus 75W over the PCIe slot.

How much wattage can go over the 24-pin power connector?
I calculate 50W for the motherboard/system RAM,
plus 4x 75W per PCIe slot = 300W.
(Both CPUs get their power over the 2x EPS power connectors.)
So at least 350W must come over the 24-pin power connector.
Can the 24-pin connector/cable of the power supply deliver at least 350W, or is that too much for the cable/plug, or for the plug on the motherboard?
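The budget above can be sketched as a quick calculation (the 50W motherboard/RAM figure is the poster's own estimate, and 75W is the per-slot limit from the PCIe spec):

```python
# Rough 24-pin load estimate for the 4x HD7990 build, using the figures above.
# Assumptions: 50 W for motherboard/system RAM (the poster's own estimate),
# 75 W max per PCIe slot (the slot power limit); CPUs are fed separately
# via the two EPS connectors, so they don't count here.
MOBO_AND_RAM_W = 50
PCIE_SLOT_W = 75
NUM_CARDS = 4

load_24pin_w = MOBO_AND_RAM_W + NUM_CARDS * PCIE_SLOT_W
print(f"Estimated 24-pin load: {load_24pin_w} W")  # 350 W
```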

I see now that the motherboard also has a 4-pin Molex plug.
Is this a connection to the motherboard, or from the motherboard to other equipment? In which direction does the power flow?
How much wattage can go over this connection?

Thanks.

(BTW: yes, this machine will primarily run SETI, but during its weekly server maintenance it will run MilkyWay. ;-)

[TA]Assimilator1
Joined: 22 Jan 11
Posts: 365
Credit: 42,732,038
RAC: 3
Message 61002 - Posted: 7 Feb 2014, 19:48:07 UTC
Last modified: 7 Feb 2014, 20:33:18 UTC

Good god, if this rig is possible it'll be an absolute beast!! (If you do build it, I'd like to see the WU times in my benchmark thread ;).)

So 4x 375W = 1500W for just the GPUs under full load!!!

The EZ Plug seems to be an additional power supply socket for the graphics cards. I say "seems to be" because I haven't found it in the online manual for that motherboard; it's not mentioned in the text!! http://dlcdnet.asus.com/pub/ASUS/mb/LGA2011/Z9PE-D8-WS/Manual/e8726_z9pe-d8_ws.pdf
Page 2-46 of the Rampage III Extreme manual does confirm it, though. Neither manual says how much power can be drawn through it.

I'm just trying to find out how much power can be pulled through the 24-pin plug, trawling the ATX guideline docs to try to find out...

Right, I'm trying to download this http://www.enermax.cn/enermax_pdf/EPS12V%20Spec2_92.pdf but it's taking forever & I've got other things I have to do.
Maybe you'll have better luck getting it than I did; if not, I'll pop back here tomorrow.
Your answer is possibly in that file, but it only covers PSUs up to 950W. That file, btw, is the EPS12V Power Supply Design Guide v2.92.

Btw, can you even get an ATX PSU over 1500W?? It looks doubtful...

Yes you can, up to 10.5 kW!!! lol http://www.pcper.com/reviews/Cases-and-Cooling/Miller-Electric-10000W-Power-Supply-Review?aid=384&type=expert&pid=1
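Back-of-the-envelope PSU sizing for this build can be sketched as follows. The 375W per-card figure is from the thread; the 300W for CPUs and the rest of the system, and the 20% headroom, are assumptions, not numbers from the posts:

```python
# PSU sizing sketch for 4x HD7990 under sustained full load.
GPU_W = 375            # per-card board power, from the thread
NUM_GPUS = 4
CPU_AND_REST_W = 300   # assumption: 2x Xeon E5-2630v2 (80 W TDP each) plus board, RAM, drives
HEADROOM = 1.2         # assumption: 20% safety margin for sustained crunching

gpu_total_w = GPU_W * NUM_GPUS
recommended_w = (gpu_total_w + CPU_AND_REST_W) * HEADROOM
print(f"GPU load: {gpu_total_w} W")              # 1500 W, matching the post above
print(f"Recommended PSU: {recommended_w:.0f} W")
```

Even with conservative assumptions for the non-GPU load, the result lands well above any mainstream single ATX PSU of the time.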
Team AnandTech - SETI@H, Muon1 DPAD, F@H, MW@H, A@H, LHC@H, POGS, R@H, Einstein@H, DHEP.

Main rig - i7 4930k @4.1 GHz, RX 580 8 GB, 16 GB DDR3 1866, Win 7 64bit
2nd rig - Q9550 @3.6 GHz, HD 7870 XT 3GB(DS), 8 GB DDR2 1066, Win 7 64bit
Link
Joined: 19 Jul 10
Posts: 357
Credit: 16,320,358
RAC: 0
Message 61004 - Posted: 7 Feb 2014, 20:51:07 UTC - in response to Message 61000.  

How much wattage can go over the 24-pin power connector?
I calculate 50W for the motherboard/system RAM,
plus 4x 75W per PCIe slot = 300W.
(Both CPUs get their power over the 2x EPS power connectors.)
So at least 350W must come over the 24-pin power connector.
Can the 24-pin connector/cable of the power supply deliver at least 350W, or is that too much for the cable/plug, or for the plug on the motherboard?

Well, let's put it this way: this board has 7 PCIe slots. If we assume that all of them can be used together (I have not read the manual), then the motherboard must be able to deliver 75W to each of them at once. So if you use just 4 slots, I don't see why there should be an issue.



I see now that the motherboard also has a 4-pin Molex plug.
Is this a connection to the motherboard, or from the motherboard to other equipment? In which direction does the power flow?
How much wattage can go over this connection?

No idea about the wattage, but the power flows as usual from the PSU to the motherboard. What it is used for is up to the designer of the motherboard; on "normal" motherboards it's usually used for the CPU (that's what it was introduced for).

OTOH, I still don't understand why you are trying to use workstation components for a BOINC machine. Unless there's other stuff you plan to use this machine for where you'll actually need that processing power in one box, I'd recommend making two machines out of that. Then you can use standard PC components, which will make the whole project cheaper and easier, and the computing power for BOINC will still be the same.
SF6
Joined: 30 Oct 13
Posts: 4
Credit: 11,126,923
RAC: 0
Message 61006 - Posted: 7 Feb 2014, 21:27:30 UTC

The problem is not power. I think neither the BIOS nor the OS can handle more than four GPUs (e.g. 2x HD7990). More would require BIOS tweaking and an OS other than Windows.
Mumak
Joined: 8 Apr 13
Posts: 89
Credit: 517,085,245
RAC: 0
Message 61007 - Posted: 7 Feb 2014, 22:06:02 UTC
Last modified: 7 Feb 2014, 22:06:24 UTC

Cooling is another question: four such cards packed close together on one board and running under sustained load would be hard to cool without PCIe risers or liquid cooling. Four 3-slot cards won't physically fit there without risers anyway.
mikey
Joined: 8 May 09
Posts: 2372
Credit: 445,509,067
RAC: 82,717
Message 61010 - Posted: 8 Feb 2014, 12:33:43 UTC - in response to Message 61006.  

The problem is not power. I think neither the BIOS nor the OS can handle more than four GPUs (e.g. 2x HD7990). More would require BIOS tweaking and an OS other than Windows.


I saw a config the other day where a guy had 4x 7990s in each machine; 4 of his PCs were set up that way, but I don't remember where I saw it or what OS he was using.

One thing you could research is using dual PSUs in the machine. I have seen that before; one of my old servers had that capability, for example. That way you wouldn't have to get one HUGE PSU, just two smaller, less expensive ones.

One of the guys I have talked to over my time at BOINC is the user Dark Ryder. He builds computer systems as well as crunches, and could be a resource for you; here is his website: http://darkryder.com/#pricing_modified I have NEVER bought anything from him, but he has always been nice and open in our discussions.
Dunx
Joined: 13 Feb 11
Posts: 24
Credit: 900,924,350
RAC: 1,951,255
Message 61027 - Posted: 9 Feb 2014, 21:09:14 UTC
Last modified: 9 Feb 2014, 21:49:30 UTC

I have a pair of Asus X58 boards here, both with burnt-out ATX 12V pins... P6T6 WS and P6T7 WS; neither has additional PCIe power connectors for the GPUs, but the X79 now does...

dunx

P.S. I just bought a short extension lead, soldered the female half to the board, and used the male half to replace the burnt-out ATX connector on the PSU's ATX lead :-)

P.P.S. GTX 480 and three GTX 460s vs. HD 5870 + HD 7950... now added an R9 280X!
mikey
Joined: 8 May 09
Posts: 2372
Credit: 445,509,067
RAC: 82,717
Message 61031 - Posted: 10 Feb 2014, 12:49:07 UTC - in response to Message 61027.  

I have a pair of Asus X58 boards here, both with burnt-out ATX 12V pins... P6T6 WS and P6T7 WS; neither has additional PCIe power connectors for the GPUs, but the X79 now does...

dunx

P.S. I just bought a short extension lead, soldered the female half to the board, and used the male half to replace the burnt-out ATX connector on the PSU's ATX lead :-)

P.P.S. GTX 480 and three GTX 460s vs. HD 5870 + HD 7950... now added an R9 280X!


Maximum PC had a short blurb a couple of months ago regarding the X79 boards, and I think they said that with more than 1 GPU they would all slow down to lower throughput speeds, i.e. no longer run as x16 slots but drop to x4 or even x1. It was in the comments section at the back.
Dunx
Joined: 13 Feb 11
Posts: 24
Credit: 900,924,350
RAC: 1,951,255
Message 61086 - Posted: 11 Feb 2014, 21:10:43 UTC
Last modified: 11 Feb 2014, 21:12:43 UTC

I think they are just seeing the added latency from a PLX-type interface; the native x16 slots are fine (up to three in use), but add a fourth, or use seven single-slot GPUs, and the interface hardware shows its delay...

IMHO, not many BOINC projects require "gaming" levels of bandwidth.

Anyway, I hope to find out within a month or two!

dunx

P.S. In general they saw about a 10% shortfall, hardly PCIe x1 performance...
mikey
Joined: 8 May 09
Posts: 2372
Credit: 445,509,067
RAC: 82,717
Message 61088 - Posted: 11 Feb 2014, 22:42:54 UTC - in response to Message 61086.  
Last modified: 11 Feb 2014, 22:43:15 UTC

I think they are just seeing the added latency from a PLX-type interface; the native x16 slots are fine (up to three in use), but add a fourth, or use seven single-slot GPUs, and the interface hardware shows its delay...

IMHO, not many BOINC projects require "gaming" levels of bandwidth.

Anyway, I hope to find out within a month or two!

dunx

P.S. In general they saw about a 10% shortfall, hardly PCIe x1 performance...


Yeah, I have seen people running their GPUs in a PCIe x1 slot, and they say it works just fine.
[TA]Assimilator1
Joined: 22 Jan 11
Posts: 365
Credit: 42,732,038
RAC: 3
Message 61095 - Posted: 12 Feb 2014, 19:02:26 UTC - in response to Message 61088.  
Last modified: 12 Feb 2014, 19:02:44 UTC

Hmm, that's funny; a teammate of mine posted about slowdowns going from x16 to x8 in Einstein@home, and others did on F@H, let alone PCIe x1.

See this thread http://forums.anandtech.com/showthread.php?t=2348749

Maybe it depends on the GPU & the project.
Team AnandTech - SETI@H, Muon1 DPAD, F@H, MW@H, A@H, LHC@H, POGS, R@H, Einstein@H, DHEP.

Main rig - i7 4930k @4.1 GHz, RX 580 8 GB, 16 GB DDR3 1866, Win 7 64bit
2nd rig - Q9550 @3.6 GHz, HD 7870 XT 3GB(DS), 8 GB DDR2 1066, Win 7 64bit
mesyn191
Joined: 25 Dec 09
Posts: 29
Credit: 126,023,571
RAC: 0
Message 61134 - Posted: 16 Feb 2014, 8:08:54 UTC
Last modified: 16 Feb 2014, 8:11:46 UTC

Yeah it depends on the project.

MilkyWay is not bandwidth intensive, or even particularly data intensive (in terms of memory required).

It is, however, extremely math intensive, which is why AMD GPUs do so well crunching it.

You can test this by downclocking your VRAM quite a bit; it will have little or no effect on crunch times for your WUs. It doesn't save a lot of power though, so it's usually not worth bothering with unless you have a dedicated 24/7 crunching card.

OP: You can do it with one machine, barring weird BIOS issues with that workstation mobo (which ASUS is known for), but it's so much easier to do it with those cards spread across 2 machines.
mikey
Joined: 8 May 09
Posts: 2372
Credit: 445,509,067
RAC: 82,717
Message 61135 - Posted: 16 Feb 2014, 12:08:03 UTC - in response to Message 61095.  

Hmm, that's funny; a teammate of mine posted about slowdowns going from x16 to x8 in Einstein@home, and others did on F@H, let alone PCIe x1.

See this thread http://forums.anandtech.com/showthread.php?t=2348749

Maybe it depends on the GPU & the project.


The Intel X79 motherboards have this issue when using multiple GPUs; it is known and not "fixable" right now, as it is built into the BIOS. I am guessing it's just one of those things they didn't consider a big deal when they designed the specs.
[TA]Assimilator1
Joined: 22 Jan 11
Posts: 365
Credit: 42,732,038
RAC: 3
Message 61429 - Posted: 19 Mar 2014, 20:27:25 UTC

A 1.7 kW PSU is being released! http://www.anandtech.com/show/7878/lepa-releases-1700w-maxplatinum-power-supply-eu-only
Team AnandTech - SETI@H, Muon1 DPAD, F@H, MW@H, A@H, LHC@H, POGS, R@H, Einstein@H, DHEP.

Main rig - i7 4930k @4.1 GHz, RX 580 8 GB, 16 GB DDR3 1866, Win 7 64bit
2nd rig - Q9550 @3.6 GHz, HD 7870 XT 3GB(DS), 8 GB DDR2 1066, Win 7 64bit
mikey
Joined: 8 May 09
Posts: 2372
Credit: 445,509,067
RAC: 82,717
Message 61431 - Posted: 20 Mar 2014, 11:35:39 UTC - in response to Message 61429.  

Link
Joined: 19 Jul 10
Posts: 357
Credit: 16,320,358
RAC: 0
Message 61433 - Posted: 20 Mar 2014, 18:29:08 UTC - in response to Message 61429.  

A 1.7 kW PSU is being released! http://www.anandtech.com/show/7878/lepa-releases-1700w-maxplatinum-power-supply-eu-only

And that's still not enough for a single-PSU design if each of those cards needs almost 400W for itself. A 2000W PSU might be enough...
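That point can be checked against the numbers from earlier in the thread. The 395W worst-case per card is the OP's web-test figure; the 250W for the rest of the system is an assumption for illustration:

```python
# Does a 1700 W PSU cover 4x HD7990 at worst-case draw?
CARD_WORST_CASE_W = 395   # per dual-GPU card, the OP's web-test figure
NUM_CARDS = 4
REST_OF_SYSTEM_W = 250    # assumption: CPUs, board, RAM, drives

total_w = CARD_WORST_CASE_W * NUM_CARDS + REST_OF_SYSTEM_W
print(f"Worst-case draw: {total_w} W")            # 1830 W
print("1700 W PSU sufficient:", total_w <= 1700)  # False
```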
mikey
Joined: 8 May 09
Posts: 2372
Credit: 445,509,067
RAC: 82,717
Message 61435 - Posted: 21 Mar 2014, 11:35:29 UTC - in response to Message 61433.  

A 1.7 kW PSU is being released! http://www.anandtech.com/show/7878/lepa-releases-1700w-maxplatinum-power-supply-eu-only

And that's still not enough for a single-PSU design if each of those cards needs almost 400W for itself. A 2000W PSU might be enough...


I had a server once, a 3-foot by 3-foot box, that could use two power supplies if you plugged them both in. I was not GPU crunching back then; I think it was before PCIe slots. I have also seen adapters that let you use more than one power supply in an ordinary machine, but I have never tried it.

If it were me, I would set up a 2nd machine, put two cards in each, and just go cheap on the 2nd machine's CPU if money was tight. Those HUGE power supplies are not cheap, and you can get an 850W PSU now for under 100 dollars in the US. My idea would also work because if something breaks, at least you are still crunching; you're not totally down, as you would be if anything went out in a single machine.

I will use a saying I heard recently and like: just because you can doesn't mean you should. I heard it in reference to the NSA's monitoring of EVERYTHING AND ANYTHING, but I think it works here too.


©2020 Astroinformatics Group