Can my power supply support two NVIDIA GPUs?

Neil Newell
Neil Newell
Joined: 20 Nov 12
Posts: 176
Credit: 169699457
RAC: 0

The nvidia settings control

The nvidia-settings control panel shows the current PCIe state, clock rate, fan speed, and temperature (at least on Linux; I assume it's similar on Windows).

Bus state and memory usage are shown on the 'GPU 0', 'GPU 1', etc. 'Graphics Card Information' pages, temperature and fan speed under 'Thermal Settings', and clock speed and performance mode under 'PowerMizer'.
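If you'd rather poll those readings from a script than from the GUI, nvidia-smi can report the same values. Here is a minimal sketch in Python; the query field names are standard nvidia-smi properties, but check nvidia-smi --help-query-gpu against your driver version:

# Minimal sketch: read GPU temperature, fan, clock and PCIe link state
# via nvidia-smi instead of the nvidia-settings GUI.
import subprocess

FIELDS = ["index", "name", "temperature.gpu", "fan.speed", "clocks.sm",
          "utilization.gpu", "pcie.link.gen.current", "pcie.link.width.current"]

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=" + ",".join(FIELDS), "--format=csv,noheader"],
    capture_output=True, text=True, check=True)

# One CSV line per GPU, in the same order as the FIELDS list.
for line in result.stdout.strip().splitlines():
    print(line)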

Anonymous

I have been following your

I have been following your responses in this thread for the last couple of days but did not comment until now because I did not have anything to report. About a week ago I moved projects around between the two nodes I use for crunching. At first this seemed like a good solution: E@H would get a standalone node while S@H and Rosetta would share a node. It turned out that, with my configuration, S@H and Rosetta would not take full advantage of the GPU in that node. I decided to use that node for the same projects but without a GPU, and after reading all of the posts in this thread I moved the GPU from it into the standalone E@H node. I had to wait a day for the backlog of WUs on both nodes to work its way off the queue.

Yesterday I added a 2nd 650 Ti to the E@H node. This GPU has 2 GB of memory while the other one has 1 GB. I put the 2 GB card into the PCIe 3.0 x16 slot and the 1 GB card into the PCIe 2.0 x16 slot, powered up, and "accepted" more work units. A lot of data downloaded and soon I was crunching 3 GPU WUs and an assortment of CPU WUs. I had read in this forum about the need to tell BOINC to use all GPUs, so I went to create a file called cc_config.xml. It turned out that this file already existed and contained some other data not related to GPU usage. Also, under Ubuntu the cc_config.xml at /var/lib/boinc-client is actually a soft link to /etc/boinc-client/cc_config.xml; that is the file that needs to be modified to instruct BOINC to use both GPUs. I modified the existing file to include the stanza to use both GPUs (the relevant part is shown below):


<cc_config>
   <options>
      <use_all_gpus>1</use_all_gpus>
   </options>
</cc_config>

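One thing worth adding, assuming the stock Ubuntu packaging: the client only picks the change up after it re-reads the file, so either restart the boinc-client service or tell the running client to reload it with

sudo boinccmd --read_cc_config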
What follows are some images showing the current configuration. I believe these images show how card placement in different slots affects performance, as explained by "MrS" in an earlier post.

In the first two images it's interesting to note some new fields with the 331.20 drivers, i.e. GPU Utilization, Video Engine Utilization, Link Width/Speed and Bandwidth Utilization. Also note the differences between the two cards: both are 650 Ti cards, but GPU 0 is an EVGA product while GPU 1 is an Asus product, and they have different amounts of memory.

In these two images, note how GPU 1 (the Asus card) shows a Fan RPM reading while GPU 0 does not.

Here note the differences in link width and speed. I believe that "MrS" was talking about the effect of these slots on card performance in an earlier post.

If anyone can shed some light on the values I would be most interested.

And what this was leading to:

AgentB

early on in this thread you talked about wanting to monitor PSU temps and fan speeds. While looking around I came across the following, which won't provide PSU temps but does provide PSU fan status. For me this was a revelation; you might already know about it. The command is: sudo dmidecode

This will output a lot of data but here is the part that might be of interest:

Handle 0x0057, DMI type 27, 15 bytes
Cooling Device
        Temperature Probe Handle: 0x0056
        Type: Power Supply Fan
        Status: OK
        Cooling Unit Group: 1
        OEM-specific Information: 0x00000000
        Nominal Speed: Unknown Or Non-rotating
        Description: Cooling Dev 1

From this data you can get the "Status" of the power supply fan. I am ASSUMING that if the fan fails, the status would change to something like "Status: Not OK".

A script run as a cron job could check this status; if a bad status is returned, the script could send an email and then do an orderly shutdown.
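Just as a sketch of that idea (assumptions: dmidecode is installed, the script runs as root from cron, and local mail delivery works via the mail command - adjust for your own setup):

# psu_fan_check.py - hypothetical cron job sketch: parse `dmidecode -t 27`
# and act if the Power Supply Fan status is no longer "OK".
import re
import subprocess

def psu_fan_ok():
    out = subprocess.run(["dmidecode", "-t", "27"],
                         capture_output=True, text=True, check=True).stdout
    # One record per "Handle ..." block; find the one for the PSU fan.
    for block in out.split("Handle "):
        if "Power Supply Fan" in block:
            match = re.search(r"Status:\s*(.+)", block)
            return bool(match) and match.group(1).strip() == "OK"
    return True  # no PSU fan record reported; nothing to act on

if __name__ == "__main__":
    if not psu_fan_ok():
        # Both actions are assumptions about the local setup.
        subprocess.run("echo 'PSU fan status is not OK' | mail -s 'PSU alert' root",
                       shell=True)
        subprocess.run(["shutdown", "-h", "+5", "PSU fan failure suspected"])

Run it from root's crontab, e.g. */10 * * * * /usr/local/bin/psu_fan_check.py to check every ten minutes.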

A "man dmidecode" will discuss the "type" entries shown above.

To all who responded thanks for your input and for taking the time to comment.

AgentB
AgentB
Joined: 17 Mar 12
Posts: 915
Credit: 513211304
RAC: 0

RE: AgentB early on in

Quote:


AgentB

early on in this thread you talked about wanting to monitor PSU temps and fan speeds. While looking around I came across the following, which won't provide PSU temps but does provide PSU fan status. For me this was a revelation; you might already know about it. The command is: sudo dmidecode


Sadly, my mobo, BIOS and PSU don't have the dots joined here. I don't recall such a connector, so dmidecode is rather silent for me.

What I was thinking of was something along the lines of the old ESA standard for PSUs - which seems to have withered on the vine - or one of
these.

I haven't had time to look over the specs to see what Linux support exists... I ... must ... resist.

ExtraTerrestrial Apes
ExtraTerrestria...
Joined: 10 Nov 04
Posts: 770
Credit: 576036901
RAC: 186456

Robl, thanks for collecting

Robl, thanks for collecting and posting this data. Regarding the PCIe speed, your first two screenshots show the following:

GPU 1 in a 16x PCIe 3 slot, running at 15% utilization.
GPU 2 in a 16x PCIe 2 slot, running at 62% utilization.

From this we already see a significant difference - it's easy to imagine that if the bus to/from the GPU is busy 62% of the time, at some point the GPU is going to wait for data to arrive before it can continue crunching. Hence performance should suffer to some extent.

Screenshots 5 & 6 show the following:

GPU 1 actively using 16 lanes at PCIe 3 speed (8 GT/s) at a GPU clock of 1058 MHz.
GPU 2 actively using 4 lanes at PCIe 2 speed (5 GT/s) at a GPU clock of 928 MHz.

The 16x quoted initially was probably what the card would be capable of in this slot, whereas here the tool shows how much is actually being used. So your PCIe 2 slot is:

16x mechanically, meaning you can put a GPU with 16 lanes in there.
4x electrically, meaning that of those 16 possible lanes only 4 are actually connected. That's why the bus utilization is much higher for GPU 2.

Taking a quick look at your results and using the amount of memory to distinguish between the cards (GPU 1 has 2 GB, GPU 2 has 1 GB):

GPU 1 needs 7900 s per 3 WUs
GPU 2 needs 10730 s per 3 WUs

That's 36% more throughput for GPU 1 at a 14% higher clock speed, i.e. about 22% of that performance difference should be attributed to the PCIe slots. This performance difference due to the PCIe connection should only increase for faster cards.
Note: the amount of memory shouldn't matter in either case, since 3 WUs fit comfortably into the 1 GB of GPU 2.
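For anyone who wants to redo the arithmetic, here is the estimate as a few lines of Python (the run times and clocks are the figures quoted above; dividing out the clock advantage instead of simply subtracting gives roughly 19%, the same ballpark):

# Rough estimate of how much of the speed difference the PCIe slots explain.
t_gpu1, t_gpu2 = 7900.0, 10730.0     # seconds per 3 WUs (from the results)
clk_gpu1, clk_gpu2 = 1058.0, 928.0   # GPU clocks in MHz (from the screenshots)

throughput_gain = t_gpu2 / t_gpu1 - 1             # ~0.36 -> 36% more throughput
clock_gain = clk_gpu1 / clk_gpu2 - 1              # ~0.14 -> 14% higher clock
pcie_share = (1 + throughput_gain) / (1 + clock_gain) - 1   # ~0.19

print("throughput advantage:   {:.0%}".format(throughput_gain))
print("clock advantage:        {:.0%}".format(clock_gain))
print("left for the PCIe link: ~{:.0%}".format(pcie_share))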

MrS

Scanning for our furry friends since Jan 2002

Anonymous

MrS I read your reply with

MrS

I read your reply with great interest. I must admit however that until now I have never been too interested in how hardware interfaced to a motherboard. Usually if the hardware beeped, lit up, or spun I was quite satisfied. :>)

From what you wrote, would I be correct in stating that if my current motherboard had 2 PCIe 3.0 x16 slots, the "PCIe Bandwidth Utilization" would be about the same for both of these GPUs? And that the difference I am seeing is due to the second slot being a PCIe 2.0 x16? In other words, performance between two like GPUs is affected by the type of slot they are in - a 2.0 slot is slower than a 3.0 slot. I was looking at an Asus motherboard that has 5 PCIe 3.0 x16 slots. If a PC with this type of board were used just for crunching and was populated with 5 like GPUs, would they all perform at the same level, or would there be some other factor(s) that would cause some of the cards to perform at a lower level?

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117255811939
RAC: 36209524

From my experience, as long

From my experience, as long as the slot is 'electrically' x16, there doesn't seem to be much difference between PCIe2 and PCIe3 - for the situation where you have a single GPU card (of the type you are using) in use. When you have a two-slot motherboard like yours where each slot is 16 lanes 'mechanically', the problem is that they can't both be 16 lanes 'electrically' - I believe there is a limit of 20 electrical lanes in total for a lot of motherboards. The performance penalty is coming from the x4 electrical limit on the second slot.

You may have a couple of options to improve performance. There may be a BIOS setting to make the two slots x8 - x8, and the gain on the second slot may outweigh any loss on the first.

A second option would be to source a higher spec multislot board (obviously more expensive) that can do x16 - x16 electrically on at least two of the slots. I don't know if that's possible - it may be more like x16 - x8 - x8 on three slots. As you get more slots there will be limitations on the electrical lanes and you will need to check that out carefully.

Cheers,
Gary.

DanNeely
DanNeely
Joined: 4 Sep 05
Posts: 1364
Credit: 3562358667
RAC: 0

RE: From my experience, as

Quote:

From my experience, as long as the slot is 'electrically' x16, there doesn't seem to be much difference between PCIe2 and PCIe3 - for the situation where you have a single GPU card (of the type you are using) in use. When you have a two-slot motherboard like yours where each slot is 16 lanes 'mechanically', the problem is that they can't both be 16 lanes 'electrically' - I believe there is a limit of 20 electrical lanes in total for a lot of motherboards. The performance penalty is coming from the x4 electrical limit on the second slot.

You may have a couple of options to improve performance. There may be a BIOS setting to make the two slots x8 - x8, and the gain on the second slot may outweigh any loss on the first.

A second option would be to source a higher spec multislot board (obviously more expensive) that can do x16 - x16 electrically on at least two of the slots. I don't know if that's possible - it may be more like x16 - x8 - x8 on three slots. As you get more slots there will be limitations on the electrical lanes and you will need to check that out carefully.

Standard Intel boards have a total of 24 lanes: 16 from the CPU (PCIe 3.0 in the newest versions) and 8 from the chipset (PCIe 2.0 in the newest version). The catch is that several of the chipset's lanes are normally used to connect other devices on the board (e.g. extra SATA III, USB 3, networking, audio) and to power the x1 slots. On more fully featured boards the x4 physical slot is often only x1 electrically, unless a smaller version of the multiplexer used to provide a pair of x16 slots for the GPUs is used. You can get boards with really large muxes that offer 3 or 4 x16 electrical slots, but you'll pay for the privilege, and unless you also pay for LGA2011 (which has 40 lanes off the CPU itself), the fact that you still only have 16 lanes to the CPU/memory on the other side of the mux can become an issue.

Anonymous

Many more factors to consider

Many more factors to consider than just "can my power supply handle two GPUs?" I believe that all who read/follow this thread will benefit from the information/guidance provided.

Thanks to all who responded,

Ron

ExtraTerrestrial Apes
ExtraTerrestria...
Joined: 10 Nov 04
Posts: 770
Credit: 576036901
RAC: 186456

RE: From what you wrote

Quote:
From what you wrote, would I be correct in stating that if my current motherboard had 2 PCIe 3.0 x16 slots, the "PCIe Bandwidth Utilization" would be about the same for both of these GPUs? And that the difference I am seeing is due to the second slot being a PCIe 2.0 x16?

Yes, and almost yes! For the 2nd question the different clock speeds of the GPUs also have to be taken into account (as I did, approximately), but any further difference can be attributed to the PCIe slots.

A PCIe 2 slot with all 16 lanes electrically connected would fare much better than yours: bus utilization should be no more than twice that of the PCIe 3 x16 slot, probably even a bit less. But in your case only 4 of the possible 16 lanes are being used, which clearly hurts performance.
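To put rough numbers on that, here is a small Python sketch using the usual per-lane figures (PCIe 2.0 carries about 500 MB/s per lane after 8b/10b encoding, PCIe 3.0 about 985 MB/s per lane after 128b/130b encoding):

# Approximate per-direction bandwidth of the slots discussed above.
MB_PER_LANE = {"PCIe2": 500, "PCIe3": 985}   # MB/s per lane after encoding

slots = {
    "GPU 1: PCIe 3.0 x16":             MB_PER_LANE["PCIe3"] * 16,
    "GPU 2: PCIe 2.0 x4 (electrical)": MB_PER_LANE["PCIe2"] * 4,
    "PCIe 2.0 x16, for comparison":    MB_PER_LANE["PCIe2"] * 16,
}

for name, bw in slots.items():
    print("{}: ~{:.1f} GB/s".format(name, bw / 1000.0))
# A full x16 PCIe 2 link has half the bandwidth of x16 PCIe 3,
# but the x4 electrical link has only an eighth of it.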

Is that difference between "mechanically 16x" slots and how many lanes are actually being used clear now?

MrS

Scanning for our furry friends since Jan 2002

dskagcommunity
dskagcommunity
Joined: 16 Mar 11
Posts: 89
Credit: 1215750777
RAC: 242268

I've been running 2 overclocked

I've been running 2 overclocked GTX 570s (220 W each, according to the specs) on GPUGrid (so 99% GPU load and maximum power draw) on a 530 W(!) BeQuiet PSU that cost only a few bucks ;) You have to look at the 12 V line specs and check with Dr. Google what amperage is recommended for the GPU. I don't trust those "xxx W" total ratings, because they are only a summary value AND they are written with extremely low-quality PSUs in mind. That's why the vendors recommend 500, 600, 700 W PSUs and so on. I run all my PSUs at 80-90% load; none has died since, and at the beginning I bought real budget PSUs, just with good line power on 12 V. They are still running today :)

Why 80-90%? Because back then I read about "green IT" PSUs that they are most efficient at high load. That's not the case anymore; not long ago I read that they now seem to have their best efficiency at 50-80% load. So a bigger PSU is not wrong these days :) But it costs more ;)
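As a rough illustration of that kind of 12 V budgeting (the 220 W per card is the figure from this post; the CPU/board draw and the PSU's combined 12 V rating are made-up example numbers, so substitute the ones from your own spec sheets):

# Back-of-the-envelope 12 V rail budget for a dual-GPU cruncher.
gpu_watts = 220            # per GPU at full crunching load (figure from this post)
num_gpus = 2
cpu_and_board_watts = 150  # assumed CPU + board + drives draw on the 12 V rail
psu_12v_amps = 54          # assumed combined 12 V rating, e.g. a 650 W class unit

load_watts = gpu_watts * num_gpus + cpu_and_board_watts
load_amps = load_watts / 12.0

print("estimated 12 V load: {} W = {:.1f} A".format(load_watts, load_amps))
print("PSU 12 V capacity:   {} W = {} A".format(psu_12v_amps * 12, psu_12v_amps))
print("utilisation:         {:.0%}".format(load_amps / psu_12v_amps))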

DSKAG Austria Research Team: http://www.research.dskag.at
