Maximum CUDA cards per motherboard?

David Rapalyea
Joined: 3 Jan 13
Posts: 79
Credit: 63886821
RAC: 0
Topic 196871

GREEN ON GREEN

Will Einstein run four separate ASUS GTX 650s on a single motherboard? My experiment is to produce the lowest power usage per credit.

My custom machine will have an 80 Plus Gold power supply. It will run Windows 7 on a 35-watt dual-core motherboard with enough PCIe slots and space to mount four of the above GPUs.

The cards all draw power from their respective PCIe slots. I tested one in my mule machine, which has a gold-rated PSU and Core Duo processors. With no GPU, that machine runs two Einstein tasks at a time for a total draw at the plug of 60 watts.

A single GTX 650 requires an additional 40 watts, for a total of 100 watts at the plug, so power should not be a problem. A 35-watt motherboard plus 4 x 40-watt GPUs should add up to about 200 watts total.

Total credits per day running BRP tasks might approach 50,000.

But the project will only work if the latest Einstein BOINC setup will use most, if not all, of the four GPUs. For instance, will my builder need to change any BIOS settings? Stuff like that.

Thanks for any advice on how this can be done!

Arecibo 19 Oct 2012
Just Because The Space Alien Is Green
Does Not Mean You Should Go

Horacio
Joined: 3 Oct 11
Posts: 205
Credit: 80557243
RAC: 0

Maximum CUDA cards per motherboard?

BOINC can use as many GPUs as the system is able to drive (physically and logically; AFAIK Windows can handle up to 8 GPUs, and I think BOINC has the same limit), and Einstein will use all the GPUs that BOINC can find.

But it's not that easy... The mobo might have issues powering 4 GPUs through the PCIe slots when those GPUs don't use the extra power connectors (not because of the PSU, but because the copper traces of the power lines in the mobo may not be thick enough).
Also, Einstein GPU tasks require a lot of CPU power to keep them fed and to run some portions of the task on the CPU. So a dual core might not be enough to keep 4 GPUs running, especially if you want to run more than one task per GPU, even if you don't use the CPU cores for crunching.
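If you do end up trying more than one task per GPU, the usual way to set that up is an app_config.xml file in the Einstein project folder (this needs a recent 7.x BOINC client). The following is a minimal sketch only; the application name below is an assumption, so check the actual BRP app name your client reports before using it:

<app_config>
  <app>
    <!-- app name is an assumption; use the BRP app name your client reports -->
    <name>einsteinbinary_BRP4</name>
    <gpu_versions>
      <!-- 0.5 of a GPU per task means two tasks share each card -->
      <gpu_usage>0.5</gpu_usage>
      <!-- fraction of a CPU core budgeted per GPU task -->
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

After saving it, tell BOINC to re-read its config files (or restart the client) so the change takes effect.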

Also, "enough PCIe slots" is vague, you need to know that ussually all the PCIe slots share the PCIe lanes and you would nedd a sandy bridge E or an ivy bridge CPU to get at least 4 slots running at 8x, less than that and the slots will be running at 4x or 1x and will degrade the performace a lot... (some mobos has a kind of PCIe lanes multiplexers that allows to have more PCIe slots even when the CPU dont provide them, but this multiplexed workaround is not intended for full simultaneous usage of all the lanes and at the end you will see the same lost on performance)
I dont know what the limits are for lanes on AMD.

Holmis
Joined: 4 Jan 05
Posts: 1118
Credit: 1055935564
RAC: 0

As long as the machine and Windows are set up properly, so that all cards are recognized and usable by Windows, then Boinc should also detect them and be able to use them. As they are all the same type of card, no extra configuration of Boinc should be needed.
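Should Boinc ever report fewer cards than Windows does, one standard client option is worth knowing about: a cc_config.xml in the BOINC data directory with use_all_gpus set, which tells the client not to ignore any of the GPUs it detects. A minimal sketch (with four identical cards it shouldn't be needed, but it costs nothing to have in place):

<cc_config>
  <options>
    <!-- use every detected GPU instead of only the most capable one -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>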

The problems I can think of that might pop up are these:
1. Can the motherboard handle the power draw from all the cards at the same time? I've read that a PCIe slot should be able to deliver 75 watts to a graphics card, so that shouldn't be a problem unless it's a cheap and poorly constructed board.

2. PCIe bandwidth: will there be enough room on the bus to keep the cards fed?
How many PCIe lanes will each card get? x8, x4 or x1? Will it be PCIe 2.0 or 3.0?
If these things don't add up, you sure will be power efficient, but at the same time you might not be able to utilize the hardware to its full potential.

3. Will the processor be fast enough to keep the cards fed?
Same as 2: if not, the setup might not be the most efficient.

At least, if you go ahead and try this, you could always buy one more system and move two of the cards over if it doesn't deliver what you hope for.

David Rapalyea
Joined: 3 Jan 13
Posts: 79
Credit: 63886821
RAC: 0

Thanks,

I have discussed some of these same concerns with my builder, for instance PCIe 2.0 vs. PCIe 3.0. I believe the CUDA cards are PCIe 3.0, but my mule HP Core Duo is PCIe 2.0 and delivers good production. I think the new motherboard is PCIe 3.0, but with these lesser cards there may not be much difference. Any difference would be in my favor anyway.

My primary concern, as you also point out, is that while these cards individually do not stretch the power capacity of any single slot, I will be using four slots. However, it is hard for me to imagine that a motherboard with five slots, like this one has, could not provide routine power to four of them. [The cards are two slots wide, so the arrangement skips one of the five slots on the motherboard.]

Processing power is not a problem. My HP Core Duos provide exactly 0.2 percent of their processing capacity to tend this single existing GPU card.

In general, the news I have been given here tends to alleviate my biggest concern, namely that the system would not see all four cards. I understand it is an experiment. But it would be worth it to get 50,000 credits per day on two hundred watts!

Arecibo 19 Oct 2012
Just Because The Space Alien Is Green
Does Not Mean You Should Go

David Rapalyea
Joined: 3 Jan 13
Posts: 79
Credit: 63886821
RAC: 0

Hi Horacio!

I have the same concerns as you do. However, these cards do not individually draw much power, so I am hoping that will not be a problem. The motherboard has five slots and I will only be using four.

I am more concerned, as you point out, about the "bridging" issue. Once again, however, these are very low-performance cards, and I doubt their processing capacity will stress anything they plug into, as long as they are all recognized.

The point here is not astronomical production, but efficient production. Once it is all up and running I plan to make a report. Fifty thousand credits per day on 200 watts, that is, about 40 watts per ten thousand credits a day, might not be a record, but the best I have heard of so far is about 100 watts per ten thousand. So if this works it would be a good contribution to the project discussion.

Thanks for the reply!

Arecibo 19 Oct 2012
Just Because The Space Alien Is Green
Does Not Mean You Should Go

Alex
Joined: 1 Mar 05
Posts: 451
Credit: 500127910
RAC: 214437

David,

please take a look at the manual of your MB and check the bandwidth of the PCIe slots. I'm pretty sure only one of them will run at full speed (x16); the second slot is slower, and the 3rd and 4th slots will run at x1 only.
To get the most out of your GPUs you may decide to run 2 tasks on each GPU. This needs a lot of PCIe bandwidth, which is most likely not available. Also keep in mind that good performance from the GPUs requires a fast CPU to feed them.

I've often tried to calculate the best configuration in terms of power, performance and price.
At least for my needs, I found good performance with a socket 1155 MB with 2 PCIe 3.0 slots both running at x8, an Intel i3-3220 and 8 GB of RAM. My system with an ASUS MB draws ~62 watts without GPUs when running all 4 CPU threads near 100% load.
ATM it is equipped with 2 AMD GPUs.
http://einsteinathome.org/host/6633241
No, the system is not running 24/7.

HTH,

Alexander

Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0

GTX 650s require one 6-pin power connector each from the power supply and won't run from PCIe slot power alone; the fan will just spin at max speed. So you would need a monster power supply that has four 6-pin outputs, even if you did not need that much power.

You'd also need a motherboard with an X79 chipset to run 4 cards properly, and a Sandy Bridge-E processor (in other words, one from the previous generation) that provides 40 PCIe lanes, so the slots can run at x8/x8/x8/x8. I don't know which ones do that, but you could Google it easily enough.

However, a 2x GTX 650 build is easy; there are many motherboards designed for 2 cards running simultaneously at x8/x8 with a current-generation Ivy Bridge processor, and you can find many power supplies starting around the 500 W range that have two 6-pin connectors. You can do fine with an Ivy Bridge i3 and two 650s, and will get a small performance boost if you use an i5; the most popular choice would be a 3570K (the i5 and up can do PCIe 3.0; the Ivy Bridge i3 is limited to 2.0).

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023314931
RAC: 1824054

Gamboleer wrote:
you would need a monster power supply that has 4 6-pin outputs, even if you did not need that much power.

I just bought a Gigabyte GTX 660 card, and it arrived with a curious little adapter which accepted two of the classic 4-pin Molex power connectors as input and provided one of the 6-pin PCIe type power connectors as output. My previous GTX460 purchase (also Gigabyte) included two of these. Probably one can find such things, or possibly a SATA to PCIe conversion of similar type, on the aftermarket.

To the original poster: you mentioned that your intended build includes an 80 Plus Gold supply, but did not mention a model or rating. It is my present understanding that graphics cards used for Einstein BOINC processing have transient power draws far above their average, lasting long enough that they are not simply smoothed out by the capacitors on the graphics card or in the PC power supply. If you follow through on this project, I'll be interested to see how things work out for you.

Neil Newell
Joined: 20 Nov 12
Posts: 176
Credit: 169699457
RAC: 0

The PCIe spec says 25 W max per slot from the bus (75 W after negotiation for PEG slots), according to the Wikipedia article on PCI Express.

Worth noting that gaming gives a higher current draw than the current BRP CUDA application (it uses more of the silicon?), so you don't need as big a PSU if you're not going to run games (or more intensive GPU apps). It's easy enough to run 4 GPUs with 6-pin connectors off a PSU using 4-pin-Molex-to-6-pin GPU adapters (especially if you can get adapters that use a single 4-pin Molex instead of the usual two).

Note that for 4 GPUs you'll need an EATX case, because the 4th card will hang over the edge of the motherboard (see e.g. the Asus P9X79 WS or GA-990-UD7). AMD 990FX boards are worth looking at, because they've got enough PCIe 2.0 lanes to do quad x8. And while a low-end AMD CPU may have a higher TDP than some Intel chips, in practice it's not being used anywhere near its limit by the GPU app (I'm using an Athlon II X2 in a 990FX motherboard and it clocks down to 800 MHz at 33 C; a Sempron would probably do just as well).

Cooling will be an issue with 4 double-width cards adjacent to each other; using low-wattage cards obviously helps there (or water-cooling :) ). For both heat and efficiency, stay away from "OC" cards, as power dissipation rises rapidly once the clock rate gets pushed. I'd hazard a guess that underclocking a GPU reduces power faster than it reduces the work rate, at least up to a point.

mikey
Joined: 22 Jan 05
Posts: 11888
Credit: 1828065059
RAC: 206393

Quote:
Gamboleer wrote:
you would need a monster power supply that has 4 6-pin outputs, even if you did not need that much power.

I just bought a Gigabyte GTX 660 card, and it arrived with a curious little adapter which accepted two of the classic 4-pin Molex power connectors as input and provided one of the 6-pin PCIe type power connectors as output. My previous GTX460 purchase (also Gigabyte) included two of these. Probably one can find such things, or possibly a SATA to PCIe conversion of similar type, on the aftermarket.

To the original poster: you mentioned that your intended build includes an 80 Plus Gold supply, but did not mention a model or rating. It is my present understanding that graphics cards used for Einstein BOINC processing have transient power draws far above their average, lasting long enough that they are not simply smoothed out by the capacitors on the graphics card or in the PC power supply. If you follow through on this project, I'll be interested to see how things work out for you.

You can find these in most of the better PC stores as well as all over the net. My local mom-and-pop PC store has them (6 right now) in a rack full of slide-out drawers of connectors; they stock most every type of connector one could ever need.

The problem for his 200-watt PSU, though, is that those Molex-to-6-pin adapters use two Molex plugs per 6-pin connector, which means a minimum of 9 Molex plugs: 8 for the GPUs and one for a hard drive. Going with a SATA hard drive would mean fewer Molex plugs needed, but PSUs rarely have equal numbers of both kinds. A 200-watt PSU only has a couple of Molex plugs on it, so I am guessing he will need a 600-watt or larger unit. I have a 400-watt PSU in my hand and it has 4 Molex and 1 SATA plug on it, plus 1 floppy-drive connector; the 300-watt next to it has 5 Molex, 1 floppy and no SATA connectors. I am thinking he will need at least an 850-watt PSU to get enough connectors. I have 850-watt PSUs in several of my machines and even they do not have enough connectors to run 4 GPUs at once; I can run two, but even then I am using an adapter to power the second one. The first card, an AMD 7970, uses both an 8-pin and a 6-pin, leaving no more 6- or 8-pin connectors available. Going to one of the newer modular PSUs, which let you plug in only the cables you need instead of having all the wires hard-wired, means going up toward a 1000-watt unit or more.

Horacio
Joined: 3 Oct 11
Posts: 205
Credit: 80557243
RAC: 0

Quote:

The point here is not astronomical production, but efficient production. Once it is all up and running I plan to make a report. Fifty thousand credits per day on 200 watts, that is, about 40 watts per ten thousand credits a day, might not be a record, but the best I have heard of so far is about 100 watts per ten thousand. So if this works it would be a good contribution to the project discussion.

Thanks for the reply!

Well, that's the point: if the CPU is not able to keep the 4 GPUs fed, they will be waiting and wasting a certain amount of energy doing nothing...
But this is just a minor thing. In the end, if the production is not efficient, you will be able to build a second rig (at relatively low cost) to split the GPU load (or sell some of the GPUs).

Your main concern should be making sure that the mobo is not going to burn its power traces... What brand/model is the mobo you are going to use for this?
Everything else is just about performance and efficiency; it may work or not, but you won't lose money or hardware trying.

For reference, my GTX 680 + i7-2600 can do 3 BRPs every 45 minutes with 3 free cores, i.e. 4 BRPs per hour, which at 500 credits per task works out to about 48K credits/day... I have no way to measure the power consumption on that host, but the GTX 680 is rated at 195 W (and as it uses 2 extra 6-pin power connections, it shouldn't go higher than 75*3 = 225 W).

So on the green side, you may get better RAC/power ratios using just one better GPU than going for several mid-level GPUs. Of course, these are just conjectures, but the main advantage noted in every review of the high-end Nvidia cards was not raw performance; it was the remarkably better performance/watt ratio.
