Maxwell 2

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 583184804
RAC: 148864


Efficiency-wise it could be a good idea to run E@H on one chip of a dual-chip card (like the GTX 690) and GPU-Grid on the other: E@H would get more of the PCIe bandwidth it needs so badly, while GPU-Grid would get more of the power budget it needs badly. Although, as far as I know, the BOINC devs have no intention of giving us such fine-grained control.

MrS

Scanning for our furry friends since Jan 2002

archae86
Joined: 6 Dec 05
Posts: 3161
Credit: 7282271708
RAC: 2039262


Quote:
Just tonight something made me wonder: How might the productivity of a machine equipped with 2 GTX 750's (or 750 Ti's which seem more favored around here) compare in total output, power productivity, and price productivity to the same machine with a single 970 or 980?


As part of my recent journey of trying some configurations before settling on a particular allocation of two older GTX 660s, one 750 bought early this year, and a 750 Ti and a 970 bought within the last month, I generated a comparison of a single 970 running in host Stoll7 vs. the slightly mismatched pair of a base-model 750 (1 GB, not overclocked) plus a superclocked 750 Ti running in the exact same host.

The answer I got, which is to some degree specific to this host and to my configuration (number of GPU jobs, number of CPU jobs, and the affinity and priority managed by Process Lasso), is that for the current Perseus application the dual-750 installation beat the single 970 in all respects:

1. lower total purchase cost
2. lower system power consumption when running the Einstein work load
3. higher Einstein work production
4. easier installation

Regarding installation, the 970 card I used was long enough to pose a little positioning trouble with respect to hard drive cabling, while the much shorter 750 cards, which also lacked extra power connectors, were dead easy.

These performance and power answers are specific to the current Perseus code running on this particular Windows 7 host. If future Einstein code improves the Maxwell fit, this answer might change, particularly if the code gains more on "big" Maxwells than on small ones.

Details of performance comparison observations:

Running 2 Perseus jobs per GPU plus 2 FGRP4 CPU jobs, the 750 + 750 Ti installation burned an average of 167 watts at the wall while generating credit at a 60,350/day rate.

Running 3 Perseus jobs plus 2 FGRP4 CPU jobs, the 970 burned 187 watts while generating credit at a 58,670/day rate.
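From those two measurements, a quick back-of-the-envelope power-productivity comparison can be made; this small sketch just divides the reported credit rates by the reported wall-power figures (the configuration names are labels of convenience, not anything official):

```python
# Credit-per-watt comparison using the wall-power and credit figures reported above.
configs = {
    "750 + 750 Ti": {"watts": 167, "credit_per_day": 60350},
    "GTX 970":      {"watts": 187, "credit_per_day": 58670},
}

for name, c in configs.items():
    # Credit per day per wall watt: higher is better.
    eff = c["credit_per_day"] / c["watts"]
    print(f"{name}: {eff:.0f} credit/day per watt")
# → 750 + 750 Ti: 361 credit/day per watt
# → GTX 970: 314 credit/day per watt
```

So on these numbers the dual-750 setup wins on power productivity by roughly 15%, on top of its small lead in raw output.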

As I've questioned here how much benefit the 750 Ti variant gives over a base 750 on current Perseus code, I'll mention the observed contribution of the two different cards while they were running together in my test:

With each running two Perseus jobs, the superclocked 750 Ti's average observed elapsed time was 5:28:03, while the low-end 750's was 5:46:07. So current Perseus code obtains remarkably little advantage from the extra execution resources plus appreciably higher clock rate of this particular 750 Ti vs. this particular base 750. (These times are considerably inferior to what either card alone would attain on this host, as the cards are serviced less promptly and adequately when sharing it.)
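To put a number on how small that advantage is, the two elapsed times above can be converted to seconds and compared directly (a simple sketch using only the times reported in this post):

```python
def hms_to_seconds(h, m, s):
    """Convert an h:m:s elapsed time to seconds."""
    return h * 3600 + m * 60 + s

t_750ti = hms_to_seconds(5, 28, 3)   # superclocked 750 Ti, avg elapsed time
t_750   = hms_to_seconds(5, 46, 7)   # base-model 750, avg elapsed time

# How much longer the base 750 takes than the 750 Ti, as a percentage.
pct_slower = (t_750 / t_750ti - 1) * 100
print(f"base 750 is {pct_slower:.1f}% slower")
# → base 750 is 5.5% slower
```

About a 5.5% gap, despite the 750 Ti's extra shader resources and higher clocks — consistent with the suspicion that something other than GPU execution resources limits Perseus throughput here.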

Lastly, regarding host effects:

After the test mentioned above, I moved the 970 from the 4-core non-HT Sandy Bridge host on which the tests were run to a 2-core HT Haswell. The Haswell host supports the 970 sufficiently better to obtain appreciably higher Einstein production: 64,164 system credit/day vs. 58,666.

On a less Maxwell note, my final configuration after all the shuffling includes two hosts with dual GPUs of identical model GTX 660s, each paired with a 750 (the same plain 750 in the older Westmere box, and the 750 Ti in the new Sandy Bridge box). As the Westmere is PCIe 2.0 while the Sandy Bridge is 3.0, and dire things have been said here about the impact of lesser PCIe bandwidth, I was genuinely surprised to find myself getting somewhat higher GPU performance (from both the 660 and the 750) on the Westmere box than on the Sandy Bridge box. This Westmere has bigger caches and more memory channels than this Sandy Bridge, so despite its age it may offer better memory latency, throughput, or both in this application.
