Maxwell 2

cliff
Joined: 15 Feb 12
Posts: 176
Credit: 283452444
RAC: 0

Hi Richard,
Thanks for the explanation. Will any new versions of the apps be aware of more recent cards?

Regards,
Cliff

Cliff,

Been there, done that, still no damn T-shirt.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2977097705
RAC: 776351

Quote:

Hi Richard,
Thanks for the explanation. Will any new versions of the apps be aware of more recent cards?

Regards,
Cliff


You'll have to ask the project developers about that. Hopefully this thread will nudge them into at least thinking about the issue.

archae86
Joined: 6 Dec 05
Posts: 3161
Credit: 7262195191
RAC: 1562979

Just tonight something made me wonder: how might the productivity of a machine equipped with two GTX 750s (or 750 Tis, which seem more favored around here) compare in total output, power productivity, and price productivity to the same machine with a single 970 or 980?

Were one to double the single-GTX 750 performance, that would be very, very competitive with the Maxwell 2 results obtained to date. One won't, and the key questions are the degree to which that shortfall is driven by PCIe bus communication limits and the degree to which it is driven by CPU support resource limits.

As I've about talked myself out of replacing a GTX 660 with a 970 or 980, I'm at this moment somewhat serious about looking into the double-750 possibility, assuming either of my candidate hosts actually has two usable PCIe slots.

Could anyone helpfully point me to a bit of background on the practicalities of installing two GPU cards and running Einstein on them? Also, has the double-750 setup already been well documented here in practice?

My two candidate systems have an Asrock Z77 Extreme4 in one case, which has two x16 PCIe 3.0 slots, and an Asrock Z68 Extreme3, which I think has two x16 PCIe 2.0 slots.

For the double-750 option to be useful, the current Perseus application would have to run into diminishing returns when employing a GPU with an extremely high number of parallel resources. The Maxwell 2 results we have seen here, in contrast to the much more favorable results in widely reported graphics applications, suggest this might be true.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

Quote:

Could anyone helpfully point me to a bit of background on the practicalities of installing two GPU cards and running Einstein on them? Also, has the double-750 setup already been well documented here in practice?

My two candidate systems have an Asrock Z77 Extreme4 in one case, which has two x16 PCIe 3.0 slots, and an Asrock Z68 Extreme3, which I think has two x16 PCIe 2.0 slots.


I hope these results don't disappear too soon.
http://einsteinathome.org/host/11672606/tasks
These are for two GTX 750 Tis in an Asrock Z87 Extreme3 with an i7-4770 (Haswell). The cards are the Asus GTX750TI-OC-2GD5 in the non-overclocked (or, more accurately, minimally overclocked) version, running under Win7 64-bit. I have not had any problems with them that I recall, using the latest drivers (344.11).

While we are on the subject of more versus fewer cards: I had originally intended to buy a few large cards, but they came out too late for the summer, so I now have six of the 750 Tis. That has advantages and disadvantages, but they run quiet and cool, and I need the extra PCs anyway to do more CPU projects, so it is not a big deal for me. The real advantage of the big cards is in Folding, where the Quick-Return Bonus favors fewer but faster cards. As for Einstein, I don't think it makes much difference, at least with the CUDA 3.2 apps that they have now.

Betreger
Joined: 25 Feb 05
Posts: 992
Credit: 1606832680
RAC: 676916

Quote:
As for Einstein, I don't think it makes much difference, at least with the CUDA 3.2 apps that they have now.


I find that to be sad.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

I was running only one work unit at a time on my GTX 750 Tis, since that loaded them to around 90% as I recall. Maybe you can squeeze out a lot more on the GTX 970/980 with multiple work units; I just haven't seen that reported yet. But they are all still very energy-efficient, and presumably they would all get a boost from CUDA 6.5.
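(If anyone wants to check the load on their own cards before trying multiples: assuming your driver install includes nvidia-smi and it's on your PATH, running it in looping mode reprints the utilization and memory figures every few seconds. A minimal sketch:

    nvidia-smi -l 5

A graphical monitoring tool will show you the same thing, of course.)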

RAMA
Joined: 5 May 05
Posts: 18
Credit: 657880205
RAC: 0

Quote:
I was running only one work unit at a time on my GTX 750 Tis, since that loaded them to around 90% as I recall. Maybe you can squeeze out a lot more on the GTX 970/980 with multiple work units; I just haven't seen that reported yet. But they are all still very energy-efficient, and presumably they would all get a boost from CUDA 6.5.


I was running a GTX 750 Ti with 4 tasks, since I think it's more a memory issue. I replaced it with a GTX 970 running 5 tasks, and it shows 932 MB of dedicated memory in use out of 3 GB, plus 211 MB shared.

Also, running 4 tasks on the 750 Ti seemed to be the sweet spot for max points in my setup.
I will try to run both cards in the same computer and see if it works.
I'm also interested in running OpenCL tasks on the GTX 970, since its benchmark results on the "Sala" scene look right up there with an R9 290X.
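(For anyone wanting to copy the multi-task setup: I do it with an app_config.xml in the Einstein project folder under the BOINC data directory. A minimal sketch follows; the app name below is only an example, since the real names are listed in client_state.xml, and a gpu_usage of 0.2 is what yields 5 tasks per card:

    <app_config>
      <app>
        <name>einsteinbinary_BRP5</name>
        <gpu_versions>
          <gpu_usage>0.2</gpu_usage>
          <cpu_usage>0.2</cpu_usage>
        </gpu_versions>
      </app>
    </app_config>

Set gpu_usage to 0.25 for 4 tasks per card, and re-read the config files or restart BOINC for it to take effect.)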

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 581861485
RAC: 138518

Quote:
I'm also interested in running OpenCL tasks on the GTX 970, since its benchmark results on the "Sala" scene look right up there with an R9 290X.


Just keep in mind that OpenCL is a programming language, and the actual performance of a GPU depends largely on the algorithms and compilers being used. One benchmark is a valid, specific statement regarding one special case, which does not necessarily apply to other cases.

Your point that Maxwell 2 seems to have generally improved OpenCL performance in typical benchmarks holds true, though :)

MrS

Scanning for our furry friends since Jan 2002

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Running more than 1 work unit per card doesn't improve efficiency on those cards: 2 work units per 980 just doubles the time to complete, so there is really no point in doing more than 1 at a time. Someone explained to me that Einstein depends more on PCIe pass-through than, say, SETI@home (which uses very little). At SETI, multiple work units can be run with not much increase in time to complete; that doesn't seem to hold true here. I've done testing as well, and it hasn't supported doing more than 1 WU per card.
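To put numbers on it with my Perseus figures below: 1 task at a time on a 980 takes 120 minutes, or 0.5 tasks/hour. Run 2 at a time and each takes about 240 minutes, so 2 tasks per 240 minutes is still 0.5 tasks/hour. Unless the per-task time grows by less than the multiplier, throughput stays flat.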

I run identical rigs; the only differences are total system RAM and the GPUs.

Rough estimates:
980 SC, Binary (Arecibo): 40 minutes
780 SC, Binary (Arecibo): 55 minutes

980 SC, Binary (Perseus): 120 minutes
780 SC, Binary (Perseus): 165 minutes

Biggest difference is in temps:
980s stay at 52C.
780s go from 71C to 50C depending on what slot they are in.

And power consumption: this last part I can't quantify, since I switched out my UPS and no longer have a volt meter to monitor it, but I can tell from how the line voltage regulator is working that it's drawing significantly less than the 780s computer.

Zalster

archae86
Joined: 6 Dec 05
Posts: 3161
Credit: 7262195191
RAC: 1562979

Quote:

As I've about talked myself out of replacing a GTX 660 with a 970 or 980, I'm at this moment somewhat serious about looking into the double-750 possibility, assuming either of my candidate hosts actually has two usable PCIe slots.

Could anyone helpfully point me to a bit of background on the practicalities of installing two GPU cards and running Einstein on them? Also, has the double-750 setup already been well documented here in practice?


I'm at the first step on the road toward a dual-750 configuration update to a PC currently running a single 660. This PC has two x16 PCIe 3.0 slots, and with a little card movement I think things will fit (the EVGA 750 and 750 Ti cards I'm using are pretty short, which helps avoid congestion at the disk-bay end of things).

I don't think I need help for the first step, which I expect to take today: replacing the 660 with a 750 Ti. That, of course, I expect to reduce both Einstein productivity and power consumption. I've ordered a moderately overclocked 750 Ti model, expressly intending to observe its Perseus productivity in comparison to a simple 750 I have on another machine, as part of the "Ti or not?" decision process for the remaining three cards in the project. Then I expect to order either a duplicate (if the Ti gave me a meaningful increase) or a basic 750, and I hope to have the pair running next week.

I'd appreciate a little guidance regarding the two-graphics-cards-in-one-box matter. I know lots of people do this. Should I just expect that when I power back up after adding the second card, BOINC will see it and start using it?

Naturally I plan to have a very small queue and work fetch disabled at the time I switch over, and expect some work fetch transients even if all goes well.
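(One wrinkle I've read about, though I may have the details wrong: BOINC by default uses only the GPUs it rates as most capable, so if I end up pairing a plain 750 with a 750 Ti rather than two identical cards, I may need to set use_all_gpus in cc_config.xml in the BOINC data directory. A minimal sketch, if I have the option name right:

    <cc_config>
      <options>
        <use_all_gpus>1</use_all_gpus>
      </options>
    </cc_config>

Two identical cards should presumably just be picked up on restart.)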
