Nvidia GTX 970

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 581798152
RAC: 139027

@Keith: any benefit to using

@Keith: any benefit to using SIV compared to simply setting a temperature target, maybe together with a fixed or custom fan curve?

@Manuel: you're not the only one with this P2 issue. There's been quite some discussion over here at Einstein, but I can't find the thread right now. It's pinned at GPU-Grid, though.

Regarding your actual problem: initially my machine also became unstable when I set higher memory clocks. But this turned out to have been a side effect of a different instability. Now I'm running happily at 1875 MHz (GPU-Z) and may have a bit of headroom left.

In your screenshots you're still running stock, right?

What I can suggest:

1. Try smaller clock speed improvements, like 1550 / 3100 MHz.
2. Test for memory errors with MemTest CL. It must not find any errors. Don't use the older MemTest G80, as it contains a bug which makes it always report errors in a specific subtest.

MrS

Scanning for our furry friends since Jan 2002

Manuel Palacios
Joined: 18 Jan 05
Posts: 40
Credit: 224259334
RAC: 0

RE: I use NvidiaInspector

Quote:

I use NvidiaInspector to set the P2 clock +100 MHz so that I run the memory at 3605 MHz for my 970s. I use SIV to set the maximum GPU temp to 65C by having SIV modulate the fan speeds. I run 6 CPU tasks and 3 tasks per GPU, and that keeps the system running around 92% CPU utilization and around 99% GPU utilization with no errors and no downclocks or reboots in the systems. I recommend SIV for controlling the GPU and also for system monitoring.

Cheers, Keith

Hello, thank you for your reply. I already have SIV installed on my PC, as it came as part of the software package bundled with my motherboard. Furthermore, I ended up uninstalling the EVGA PrecisionX software from my computer and installed MSI Afterburner in its place.

I then opened Nvidia Inspector and, curiously, under the P2 state the memory clock maximum could be set to 3505 without having to click "unlock max". So I moved the slider over, set it to run at 3505, and spent several hours doing other tasks on the computer to make sure the system was stable.

As it is with these things, perhaps the EVGA software was somehow inhibiting the card's functions. Whatever the case may be, the BRP v1.52 tasks were running well and completing in around 2h30min at 2x. I am now going to see how 3x performs, as archae86 has already indicated that 3x was more efficient.

It seems RAC-wise I still have a ways to go; his Core i3 + GTX 970 machine gets ~100,000 RAC, so the card by itself must account for ~95% of that. With two, it should be quite interesting to see where this machine ends up.

Thanks!

Manuel Palacios
Joined: 18 Jan 05
Posts: 40
Credit: 224259334
RAC: 0

RE: @Keith: any benefit to

Quote:

@Keith: any benefit to using SIV compared to simply setting a temperature target, maybe together with a fixed or custom fan curve?

MrS

MrS, thank you for your suggestions. With regard to the quote above, I have had greater stability by setting the temperature target to 85C and the power limit to 105%. However, I have not increased the voltage to the card; it runs at the stock 1200 mV.

The GPU clock on the cards is slightly overclocked, +36 MHz for both 970s. The first 970 runs at 1402 MHz and the second at 1427 MHz (both under boost clock). My case is well ventilated and the cards run at ~46-52C with the fans at 40%.

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 581798152
RAC: 139027

@Manuel: so it works now? I

@Manuel: so it works now? I had no trouble using Precision X in the past, but prefer not to keep any GPU-modifying tool continuously running. You might also need to reboot after installing / uninstalling them.

And yes, you only need "unlock max" if you want to go beyond stock clocks.

Quote:
greater stability by setting the temperature target to 85C and the power limit to 105%


Do you mean clock speed stability or "not crashing"? In your screenshots you seem to be far away from both limits, so I don't see how changing them would matter in any way. Apart from that: around 1.4 GHz at 1.2 V doesn't seem too much for GM204. Mine currently runs at 1.37 GHz @ 1.12 V and consumes 162 W (my power limit), but that's with GPU-Grid.

MrS

Scanning for our furry friends since Jan 2002

Manuel Palacios
Joined: 18 Jan 05
Posts: 40
Credit: 224259334
RAC: 0

RE: @Manuel: so it works

Quote:

@Manuel: so it works now? I had no trouble using Precision X in the past, but prefer not to keep any GPU-modifying tool continuously running. You might also need to reboot after installing / uninstalling them.

And yes, you only need "unlock max" if you want to go beyond stock clocks.

Quote:
greater stability by setting the temperature target to 85C and the power limit to 105%

Do you mean clock speed stability or "not crashing"? In your screenshots you seem to be far away from both limits, so I don't see how changing them would matter in any way. Apart from that: around 1.4 GHz at 1.2 V doesn't seem too much for GM204. Mine currently runs at 1.37 GHz @ 1.12 V and consumes 162 W (my power limit), but that's with GPU-Grid.

MrS

Sorry for the ambiguity; I meant that I have not experienced any more crashes. I did not realize the GPU clocks had so much headroom left, as I already have the SC version of the GPU. I'm not sure I want to overclock the cards too much, as I need them to last me a while. My main gripe was that the P2 state was keeping the memory from running at its rated speed.

Keith Myers
Joined: 11 Feb 11
Posts: 5009
Credit: 18892287507
RAC: 5992232

Actually, SIV is setting the

Actually, SIV is setting the temp target. I used to use custom fan curves for the case and CPU fans, but now I just run the standard fan curves in the BIOS. I use the -GPUCTL modifier for SIV to get control over the GPU fans. I never bothered with any of the other fan-control apps, since they are mostly geared to gamers and I don't game. SIV is all I need to control and monitor the systems.

I did find a production benefit to bumping the GPU memory clock up from stock. I didn't try to find the ultimate overclocked memory speed the way a gamer would; it's a nice little overclock that doesn't push the cards very hard. I tried a little core overclock but didn't notice any improvement in production. The cards run overclocked anyway at their stock boost speeds, since they aren't gaming and always have plenty of power headroom when just doing distributed computing. My four 970s run at 1367, 1354, 1356, and 1348 MHz.

Just running SIV simplifies everything, and I'm happy with the performance and production.

Cheers, Keith


archae86
Joined: 6 Dec 05
Posts: 3160
Credit: 7261175241
RAC: 1542292

Manuel Palacios wrote:I am

Manuel Palacios wrote:

I am now going to see how 3x performs as archae has already indicated that 3x was more efficient.

It seems RAC-wise I still have a ways to go; his Core i3 + GTX 970 machine gets ~100,000 RAC, so the card by itself must account for ~95% of that. With two, it should be quite interesting to see where this machine ends up.


1. I'll remind you that my previous tuning of the 970 was on Perseus work, probably running on 1.39, which is quite a different beast than 1.52. My guess is that you will see a gain from 3X, but not very much. Let us know. One of these days, I should try 4X, just to see.
2. On a sample of 180 returned WUs on 1.52, my GTX 970 got a mean elapsed time of 10,095 seconds, or 2:48:15. A formal computation of productivity from the GPU alone, with (unrealistically) zero provision for downtime for reboots and such, gives 112,975 credits/day from that GPU by itself. For reference, on 1.39 the same computation gave 71,459. I do run a couple of Einstein CPU jobs on the host, which in good times should more than make up for the GPU credit lost to short downtimes. As I post this, the actual RAC for that host is 107,398 and climbing. Three cheers to Bikeman and anyone else who was involved.
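For anyone wanting to redo the arithmetic, the credits/day figure can be reproduced with a quick back-of-envelope script. The assumptions (not spelled out explicitly in the post) are 3 tasks running concurrently on the GPU and 4,400 credits granted per BRP 1.52 task:

```python
# Sketch of the credits/day computation from mean elapsed task time.
# Assumed (not stated outright above): 3 tasks run concurrently per
# GPU, and each BRP 1.52 task grants 4,400 credits.

SECONDS_PER_DAY = 86_400

def credits_per_day(mean_elapsed_s, concurrent_tasks, credit_per_task):
    """Daily credit from one GPU, with zero provision for downtime."""
    tasks_per_day = SECONDS_PER_DAY / mean_elapsed_s * concurrent_tasks
    return tasks_per_day * credit_per_task

print(round(credits_per_day(10_095, 3, 4_400)))
```

Under those assumptions the script prints 112975, matching the figure above.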

Manuel Palacios
Joined: 18 Jan 05
Posts: 40
Credit: 224259334
RAC: 0

RE: Manuel Palacios wrote:I

Quote:
Manuel Palacios wrote:

I am now going to see how 3x performs as archae has already indicated that 3x was more efficient.

It seems RAC-wise I still have a ways to go; his Core i3 + GTX 970 machine gets ~100,000 RAC, so the card by itself must account for ~95% of that. With two, it should be quite interesting to see where this machine ends up.


1. I'll remind you that my previous tuning of the 970 was on Perseus work, probably running on 1.39, which is quite a different beast than 1.52. My guess is that you will see a gain from 3X, but not very much. Let us know. One of these days, I should try 4X, just to see.
2. On a sample of 180 returned WUs on 1.52, my GTX 970 got a mean elapsed time of 10,095 seconds, or 2:48:15. A formal computation of productivity from the GPU alone, with (unrealistically) zero provision for downtime for reboots and such, gives 112,975 credits/day from that GPU by itself. For reference, on 1.39 the same computation gave 71,459. I do run a couple of Einstein CPU jobs on the host, which in good times should more than make up for the GPU credit lost to short downtimes. As I post this, the actual RAC for that host is 107,398 and climbing. Three cheers to Bikeman and anyone else who was involved.

I switched to 3X and I'm seeing similar times on my setup. The two GPUs average around 11XXX seconds runtime with 2 CPU cores freed, one for each GPU.

I get between 85%-92% GPU usage with the 2 free cores, and with 1 free core to feed the GPUs I get 72-82% GPU usage at 3X work units per card. This is with the Core i5 running at 3.9 GHz.

cliff
Joined: 15 Feb 12
Posts: 176
Credit: 283452444
RAC: 0

RE: 2. On a sample of 180

Quote:

2. On a sample of 180 returned WUs on 1.52, my GTX 970 got a mean elapsed time of 10,095 seconds, or 2:48:15. On a formal computation of productivity from the GPU alone, with (unrealistically) zero provision for down time for reboots and such, this would give 112,975 credits/day from that GPU alone. For reference on 1.39 the same computation gave 71,459. I do run a couple of Einstein CPU jobs on the host, which in good times should more than make up for the GPU credit lost to short down-times. As I post the actual RAC for that host is 107,398 and climbing. Three cheers to Bikeman and anyone else who was involved.


Hi,
I'm running 1 WU per GPU :-) Tasks are currently completing in 1:05:17 on the 970 and slightly less on the 980, at 1:04:13.

I used to run 2x per GPU with Perseus, but temps were a little high. Now running at 55C on the 970 and 42C on the 980.

Can't afford to replace either right now, far too damn expensive, hence the slower and cooler regime :-)

Regards,

Cliff,

Been there, Done that, Still no damm T Shirt.

Manuel Palacios
Joined: 18 Jan 05
Posts: 40
Credit: 224259334
RAC: 0

Hi cliff, Are you running

Hi cliff,

Are you running the 980 and 970 in the same system? You seem to have a runtime of ~3800 seconds on average across your validated tasks with 1 task per card. I did a calculation of my validated times, and at 3X WUs per GPU I got a mean runtime of 11,936 seconds for the Parkes 1.52 app.
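For what it's worth, those two runtimes imply nearly the same per-GPU throughput. A quick sketch (elapsed times taken from this thread, with cliff's 1:05:17 ≈ 3,917 s; an illustration, not a benchmark):

```python
# Per-GPU daily task throughput implied by the runtimes quoted above:
# throughput = concurrent tasks / mean elapsed time per task.

SECONDS_PER_DAY = 86_400

def tasks_per_day(mean_elapsed_s, concurrent_tasks):
    """Tasks completed per day on one GPU, ignoring downtime."""
    return SECONDS_PER_DAY * concurrent_tasks / mean_elapsed_s

print(f"1x: {tasks_per_day(3_917, 1):.1f} tasks/day")   # cliff's 970 at 1x
print(f"3x: {tasks_per_day(11_936, 3):.1f} tasks/day")  # my 970s at 3x
```

That comes out to roughly 22.1 vs 21.7 tasks/day, so despite the very different elapsed times the cards are turning over work at almost the same rate.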

Good scaling. It seems I still have some tinkering left to do with that memory clock. All in all, I look forward to the improvements in the next iterations of the app, as the performance increases have been very notable. Great job to all involved.
