Nvidia Pascal and AMD Polaris, starting with GTX 1080/1070, and the AMD 480

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

Running BRP4G on my two GTX 750 Ti's (minimal factory overclock, running at 1210 MHz) under Win7 64-bit requires 40 watts per card according to GPU-Z.  But that does not account for all of the card power: the digital meter on my UPS shows that they draw about 48.5 watts extra when they are operational.  And that is not quite all either, since removing the cards from the PCIe slots entirely saves another 8 watts.

So as measured from the wall, the cards use 48.5 + 8 = 56.5 watts.  However, accounting for the 91% power supply efficiency, the cards themselves use 56.5 x 0.91 = 51.4 watts.  That is close enough for me.
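To make the bookkeeping explicit, here is a minimal Python sketch that just reproduces the arithmetic above (all figures are the ones quoted in this post; the 91% supply efficiency is an assumption about this particular PSU):

    # Wall-socket accounting for the GTX 750 Ti cards, per the post above.
    wall_extra_when_crunching = 48.5  # W: extra wall draw while the cards compute
    wall_extra_when_present = 8.0     # W: saved by removing the cards entirely
    psu_efficiency = 0.91             # assumed supply efficiency

    at_the_wall = wall_extra_when_crunching + wall_extra_when_present  # 56.5 W
    at_the_cards = at_the_wall * psu_efficiency                        # ~51.4 W
    print(f"wall: {at_the_wall} W, cards: {at_the_cards:.1f} W")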

PS - Each work unit takes about 2150 seconds when BRP4G is the only BOINC project running on my i7-4771.  When also running four CPU cores on 1.02 gravity wave CV, the BRP4G tasks take about 2250 seconds.

 

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7229808196
RAC: 1154979

Todderbert wrote:
My normal 750 Ti reports 45 watt usage.  My 750 Ti FTW with the six-pin connector reports less, but consumes the same power as my normal non-six-pin 750 Ti.

Adding together reports here with my own experience, I gravely doubt that power numbers from 750 and 750 Ti cards, based on the cards' own reporting, are at all reliable.  I urge people to give stronger weight to numbers based on actual power meters at the system wall socket, properly averaged over time.  While it is true that this method cannot isolate power consumed by the card itself from power consumed by extra system activity in support of the card, or from losses due to less-than-100% incremental efficiency in the system power supply, it has the merit of being directly measurable, and by differential comparison of configurations and operating conditions one can get the real marginal power cost of GPU computing.  Arguably that is of more interest than the power directly consumed by the card.
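For anyone wanting to apply that differential method, a minimal sketch follows. The readings are hypothetical; in practice each list would hold wall-meter samples taken under one fixed configuration, averaged over a long enough window:

    # Hedged sketch of the differential wall-power comparison described above.
    def average(samples):
        return sum(samples) / len(samples)

    # Hypothetical wall-meter samples (watts) for two operating conditions.
    baseline = [54.2, 53.8, 54.5, 54.1]    # system without GPU crunching
    gpu_loaded = [121.7, 122.3, 121.9]     # same system, GPU task running

    marginal = average(gpu_loaded) - average(baseline)
    print(f"marginal power cost of GPU computing: ~{marginal:.1f} W")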

By the way, while the Kill A Watt dominates discussions of power meters here, I'd point out that only the more expensive of their two models has a reset button allowing averaging to be started once the intended condition is met.  Further, the Ensupra model I currently use offers more consistently adequate resolution, as some range breaks in the Kill A Watt drive the resolution quite low.

My 1050 is due to arrive November 1.  I'm currently collecting comparison data for a 750 (base model) in the same box, so I should be able to offer 1050 vs. 750 comparison data as the week goes by.

Overclocking Pascal has been a bit odd in various ways, and different from the also-odd experience of overclocking Maxwell2, so I hope people who overclock will report the tools and methods they used, and any difficulties encountered.  Across the whole Pascal line so far, the automatic boost seems to give us most of the available core clock speed (often far above the nameplate rating), but on some models there has been an opportunity to get the memory clock up appreciably.  As BRP6 was quite responsive to memory clock changes, this was worth pursuing.  I don't have personal experience on this point for BRP4G.

Mumak
Joined: 26 Feb 13
Posts: 325
Credit: 3528304207
RAC: 1454504

Yes, a GPU's own power reporting is never 100% reliable, as no GPU features digital PWMs capable of measuring current and voltage on all rails (that would be quite an expensive implementation). Most GPUs feature such PWMs on VDDC (GPU Core), and some also on MVDDC (GPU Memory) or VDDCI (GPU Memory Controller/Aux). From my experience with AMD Polaris, for example, their own power reporting works as a sum of VDDC power (Vcore, really measured via the PWM) + estimated VDDCI power (based on activity counters) + a constant RoC power.
No idea how NVIDIA does their power measuring internally, but I assume it's similar to the AMD method.
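As a rough illustration, that description amounts to something like the following sketch. The scale factor and the RoC constant are purely hypothetical placeholders, not actual firmware values; only the shape of the sum comes from the post above.

    # Illustrative model of the Polaris-style power reporting described above.
    # Only VDDC is genuinely measured; the other two terms are estimates.
    def reported_gpu_power(vddc_measured_w, vddci_activity, roc_constant_w=10.0):
        # vddc_measured_w: real current x voltage reading from the core PWM
        # vddci_activity:  0.0..1.0 activity counter used to estimate aux power
        # roc_constant_w:  hypothetical fixed "rest of chip" allowance
        vddci_estimated_w = 15.0 * vddci_activity   # illustrative scale factor
        return vddc_measured_w + vddci_estimated_w + roc_constant_w

    print(reported_gpu_power(vddc_measured_w=45.0, vddci_activity=0.6))  # 64.0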

-----

Anonymous

As of 1549 EST, Best Buy is showing some

EVGA - NVIDIA GeForce GTX 1050 Ti 4GB GDDR5 PCI Express 3.0 Graphics Card

for $149.00

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2960912657
RAC: 698420

I've been playing around with some new kit too. Measurements were taken with a UK-specification Kill A Watt clone - quite possibly identical internally, but the model numbers don't line up. The first number it showed was 245 volts, on a 230 V nominal circuit...

The new computer is an i5-6500 CPU @ 3.20GHz on an OEM motherboard (Dell Optiplex) - so no overclocking, and no dual GPU use. But it did come with dual licences for Windows 7 and Windows 10: power figures are for Windows 7 only so far.

Using the integral HD 530 GPU (for both driving the display and crunching):

  • Booted to Windows desktop, not crunching: 15W
  • Running four CPU tasks (2x Numberfields, 2x SETI): 54W
  • Adding SETI GPU task to the above: 65W

Replacing the HD 530 with an NV GTX 1050 Ti, again for both display and crunching:

  • Booted to Windows desktop, not crunching: 21W
  • Running four CPU tasks (2x Numberfields, 2x SETI): 60W
  • Adding GPUGrid CUDA 8.0 task to the above: 122W (variable, depending on task - this is highest I've seen)

So, I reckon the 1050 Ti is drawing about 6 W when idle, and up to 68 W when crunching. The model is the Asus 'Expedition' - "Built for non-stop action", according to the sales blurb - with twin ball-bearing fans. It's rated at 75 W TDP, and powered from the bus only - no 6-pin auxiliary power.
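Taking the HD 530 configuration's CPU-only figure as the baseline, the card's share falls straight out of the differences between the matching rows above:

    # Deltas between the two configurations listed above (watts at the wall).
    hd530 = {"idle": 15, "cpu4": 54, "cpu4_gpu": 65}
    gtx1050ti = {"idle": 21, "cpu4": 60, "cpu4_gpu": 122}

    card_idle_w = gtx1050ti["cpu4"] - hd530["cpu4"]        # 60 - 54 = 6 W
    card_crunch_w = gtx1050ti["cpu4_gpu"] - hd530["cpu4"]  # 122 - 54 = 68 W
    print(f"1050 Ti: ~{card_idle_w} W idle, up to ~{card_crunch_w} W crunching")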

I haven't attached it to Einstein yet, but I hope we might use it to debug the intel_gpu/Skylake problem in due course: it's already helped perform that function for SETI.

Mumak
Joined: 26 Feb 13
Posts: 325
Credit: 3528304207
RAC: 1454504

I did some more tests on an XP32 machine with a different 750 Ti (+ 6-pin) running 1x BRP4G, and the power difference between GPU idle and load is ~45 W. I assume running 2 WUs would get close to 50 W.

So I think we can conclude the 1050 Ti has ~20% better BRP4G performance while maintaining approximately the same power consumption as the 750 Ti. A new performance/power king :-)
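Putting rough numbers on that conclusion (a sketch only; the ~20% throughput edge and the ~45 W load figures come from the posts above, and both vary by host and clocks):

    # Rough BRP4G performance-per-watt comparison, per this thread's figures.
    cards = {
        "750 Ti":  {"relative_throughput": 1.0, "load_watts": 45.0},
        "1050 Ti": {"relative_throughput": 1.2, "load_watts": 45.0},  # ~20% faster
    }
    for name, c in cards.items():
        perf_per_watt = c["relative_throughput"] / c["load_watts"]
        print(f"{name}: {perf_per_watt:.4f} relative tasks per watt")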

-----

Trotador
Joined: 2 May 13
Posts: 58
Credit: 2122643213
RAC: 0

Mumak wrote:

I did some more tests on an XP32 machine with a different 750 Ti (+ 6-pin) running 1x BRP4G, and the power difference between GPU idle and load is ~45 W. I assume running 2 WUs would get close to 50 W.

So I think we can conclude the 1050 Ti has ~20% better BRP4G performance while maintaining approximately the same power consumption as the 750 Ti. A new performance/power king :-)

 

Thanks for the data. It is good, but certainly not good enough to justify substituting one card for the other. It seems that 750 Tis are still a good choice (if found cheap).

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7229808196
RAC: 1154979

My 1050 (not Ti) card is currently generating results on my oldest PC where I swapped it for a plain-Jane 750 card.  While I have run that 750 overclocked ever since the 970 matter introduced me to the possibilities of graphics card overclock on Einstein, in preparation for this comparison I ran it at stock clock for a day.

At stock, running 2X GPU BRP4G tasks plus 1 CPU task, the GPU productivity on this host is over 40% higher for the 1050 than for the 750.  There is no free lunch, so the system power consumption is up by almost 6 watts.  Still, the power productivity is nicely improved on an incremental basis, and dramatically improved on a total-system basis.

Oddly enough, the nominal price I paid for this particular 1050 card was the same to the penny as what I paid for this (early) 750: $119.99.

This 750 came with pretty low stock clocks, and when overclocked it actually came pretty close to matching my 750 Ti SC cards.  I don't expect the 1050 to gain nearly so much from overclocking, so my final productivity comparison in my actual use condition is not likely to be as favorable as this stock vs. stock comparison.  After some seconds of running BRP4G, this 1050 raised the GPU clock to 1670.5 MHz.  The memory clock is reported by GPU-Z as 1752.0 MHz, which corresponds to 3504 as reported by some other utilities such as MSI Afterburner, or to 7008 in the fever swamps of the Nvidia marketing department.
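For reference, those three memory-clock figures are the same physical clock quoted under three conventions (the actual clock, the DDR data rate, and the quad-pumped GDDR5 "effective" rate):

    # One GDDR5 memory clock, three reporting conventions (per the post above).
    actual_mhz = 1752.0            # as reported by GPU-Z
    ddr_rate = actual_mhz * 2      # 3504, as shown by e.g. MSI Afterburner
    effective = actual_mhz * 4     # 7008, the marketing "effective" rate
    print(actual_mhz, ddr_rate, effective)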

Power self-reporting by the 750 and 1050 is NOT comparable.  You are warned.

 

Todderbert
Joined: 3 Jun 15
Posts: 1285
Credit: 645963019
RAC: 0

archae86 wrote:

My 1050 (not Ti) card is currently generating results on my oldest PC where I swapped it for a plain-Jane 750 card.  While I have run that 750 overclocked ever since the 970 matter introduced me to the possibilities of graphics card overclock on Einstein, in preparation for this comparison I ran it at stock clock for a day.

At stock, running 2X GPU BRP4G tasks plus 1 CPU task, the GPU productivity on this host is over 40% higher for the 1050 than for the 750.  There is no free lunch, so the system power consumption is up by almost 6 watts.  Still, the power productivity is nicely improved on an incremental basis, and dramatically improved on a total-system basis.

Oddly enough, the nominal price I paid for this particular 1050 card was the same to the penny as what I paid for this (early) 750: $119.99.

This 750 came with pretty low stock clocks, and when overclocked it actually came pretty close to matching my 750 Ti SC cards.  I don't expect the 1050 to gain nearly so much from overclocking, so my final productivity comparison in my actual use condition is not likely to be as favorable as this stock vs. stock comparison.  After some seconds of running BRP4G, this 1050 raised the GPU clock to 1670.5 MHz.  The memory clock is reported by GPU-Z as 1752.0 MHz, which corresponds to 3504 as reported by some other utilities such as MSI Afterburner, or to 7008 in the fever swamps of the Nvidia marketing department.

Power self-reporting by the 750 and 1050 is NOT comparable.  You are warned.

 

So I am curious: what wattage is the 1050 actually using while it's crunching, to compare with the 1050 Ti?  Also, will you be overclocking the RAM?  I may not, since I like the performance of my 1050 Ti currently running stock.

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7229808196
RAC: 1154979

Todderbert wrote:
So I am curious: what wattage is the 1050 actually using while it's crunching, to compare with the 1050 Ti?  Also, will you be overclocking the RAM?  I may not, since I like the performance of my 1050 Ti currently running stock.

I'll never know what the card is drawing itself, as I lack appropriate instrumentation and I don't trust the self-reporting at all (it was reporting the card as using nearly 50% of TDP at zero-BOINC idle, which by simple arithmetic was wildly false).  I will be able to give the change in system power draw between the idle state and GPU 2X only.  However, at the moment I am running GPU 2X plus 1 CPU task, and I neglected to measure the idle state before beginning the run.

Yes, I intend to overclock both the core clock and the memory clock, and based on previous Pascal results I think it likely that I'll see more Einstein productivity benefit from memory overclocking.
