GPU TDP as proxy for comparing task efficiency?

cecht
Joined: 7 Mar 18
Posts: 1,506
Credit: 2,794,053,951
RAC: 2,199,048
Topic 216328

I'm looking for some advice or insight. When comparing GPUs for task energy efficiency (credits per watt), in lieu of using a watt meter, can the rated thermal design power (TDP) of a GPU be used as a proxy for expected power draw when running GPU tasks? For example, the TDP of a Radeon RX 580 is 150 W and that of a Radeon RX Vega 56 is 210 W. Given that the Vega will run tasks about twice as fast as the 580 (work with me here), could I expect a host running two RX 580s to yield ~30% fewer credits/W than one running a single Vega 56? (The two 580s would draw 2 x 150 W = 300 W for the same output the Vega delivers at 210 W, and 300 / 210 = 1.43, i.e., the Vega would yield ~43% more credits/W.)
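
To make the arithmetic concrete, here's a rough sketch of that comparison in Python (the 2x Vega task rate and the per-task credit figure are just assumptions for illustration, not measurements):

# Rough sketch of the TDP-as-proxy comparison; the 2x Vega task rate and the
# per-task credit figure are hypothetical, not measured.
def credits_per_watt_hour(tasks_per_hour, credits_per_task, watts):
    return tasks_per_hour * credits_per_task / watts

credits = 3465                                              # hypothetical credits per task
two_580s = credits_per_watt_hour(2 * 1.0, credits, 2 * 150) # two RX 580s, 300 W total
one_vega = credits_per_watt_hour(2.0, credits, 210)         # one Vega 56 at 2x the task rate

print(f"two RX 580s: {two_580s:.1f} credits/Wh")
print(f"one Vega 56: {one_vega:.1f} credits/Wh")
print(f"ratio      : {two_580s / one_vega:.2f}  (~30% fewer credits/Wh for the 580 pair)")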

Ideas are not fixed, nor should they be; we live in model-dependent reality.

archae86
Joined: 6 Dec 05
Posts: 3,156
Credit: 7,174,664,931
RAC: 713,086

I'll interpret your question, for Einstein crunching purposes, as asking whether the actual power used by a board in Einstein crunching is, if not actually the TDP, at least in a fixed ratio to the published TDP.

I think that very much depends on what you are comparing. I've been aware of board/application pairings in the past that routinely ran at the board's power limit, and so hit the TDP outright, while other boards have routinely run far below all power limits. Using your proposed method to compare two boards, one from each of these camps, would suffer a larger error.

Nevertheless, I think it is not a bad approximation for a start, though it is best supplemented by actual use of a power meter on a system running the specific application of concern. It need not be your own system and power meter, but unless based on that kind of direct measurement, I would not trust such comparisons to better than, perhaps, the 30% relative error level.

cecht
Joined: 7 Mar 18
Posts: 1,506
Credit: 2,794,053,951
RAC: 2,199,048

archae86 wrote:
I'll interpret your question, for Einstein crunching purposes, as asking whether the actual power used by a board in Einstein crunching is, if not actually the TDP, at least in a fixed ratio to the published TDP.

I like your phrasing of the problem better than mine. :-)

archae86 wrote:
...but unless based on that kind of direct measurement, I would not trust such comparisons to better than, perhaps, the 30% relative error level.

Yeah, I was afraid of that. Perhaps someday I'll have all the components needed to run a direct comparison (and post the results!), but until then I'll go with the "not a bad approximation" school of thought and get an RX Vega on eBay.

My goal is to get into the Million RAC Club (yes, I know it's not really a thing) without having my wife wave the power bill in my face and say, "What are you DOING with those computers?"

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Keith Myers
Joined: 11 Feb 11
Posts: 4,885
Credit: 18,416,261,028
RAC: 5,869,955

I don't know anything about AMD cards. Nvidia cards can show you the current GPU wattage while crunching a task via the nvidia-smi application, which is available in both the Windows and Linux driver packages. In my experience the TDP of an Nvidia card is never reached while crunching: maybe up to 80% of TDP, and typically only about 60%.
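
(In case it helps anyone, here's a minimal sketch of logging that number over a run with a short Python script; it assumes nvidia-smi is on the PATH, and the one-minute sampling window is arbitrary.)

# Sample board power once per second for a minute and average it.
# Assumes nvidia-smi is on the PATH; the window length is arbitrary.
import subprocess
import time

samples = []
for _ in range(60):
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    samples.extend(float(w) for w in out.split())  # one reading per GPU per sample
    time.sleep(1)

print(f"average board power: {sum(samples) / len(samples):.1f} W")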

archae86
Joined: 6 Dec 05
Posts: 3,156
Credit: 7,174,664,931
RAC: 713,086

Keith Myers wrote:
Nvidia cards can show you the current GPU wattage while crunching a task

There is a bit of an issue both with how accurate that number is and with just what it includes. One of our members who actually writes code to report this sort of status suggested that not all power-consuming portions of the card are included in those numbers. And for an absolute certainty, none of the extra power burned by the rest of the system to support Einstein (CPU, memory, a power supply running at less than 100% incremental efficiency) is included. But if the OP's interest is in the total added power consumption visible to his wife on the family power bill, all of these (and even some others) count as well.

There is a reason I advocate buying and using a power meter. The most popular last time I looked went by the rather cheesy name of Kill A Watt, but there are other choices. In the US, they run you about $30. You might be surprised what you can learn by using one.

cecht
Joined: 7 Mar 18
Posts: 1,506
Credit: 2,794,053,951
RAC: 2,199,048

archae86 wrote:
... You might be surprised what you can learn by using one.

I like surprises. My original quandary was about which GPUs to buy, but your point is well taken. I will pick up a power meter and look forward to wrangling watts and compute parameters.

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Keith Myers
Joined: 11 Feb 11
Posts: 4,885
Credit: 18,416,261,028
RAC: 5,869,955

Your interest, of course, is in crunching Einstein tasks and in which card is most power efficient. I can point you to the SETI performance-per-watt and credit-per-watt tables that Shaggie76 puts together every couple of months; you might find some of that information interesting. They cover both AMD/ATI and Nvidia hardware.

https://setiathome.berkeley.edu/forum_thread.php?id=81962&postid=1945872

cecht
Joined: 7 Mar 18
Posts: 1,506
Credit: 2,794,053,951
RAC: 2,199,048

Thanks for that link to the SETI GPU performance data, Keith; there's a wealth of information there! Do you have any idea where Shaggie76 gets the wattage data for those analyses? From the GitHub summaries, his Perl scripts seem to deal only with credits per hour.

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Keith Myers
Joined: 11 Feb 11
Posts: 4,885
Credit: 18,416,261,028
RAC: 5,869,955

I think he just uses the manufacturers' published TDP for each card. But he might also be asking individual users what kind of power usage they see on their cards. With Nvidia cards you can use the provided nvidia-smi utility; I don't know if there is an equivalent for ATI. There are also countless posts in the Number Crunching forum from users posting screenshots of nvidia-smi output; I have posted dozens myself. He might just read forum posts and gather individual data that way too. I'm just guessing. I could go back through that thread and see whether someone has already asked and answered that very question.

Keith Myers
Joined: 11 Feb 11
Posts: 4,885
Credit: 18,416,261,028
RAC: 5,869,955

I did a search on that thread and the question never got directly discussed, but I did turn up two posts by HAL9000 on how he calculates watts per task on a Raspberry Pi, a GTX 750 Ti, and an R9 390X.

https://setiathome.berkeley.edu/forum_thread.php?id=79570&postid=1786051#1786051

https://setiathome.berkeley.edu/forum_thread.php?id=79086&postid=1765599#1765599

cecht
Joined: 7 Mar 18
Posts: 1,506
Credit: 2,794,053,951
RAC: 2,199,048

Okay, thanks Keith Myers, I learned a lot from those SETI discussions. For power usage, HAL9000 at SETI@Home mostly uses TDP, but sometimes GPU-Z, and bases his GPU comparisons on the number of tasks processed (watt-hours per task). From the discussions I've read here and at S@H, however, it seems there are always unaccounted-for variables when calculating task speed and power efficiency.

I'll be retiring in a couple of months and plan to pick up a second host and a third GPU. I'll then have two hosts with (hopefully) widely different CPU capabilities and three GPUs, also with widely different capabilities. With these pairings, I want to try to nail down some of these variables with direct measurements. It will be a limited comparison, but hey, it's a start.

Drawing on ideas from others, I've come up with a plan. To put the focus on GPU power efficiency for E@H credit generation, I'd calculate credits/kWh, like Shaggie76 does for SETI GPU comparisons. I like the idea of using a day as the time interval because daily E@H credits for a host can be obtained from either BOINC Stats or Free-DC CPID stats (using "yesterday's" credits). kWh measurements will be taken with a power meter, as you suggested (I've ordered mine!). A "measurement day" would need to be synced between credits and kWh, so I need to figure out when a day starts and ends for BOINC Stats or Free-DC. I see that the daily credit tallies don't agree between those two sources, for reasons I don't understand, so I'd stick to one or the other. In any case, several days of credit and kWh measurements would be taken and averaged to smooth over day-to-day variability in host credits.
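
As a back-of-the-envelope sketch of that metric in Python (every number below is made up for illustration):

# Planned credits-per-kWh calculation; all figures here are hypothetical.
daily_credits = [95_000, 101_000, 98_500]  # "yesterday" credit tallies from one stats site
daily_kwh = [4.8, 5.0, 4.9]                # matching daily readings from the power meter

avg_credits = sum(daily_credits) / len(daily_credits)
avg_kwh = sum(daily_kwh) / len(daily_kwh)
print(f"{avg_credits / avg_kwh:,.0f} credits per kWh")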

To separate the power draw of a GPU card from that of its host, a baseline (idle) host kWh would be measured without running E@H tasks. Then, to compare GPUs directly, each GPU would be swapped into that host in turn to run E@H GPU tasks only and obtain its daily averages. For simplicity's sake, comparisons would be made with GPU tasks running singly, not concurrently, so CPU core availability shouldn't be a confounding variable.
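
The baseline subtraction would then look roughly like this (again, hypothetical numbers):

# Attribute only the incremental energy above the idle baseline to crunching.
idle_kwh_per_day = 1.2     # host idle, no E@H tasks (hypothetical)
total_kwh_per_day = 4.9    # same host crunching with the GPU under test (hypothetical)
credits_per_day = 98_500   # hypothetical daily credit average

gpu_kwh = total_kwh_per_day - idle_kwh_per_day
print(f"{credits_per_day / gpu_kwh:,.0f} credits per incremental kWh")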

Because the host OS (Linux, Windows) can affect how well GPUs run tasks (AMD cards in particular seem to have a noticeable advantage on Linux systems), I'm actually considering loading different OSes on each host and measuring the different GPUs under each OS-host pair. Like I said, retirement. The trick will be to get all this done within the same E@H LATeah workunit data series.

I know there are other variables in comparative GPU performance, but I can't (or won't) control for these:

GPU memory - From what I've perused of E@H host stats, it seems that AMD 4xx-5xx 8 GB cards run tasks faster than their 4 GB counterparts. I don't know about Nvidia. I won't have enough cards to measure this directly. There is plenty of host data in the E@H stats, but I can't discern whether the more-is-faster advantage is inherent to compute speed or an effect of running more concurrent tasks on GPUs with more memory.

GPU mods (overclocking, power limits, undervolting, etc.) - A big bucket of worms. I'll either run default settings for each card or pre-optimize the settings for each card and run the evaluations with those. Does anybody out there have thoughts on the best approach for wrestling with these kinds of variables?

Host memory - Can host memory be limiting for GPU tasks?

Are there other variables that I need to consider (or ignore)?

Ideas are not fixed, nor should they be; we live in model-dependent reality.
