AMD Radeon 6000 series (Big Navi)

archae86
Joined: 6 Dec 05
Posts: 3,156
Credit: 7,177,094,931
RAC: 739,059

Gravity Wave GPU task observations on a 6800 XT system

Earlier I posted some observations from running Einstein GRP tasks on my "transplant survivor" 6800 XT system.

Subsequently I've done some running at 1X, 2X, 3X, 4X, and 5X for the currently available Gravity Wave GPU tasks on this system. Between the two sets of observations I learned how to turn up the fans on the GPU card. Otherwise this is the same system used for the GRP observations. Importantly, it has the same modest 6-core (no HT) Intel i5-9400F CPU @ 2.90GHz [Family 6 Model 158 Stepping 10].

While Einstein GRP famously does not depend very strongly on the host CPU and other system resources, Einstein GW is quite CPU-hungry. A particular consequence for these observations is that a system boasting a CPU with much stronger per-core performance would probably beat my results substantially, especially at the lowest multiplicities. Conversely, a system with much weaker per-core CPU performance would greatly underperform mine, and a system congested by other work competing for CPU resources could do much worse still.

Main observations:

1. The system uses far less electrical power running GW than it did running GRP.
2. The benefit of climbing the first few steps of the multiplicity ladder is even larger than the already very large benefit seen for GRP.
3. Overall GPU utilization is distressingly low; even at 5X multiplicity it averages only 56%.
4. Memory controller utilization is far below GPU utilization, so it seems unlikely to be the limiting factor here, as it probably is for this card running GRP tasks at higher multiplicities.
5. While the card was able to run 5X on the particular set of GW GPU tasks available to me, GPU memory usage got high enough to suggest that 5X might not be quite safe from the adverse effects of VRAM exhaustion.
6. As the 5X production advance over 4X was small enough to cast doubt on whether it was real, I'd advocate running 4X for current GW GPU tasks on this card.

param             1X      2X      3X      4X       5X
tasks/day         178.6   311.5   459.3   525.39   567.1
GPU_Util          17.7%   33.4%   43.7%   51.3%    56.1%
MemCon_Util       4.5%    7.2%    9.3%    13.2%    14.1%
card_power (W)    69.2    82.2    97.7    111.3    116.5
System_power (W)  131.8   166.6   191.4   213.0    223.8
Max_Mem_Ded (MB)  3381    8134    8450    10615    13132
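
To put observations 2 and 6 in concrete terms, here is a minimal Python sketch (using the tasks/day row above) that computes the marginal gain from each step up the multiplicity ladder:

```python
# Marginal gain from each step up the multiplicity ladder,
# using the tasks/day row of the table above.
tasks_per_day = {1: 178.6, 2: 311.5, 3: 459.3, 4: 525.39, 5: 567.1}

for mult in range(2, 6):
    prev, curr = tasks_per_day[mult - 1], tasks_per_day[mult]
    print(f"{mult - 1}X -> {mult}X: +{curr - prev:.1f} tasks/day "
          f"({(curr / prev - 1) * 100:+.1f}%)")
```

The 1X to 2X step is worth about 74%, while the 4X to 5X step is worth under 8%, which is why I stop at 4X.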

Additional notes:
1. At the moderate fan speeds I was using, the card ran so cool that I saw no point in listing temperature by condition. In the hottest case (5X), the average GPU temperature reported was 49C and the average GPU memory junction temperature was 51C.

2. Einstein GW tasks vary considerably in the compute effort they require from the GPU. I attempted to make my multiplicity comparisons more meaningful by first acquiring a stock of several hundred tasks, sorting them in task-name order, and suspending four of each five in sequence for the first test condition. For the second test condition I activated one of each remaining four, and so on. While I hope that, especially at the lower multiplicities, this worked well enough to make my comparisons meaningful, there is no hope that a batch of results logged by another user on another machine will be especially well matched to these.
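
For anyone curious how the striding works out, here is a hypothetical sketch (the function name and list handling are mine; the actual suspending was done by hand in the BOINC Manager):

```python
# Hypothetical sketch of the task-stratification scheme described above.
# Sorting by name and striding by five approximates the same task mix
# across test conditions.
def active_for_condition(task_names: list[str], condition: int) -> list[str]:
    """Pick the tasks to leave unsuspended for test condition 1..5 (1X..5X)."""
    ordered = sorted(task_names)
    return [name for i, name in enumerate(ordered) if i % 5 < condition]
```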

3. While it is a good idea to compare power efficiency, it is a trap to suppose that any of the many card-level power reports captures the full impact of running Einstein on actual wall-socket power consumption (the only kind that counts as far as paying the utility company is concerned). Note how much bigger the steps to higher multiplicities are for System Power (recorded directly by measurement on the power cord to the PC box) than for "card power", for which I used the HWiNFO GPU PPT number, the highest reported. In the particular case of Einstein GW work, a good deal of the extra power increment goes directly to the CPU, but there are increases elsewhere in the box as well: motherboard RAM, motherboard power conversion inefficiency, PC power supply conversion inefficiency, and so on.
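
A small Python sketch makes the point from the GW table above: the wall-socket steps are much larger than the card-power steps, and the difference is power consumed elsewhere in the box.

```python
# Card-power vs wall-socket power steps per multiplicity,
# from the GW table above (watts).
card_w = [69.2, 82.2, 97.7, 111.3, 116.5]
system_w = [131.8, 166.6, 191.4, 213.0, 223.8]

for i in range(1, 5):
    d_card = card_w[i] - card_w[i - 1]
    d_sys = system_w[i] - system_w[i - 1]
    print(f"{i}X -> {i + 1}X: card +{d_card:.1f} W, "
          f"system +{d_sys:.1f} W, elsewhere +{d_sys - d_card:.1f} W")
```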

4. While the other numbers are generally averages, typically over several hours running dozens of tasks, the GPU memory reading is a peak. I did not try hard to control for things other than Einstein consuming some VRAM. But the fact that many of my tasks were delayed in validation because the first couple of quorum partners were 2 GB cards, which generated compute errors with the usual stderr complaint of CL_MEM_OBJECT_ALLOCATION_FAILURE, shows that VRAM-hogging tasks are still being distributed. I include this line mostly as a caution to people who may be tempted to run higher multiplicities than will work well for them, without monitoring for adverse effects.
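
As a rough illustration of why those 2 GB quorum partners fail, a least-squares line through the Max_Mem_Ded row above gives an approximate per-task VRAM footprint (a crude estimate, since these are peak readings and other programs consume some VRAM too):

```python
# Crude per-task VRAM estimate: least-squares line through the
# Max_Mem_Ded row above (MB), modeling usage = base + per_task * multiplicity.
mults = [1, 2, 3, 4, 5]
peak_mb = [3381, 8134, 8450, 10615, 13132]

n = len(mults)
mx = sum(mults) / n
my = sum(peak_mb) / n
per_task = (sum((x - mx) * (y - my) for x, y in zip(mults, peak_mb))
            / sum((x - mx) ** 2 for x in mults))
base = my - per_task * mx
print(f"~{per_task:.0f} MB per task, ~{base:.0f} MB base")
# ~2,200 MB per task: more than a 2 GB card can allocate, hence the
# CL_MEM_OBJECT_ALLOCATION_FAILURE errors.
```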

Credit: Following up on a post footnote from cecht, I found the means to generate this BBCode table from my tab-delimited text file at the TheEnemy table converter.

Tom M
Joined: 2 Feb 06
Posts: 6,258
Credit: 8,901,653,658
RAC: 10,033,243

Archae86,

It does look like you could upgrade your Intel CPU to one with more threads and hyper-threading, if the motherboard supports it.

Since it apparently is an 8th-gen platform, you might even be able to upgrade to a 9th-gen i9-9900 (8c/16t) type CPU with a BIOS update.

Are you going to add another GPU to that box? ;)

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

archae86
Joined: 6 Dec 05
Posts: 3,156
Credit: 7,177,094,931
RAC: 739,059

6800 XT Power Limitation Observations

My previously reported "transplant survivor" 6800 XT system continues to run Einstein well. After my look at GW task behavior I returned to a steady diet of GRP tasks.

I hoped to find that I could reduce power consumption with a smaller loss of Einstein production, for a net productivity gain. As my previous personal experience with a VII, some 570 cards, and some 5700 cards all had me using the Power Limit slider in MSI Afterburner, I looked there first. My notion was that, generally, when one cut things back using a power limit, some clever interaction of the driver, the card firmware, and the card made good choices of where to set both the GPU clock rate and the GPU voltage, so that results were correct but production stayed as high as the power level allowed.

To my dismay, the user interface limited this slider's movement to -6%. I moved on to clock rate. I imagined I'd get some power reduction out of a lower clock rate, but that I would need to adjust the voltage down on my own, searching for the lower bound at a given clock rate, then moving back up a bit for safety. I was surprised and pleased to find:

1. When I set the clock rate maximum lower, the card ran at lower GPU voltage without my saying anything.
2. The combined impact of lower clock rate and lower voltage gave very considerable power savings.
3. The loss in Einstein productivity with clock rate was far less than proportional.
4. The results validate.
5. Therefore, a big improvement in system level power productivity showed up across a considerable range of choices.
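
Just how far less than proportional is easy to quantify from the extreme columns of the table below (the 1,600 MHz cap versus the 2,324 MHz default):

```python
# Quantify "far less than proportional" using the extreme columns
# of the table below: the 1,600 MHz cap vs. the 2,324 MHz default.
clock = (1584.8, 2310.6)  # actual GPU clock, MHz
power = (146.1, 221.7)    # card power, W
tasks = (616.3, 660.4)    # tasks/day at 4X

for label, (low, high) in [("clock", clock), ("card power", power),
                           ("tasks/day", tasks)]:
    print(f"{label}: {(1 - low / high) * 100:.0f}% lower")
```

A 31% clock reduction and 34% card-power reduction cost only about 7% of throughput.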

The table below shows results from running about a dozen tasks each at five distinct Core Clock maximum settings as put into MSI Afterburner. The tasks were Einstein Gamma-Ray Pulsar GPU tasks with task names starting LATeah3003L00, issued by the project near 7:00 UTC on March 2, 2021. I used HWiNFO to compute averages over somewhat more than a half hour of run time at each test condition. The first four columns are test conditions I set at intentionally reduced GPU clock rates. The "2,324" column is operation at default clocks and voltage (but not at default fan speed, which for all five cases was controlled by the same user-specified fan curve from Afterburner).

GClk Max (MHz)              1,600      1,900      2,100      2,250      2,324
GPU Clock (MHz)             1,584.8    1,886.4    2,083.9    2,234.7    2,310.6
Card Power (W)              146.1      156.0      179.4      204.9      221.7
GPU Temp (C)                57.6       59.2       62.5       65.6       67.2
GPU Fan (rpm)               1725       1767       1901       2020       2104
Memory Junction Temp (C)    70.1       71.9       74.2       76.0       77.0
GPU voltage (V)             0.881      0.906      1.000      1.087      1.137
Task elapsed time           0:09:21    0:09:01    0:08:50    0:08:47    0:08:43
Tasks/day at 4X             616.3      638.9      651.5      656.3      660.4
Credit/day                  2,124,727  2,202,886  2,246,328  2,262,782  2,276,680
Wall socket kWh             0.157      0.169      0.143      0.198      0.372
Wall socket (W)             224.2      236.7      266.5      298.2      318.4
Credit/day per system watt  9,477      9,305      8,430      7,587      7,150
Credit/day per card watt    14,545     14,117     12,520     11,045     10,268
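
For anyone who wants to recompute the two figure-of-merit rows, a minimal sketch:

```python
# Recompute the credit/day-per-watt figures of merit from the table above.
credit_per_day = [2_124_727, 2_202_886, 2_246_328, 2_262_782, 2_276_680]
card_w = [146.1, 156.0, 179.4, 204.9, 221.7]
wall_w = [224.2, 236.7, 266.5, 298.2, 318.4]

for cpd, cw, ww in zip(credit_per_day, card_w, wall_w):
    print(f"per card watt: {cpd / cw:,.0f}   per system watt: {cpd / ww:,.0f}")
```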

I consider credit/day per card watt to be the primary figure of merit for the card. For comparison purposes it suffers from the fact that current Einstein GRP tasks have a considerably higher credit "yield" than has been true for many months past, and that yield has more short-term variation than I used to observe. Still, I think the five columns here can usefully be compared among themselves.

In previous posts I have lamented that this 6800 XT card did not provide nearly as good power efficiency as I had hoped. I amend that conclusion: by setting a reasonable GPU clock maximum, I see power efficiency considerably better than I have ever observed before. In particular, it beats the XFX-brand 5700 card running at -35% power limitation with the mining BIOS selected, by a handsome margin.

The slider allows GPU clock max settings down to 1155. I suspect the voltage comes down very little below 1600, so the additional power saving would be smaller, but production would probably still be substantial--so that might be my summer setting. The slider allows the GPU clock maximum to be set as high as 2800. I seriously doubt the card would give correct results anywhere near that high. I suppose the power penalty for going up from default would be substantial, and the task completion benefit small. I don't plan to look up there any time soon.

I currently intend to run for some days with a GPU core clock maximum setting of 2000. I estimate this will save me about 70 watts of system-level power consumption compared to default, while reducing my task completion rate by less than 3%.
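
That estimate follows from a simple linear interpolation between the 1,900 and 2,100 columns of the table above (an approximation, as the real power curve is not exactly linear):

```python
# Back out the "~70 W for <3%" estimate by linearly interpolating
# the 1,900 and 2,100 MHz columns to a 2,000 MHz cap.
def lerp(x, x0, x1, y0, y1):
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

wall_2000 = lerp(2000, 1900, 2100, 236.7, 266.5)   # ~251.6 W at the socket
tasks_2000 = lerp(2000, 1900, 2100, 638.9, 651.5)  # ~645.2 tasks/day

print(f"saving vs default: ~{318.4 - wall_2000:.0f} W, "
      f"throughput loss: {(1 - tasks_2000 / 660.4) * 100:.1f}%")
```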

mmonnin
Joined: 29 May 16
Posts: 291
Credit: 3,311,749,874
RAC: 456,819

Well, if GPU core clocks aren't the real limitation on E@H GRP tasks, how does adjusting memory clocks fare? E@H has typically responded well to memory clocks and was always the most similar to mining in that sense compared to other BOINC projects.

Keith Myers
Joined: 11 Feb 11
Posts: 4,894
Credit: 18,432,531,334
RAC: 5,726,248

Still works for Einstein. Both GRP and GW tasks shuttle quite a bit of information between the CPU and the GPUs, though not so much for GRP compared to GW. It still benefits: the faster the memory transfers happen, the faster the GPU can crunch the data.

archae86
Joined: 6 Dec 05
Posts: 3,156
Credit: 7,177,094,931
RAC: 739,059

mmonnin wrote:
how does adjusting memory clocks fare? 

The MSI Afterburner range for Memory Clock on my RX 6800 XT ran from the default of 2000 up to 2150. That's right: there was no means offered to slow the memory clock down.

Leaving the GPU clock maximum set to my current cruising level of 2000, I tried 2050, 2100, and 2150 on the Memory Clock.

2050 appeared to work just fine, with a small decrease in task elapsed time, and small increases in GPU Memory Power and GPU Memory Junction Temperature. 

There were disconcerting transients when I asked for changes. The most obvious, but probably harmless, one was that the GPU fan speed would coast down in a linear ramp for several seconds. For example, I recall seeing it drop from 1800 to about 1300 rpm, then almost instantly pop back up to about 1850 before settling down to more normal behavior.

But the truly troubling cluster of behaviors happened when I raised the limit from 2100 to 2150.  Not only did GPU fan speed ramp down, but instantly the reported GPU power consumption dropped by tens of watts.  This was a bit puzzling, as the reported GPU clock rate and GPU memory clock rate did not drop.

Unless a miraculous lower power state had been entered, this sounded bad. Within a couple of minutes it was clear that my tasks in progress were barely progressing at all. I tried backing down, looking for the edge of this behavior, and thought that perhaps at a 2120 max I no longer saw the full deadly loss of performance. But at that speed there were still periods of anomalously low power consumption, so I figured I was near the boundary of the death zone.

I contemplated running with the memory max set to 2070, and did a comparison run. Similar tasks did indeed improve from an elapsed time of 8:58 at the default 2000 to 8:45 when run at 2070. I did not get a power number, but I think it did not increase much, so power efficiency was probably modestly improved.

But this is overclocking, a procedure I used to pursue with enthusiasm but have shied away from in recent years. And the deadly behavior I observed at 2150 is not far enough removed from, say, 2070 to make me feel the extra risk is worth the slight benefit.

Your card may well differ, as may your appetite for risk.  I don't really know what is going on here.  But it appears that my sample of this card does not have a lot of operating margin when running Einstein GRP to use faster memory clock.

I'm back to cruising at 2000 GPU clock max, 2000 GPU memory clock max.

Tom M
Joined: 2 Feb 06
Posts: 6,258
Credit: 8,901,653,658
RAC: 10,033,243

Archae86,

Are you running 3 Gamma-Ray tasks or 4 GR tasks on your RX 6800 XT?

If it is 4X, it looks like the GR production could be north of 2,000,000 RAC once you have run it long enough.

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

Tom M
Joined: 2 Feb 06
Posts: 6,258
Credit: 8,901,653,658
RAC: 10,033,243

If the price of Big Navi cards ever begins to track MSRP, then it sounds like anyone not running a Radeon VII or the "other" high-end cards could benefit from an upgrade.

This probably means almost no one in the top 50 will upgrade :)

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

archae86
Joined: 6 Dec 05
Posts: 3,156
Credit: 7,177,094,931
RAC: 739,059

Tom M wrote:

Are you running 3 Gamma-Ray tasks or 4 GR tasks on your RX 6800 XT?

If it is 4X, it looks like the GR production could be north of 2,000,000 RAC once you have run it long enough.

The card has been running 4X GRP GPU tasks most of the time since February 24th. There was a considerable interruption when I tested running GW tasks, and some slight impact during the time I tried adjusting Memory Clock limits.

At current validation rates (a bit over 99%), my usual fraction of time offline (mostly running Windows Update), and the current GRP task behavior, the average production of my case transplant 6800 XT machine is definitely somewhat over 2,000,000 credits/day.  It would have been somewhere pretty near 1,500,000 credits/day with the GRP tasks seen before the big credit inflation of a few weeks ago when the 3000-series tasks showed up in late January 2021.

I think it produces moderately less credit than my VII did (correcting for differences in the contemporaneous credit awards), and is moderately more power-efficient when run at my current maximum GPU clock frequency limitation of 2,000 MHz.

Being more power-efficient than the VII at both the card and the system level is a big deal, as the VII was the best I had seen. (While it was my most power-efficient and most productive card, in my service it was also my least well-behaved, and I gave up trying to keep it running after less than a year. Its purchaser found it reliable when down-clocked a bit, so perhaps I would have loved it had I tried that.)

Price and availability are another kettle of fish, however. Also, XFX is muddying the waters by providing what appear to me minor variations on the card I have. A few weeks ago I started seeing the "Core" MERC 319 6800 XT (68XTALFD9), whereas mine is the "Black" MERC 319 6800 XT (68XTACBD9).

Even more recently I've seen "QICK" 319 instead of MERC 319 on some 6800 product from XFX.

Then, of course, there are the several other vendors, each with differing card designs. Very high reported memory temperatures have been a common concern since the first Navi 10 cards, and have been reported here as an even worse problem on at least one 6900 XT card. So for the time being I'm inclined to stick to the MERC 319 models, which, based on the reported temperatures on my card, seem to have paid a bit more attention to memory cooling than the AMD reference design. I'd want to see reviews of an alternate model specifically praising good memory cooling before considering another.

Value for money is very, very much a problem at the moment. Market prices are way up, supply is very limited, and people are trying to resell the few cards that trickle out. Today the only reseller advertising my card on the Amazon marketplace wants $2,300 for it, and the lowest eBay non-auction price is $1,500. It is hard to make a fair comparison with alternatives. Even used 5700 (non-XT) cards seem to be fetching well over their original purchase price, with auctions currently attracting bids over $600. I believe availability and pricing for good NVidia cards are also quite bad.

archae86
Joined: 6 Dec 05
Posts: 3,156
Credit: 7,177,094,931
RAC: 739,059

I imprudently overpaid to get a PowerColor Fighter branded RX 6800. In my specific circumstances, this model had the compelling advantage of fitting in my case, with a stated length of 300 mm--40 mm less than the "too big for my cases" XFX MERC 319 models. While it probably has somewhat inferior cooling performance to the MERC 319 models, I have concluded that the cooling capability of the MERC 319 6800 XT which I own is overkill for my application. So it was a good trade to get a card which would fit in my box and which I hoped still had adequate cooling. Personally I liked that it lacked LEDs and some other bling. In theory it is a "value" card. Good luck with that.

Initial indications are promising.  I skipped intermediate steps, and started right up at default settings and 3X multiplicity on current Einstein GRP (3003...) tasks.  Those gave elapsed times of about 6:40, though the temperature was getting a bit higher than I liked. 

While I have had good luck controlling fan speed and maximum clock speed of my XFX 6800 XT from MSI Afterburner, I have so far failed to usefully control either on my PowerColor 6800 from that platform. So I went directly into the AMD Radeon software, and found it much more to my liking than when I gave up on it in frustration a couple of years ago. Maybe it has changed, maybe I just got lucky.

Anyway, I asked for a 2000 MHz maximum GPU clock and set a manual fan curve much more aggressive than the provided default, and was richly rewarded. Both clock speed and voltage came down, so there was a big power reduction. Elapsed time only went up to about 6:53, while current GPU temperature is reported at 52C and memory junction temperature at 66C, at a quite tolerable fan speed of 1600 rpm. Reported GPU power consumption is 138 watts. Actual measured power for the whole PC box at the wall socket is about 240 watts.

If I can wean myself from my historic preference for MSI Afterburner, and learn how to get the AMD settings to "stick" on reboot, this is at the moment looking very promising.

Now if only these cards were actually available from primary vendors for anywhere near the originally stated prices!  I'm afraid I don't expect that situation to get better very soon.  In fact the aftermarket pricing got worse in the last month.  Yes, I paid too much.  Don't ask.

In the next little while I shall probably be tinkering, but if you want to look for yourself, here is the link to my 6800 box.  It immediately previously held two 5700 cards, one power limited to -40%, and the other to -47%.  So, good as the 6800 seems to be, the RAC will be coasting down a bit.  

This card is so able to cool itself running "my way" that I'm strongly tempted to try running it as a two-card combination with a 5700 in the lower slot. Not before tomorrow.
