AMD Radeon 6000 series (Big Navi)

Tom M
Joined: 2 Feb 06
Posts: 6529
Credit: 9631778163
RAC: 2859554

solling2 wrote:

Tom M wrote:

...

Or you could spend $20 (inflation may have driven them up to $50, though) on a small mining rack :)

...

A PCIe riser card would let you keep the running system at the lowest cost, wouldn't it? :-)

Good point. It can be picky/tricky to get it running reliably, though.

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

Andrew Petkin
Joined: 1 May 06
Posts: 7
Credit: 584150460
RAC: 0

I have 12 x GTX 1060 running 24/7 with risers.

Tom M
Joined: 2 Feb 06
Posts: 6529
Credit: 9631778163
RAC: 2859554

Andrew Petkin wrote:

I have 12 x GTX 1060 running 24/7 with risers.

Congratulations.

And congratulations on being a fan of AMD CPUs. Those AMD FX-8320E eight-core chips were impressive CPUs for their generation.

For years I ran GTX 1060 cards on SETI@home. They gave very good bang for the buck!

Tom M

 

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

archae86
Joined: 6 Dec 05
Posts: 3160
Credit: 7255255683
RAC: 1451403

My guts-transplanted system, which until now ran a 5700 at 3X, is now running a 6800 XT.  I ran roughly 20 units at 1X and have gotten some validations.  2X running has just started.  I'll run that just long enough to get a stability indication plus productivity and power numbers, then move on to 3X.

transplant survivor 6800 XT system

As I carried on a bit about the very large size of the card I chose (an XFX MERC319 model stated to be 340mm long), I should add that even the Corsair Carbide 200R case, which claims 430mm card-length capability, actually could not quite handle the thickness of this card, which conflicted with the top of the small SSD cage perched atop the 4-bay HDD cage.  Happily, the entire SSD cage and the left side of the HDD cage were a single injection-molded piece, held in the case by just four screws and three large tabs slid into slots in the case bottom.  A few minutes with a coping saw and a reciprocating saw got rid of the SSD cage portion, and things fit nicely now.

I'll post some actual observations tomorrow.  For the moment I'm just relieved that the machine survived the transplant procedure.

archae86
Joined: 6 Dec 05
Posts: 3160
Credit: 7255255683
RAC: 1451403

The guts-transplanted 6800 XT system now has some run time on 1X, 2X, and 3X Einstein GRP work.

It uses a lot of power, but so far keeps the reported GPU and GPU-memory temperatures under satisfactory control.  It gives a huge production benefit to multiplicity increases above 1X.  Reported memory-controller utilization runs above reported GPU utilization.

param             1X         2X         3X
tasks/day         362.1      516.5      596.2
credit/day        1,242,611  1,772,152  2,045,771
GPU_Util          56.8%      80.5%      92.1%
MemCon_Util       63.7%      85.2%      94.9%
GPU_temp          79.0C      80.0C      81.1C
MemJun_temp       80.6C      84.1C      86.0C
card_power (W)    159.7      203.2      229.1
System_power (W)  232.0      293.3      324.9
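
Dividing the credit/day row by system power gives a rough system-level efficiency figure, which bears on the power-productivity comparison later in the thread. A quick sketch using only the numbers in the table above:

# Credit/day per system watt, straight from the table above.
credit_per_day = {"1X": 1_242_611, "2X": 1_772_152, "3X": 2_045_771}
system_watts   = {"1X": 232.0, "2X": 293.3, "3X": 324.9}

for mult, credit in credit_per_day.items():
    print(f"{mult}: {credit / system_watts[mult]:,.0f} credit/day per watt")
# 1X: 5,356   2X: 6,042   3X: 6,297

So most of the efficiency gain comes from the 1X-to-2X step.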

My 5700 cards of two different models reported distressingly high memory-junction temperatures from the beginning.  I'm pleasantly surprised that this particular aftermarket model of 6800 XT (the style that XFX brands as MERC319) seems to have paid somewhat more attention to memory cooling.

While I intend to try 4X, the rapidly rising Memory controller Utilization numbers suggest I am approaching a ceiling.

The power consumption is higher than I'd like.  I intend to try the power limit setting in MSI Afterburner, in hopes of cutting power by more than it cuts production.

All the tasks were from the 3002 batch of unusually high-paying (short-running) Gamma-Ray Pulsar work.  Thus the credit/day numbers are not properly comparable to the current reported RAC of any system on Einstein: even hosts running GRP 24/7 still have their RAC depressed somewhat by the memory of the lower-paying work that came before the 300n tasks.
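
For anyone wondering how long that memory lasts: BOINC's RAC is an exponentially decaying average with a one-week half-life, so roughly half of the remaining gap to the new credit rate closes each week. A hedged sketch of the idea, using rates from the table above as illustrative inputs:

import math

HALF_LIFE_DAYS = 7.0  # BOINC decays RAC with a one-week half-life

def update_rac(rac, credit_per_day, dt_days=1.0):
    # Decay the old average, then blend in the recent credit rate.
    w = math.exp(-dt_days * math.log(2) / HALF_LIFE_DAYS)
    return rac * w + (1 - w) * credit_per_day

rac = 1_242_611.0        # suppose the host had settled at the 1X rate
for _ in range(14):      # then runs two weeks at the 3X rate
    rac = update_rac(rac, 2_045_771.0)
print(f"{rac:,.0f}")     # ~1,844,981 -- still well short of 2,045,771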

Following up on a post footnote from cecht, I found the means to generate this BBCode table from my tab-delimited text file at The Enemy table converter.
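
For anyone who would rather script that step, the conversion is only a few lines. A minimal sketch, assuming the forum accepts plain [table]/[tr]/[td] BBCode; the input file name is hypothetical:

# Minimal sketch: tab-delimited text file -> BBCode table markup.
def tsv_to_bbcode(path):
    rows = []
    with open(path) as f:
        for line in f:
            cells = line.rstrip("\n").split("\t")
            rows.append("[tr]" + "".join(f"[td]{c}[/td]" for c in cells) + "[/tr]")
    return "[table]\n" + "\n".join(rows) + "\n[/table]"

print(tsv_to_bbcode("grp_results.tsv"))  # hypothetical input file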

Tom M
Joined: 2 Feb 06
Posts: 6529
Credit: 9631778163
RAC: 2859554

Hurray!!!

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

archae86
Joined: 6 Dec 05
Posts: 3160
Credit: 7255255683
RAC: 1451403

The GPU and memory-junction temperatures I posted in my comparison table at 16:36 UTC on February 23 are an artifact of a situation in which the version of MSI Afterburner I was running, configured as I had it, pegged the commanded GPU fan speed at 34% regardless of conditions.  I finally installed version 4.6.3 Beta 5; the fans promptly roared, and the GPU temperature plunged over 15C in the first three minutes.  After stumbling about a bit, I currently have my 6800 XT responding to my user fan curve, settling in current conditions to a GPU temperature of 64C and a memory-junction temperature of 72C while running 4X Einstein Gamma-Ray Pulsar tasks at 0% power limitation.

In short, this version of the 6800 XT card appears to have plenty of cooling capacity.

In other news, I tried 4X and 5X multiplicity. 

4X added 4.3% output production to my 3X observation.
5X added a further 0.4%, which is down near the observational noise, and had very slightly degraded power productivity compared to 4X.

Memory controller utilization averaged 99.0% at 4X, and 99.5% at 5X. 

Now that I am happier with temperatures and fan control, I tentatively think 4X operation is probably my standard for Einstein GRP.

I plan to do a little Gravitational Wave running.  Sadly, I don't think I can give more than a very crude indication of productivity, as the variability in the work content of GW tasks is substantial and does not follow a reliable pattern.  Also, my host CPU is not the screaming beast you'd want to give the best support to a fast card running the CPU-hungry Einstein GW GPU application.

After the GW exercise, I plan to return to GRP.  With a baseline established using a fan curve I like, I'll attempt to improve power efficiency via the Power Limit parameter; I hope I find some improvement.  Despite the generally impressive performance and good behavior of this card, in my system it currently has slightly worse system-level power productivity than the same system had with a 5700 (non-XT) card.  That is not what I hoped.
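
A side note for anyone attempting the same on Linux, where MSI Afterburner isn't available: AMD's rocm-smi utility can cap board power. A hedged sketch; the exact flags vary by ROCm release, so check rocm-smi --help first:

sudo rocm-smi -d 0 --setpoweroverdrive 200   # assumed flag: cap card 0 near 200 W
rocm-smi -d 0 --showpower                    # confirm the reported power draw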

Tom M
Joined: 2 Feb 06
Posts: 6529
Credit: 9631778163
RAC: 2859554

archae86 wrote:
Now that I am happier with temperatures and fan control, I tentatively think 4X operation is probably my standard for Einstein GRP.

I am currently running my RX 5700s at 1/3 of a CPU per GPU task (see the app_config sketch below).

That might free up some CPU threads to run GW or other CPU tasks.  I like World Community Grid for its diversity of projects.

On another machine, I believe I am running the default 0.9 CPUs per GPU task for GW work (RX 570/RX 580).

The GW application appears to be much more sensitive to the CPU parameter than the GR tasks are.
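
For reference, both the multiplicity and the CPU reservation per GPU task can be set with an app_config.xml in the Einstein@Home project directory. A minimal sketch, assuming the Gamma-Ray Pulsar GPU app is named hsgamma_FGRPB1G (verify the exact name in your client's event log or client_state.xml):

<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>   <!-- assumed app name; verify locally -->
    <gpu_versions>
      <gpu_usage>0.25</gpu_usage>  <!-- 1/4 of a GPU per task = 4X multiplicity -->
      <cpu_usage>0.33</cpu_usage>  <!-- reserve ~1/3 of a CPU per GPU task -->
    </gpu_versions>
  </app>
</app_config>

After editing, use the BOINC manager's "read config files" option (or restart the client) for the change to take effect.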

archae86 wrote:
in my system it currently has slightly worse system-level power productivity than the same system had with a 5700 (non-XT) card.  That is not what I hoped.

I am sorry to hear that.  I have the impression you are aiming for more productivity per watt than I am.

On the other hand, you are getting more GPU tasks processed on that machine than before. :)

Tom M

 

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

DanNeely
Joined: 4 Sep 05
Posts: 1364
Credit: 3562358667
RAC: 0

archae86 wrote:

archae86 wrote:
I took delivery today on the specific XFX 6800 XT card I had my eye on for weeks. (XFX RX-68XTACBD9)
<snip>
If it also will not fit there I'll need to ponder my next move.

Grumble.  I looked inside the second case, and I don't see anything like 2 inches of spare room beyond the current GPU.

Finally, I found the model numbers and claimed capacities for my two cases:
Antec Gaming Series Three Hundred Two: 318mm max video card length
Corsair Carbide Series 400R: 316mm max GPU length

So neither one gives me any reason to expect my 340mm card to fit.  I intend to look at both cases with a view to the alarming possibility of cutting into the HDD cage to make a recess for the end of the graphics card.  I don't know whether there is other interfering structure, nor whether the card cage contributes enough structural stability to the overall case for this to be a bad idea.

I also don't know a great way to make the cuts, though I hope my Milwaukee aircraft snips might just do the job without making metal dust.

Failing that, the main options are to buy a new case and transplant my current contents, or to sell this nice card on eBay.  As I have already opened the box and peeled off the cosmetic protective film, I'd have to call it used, but I suspect that in the current GPU mania I could recover a large fraction of what I spent.

 

FWIW, I've removed HDD cages from several old cases when GPUs grew beyond the length of an ATX mobo, and never had any problems.  The ones I did surgery on had riveted cages; I abused a flat-blade screwdriver as a chisel to shear the rivets off, then mounted the HDD either in the floppy slot or, with an adapter, in a 5.25" bay.
