I recently changed the GPU cards in one of my boxes and installed dual identical RX 580 cards, overclocked using Afterburner. I am noticing that my primary card gets hit really hard when I am running 1.22 Gamma-ray and 1.46 MW work units. I only compare the cards when the box is running the same type of w/u on both. There is close to a 10 °C difference in temps (70 °C vs 60 °C), and the fan curves are set to hit 100% at 65 °C. I tried reducing the overclock on the primary GPU back to almost stock levels (1375 MHz); it did not help, and the second card is doing just fine at 1425 MHz. I have CrossFire disabled, and the utilization factor is set to 1 CPU and 1 GPU for both types of w/u. The 2.07 GW w/u only give me about a 3 to 4 °C difference, which I can live with. I have another box with dual GTX 760s that doesn't have any issues.
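In case it matters, the equivalent local setting lives in an app_config.xml in the project directory. Here is roughly what mine would look like (the app name hsgamma_FGRPB1G for the gamma-ray tasks is an assumption; check client_state.xml for the actual names on your box):

    <app_config>
      <app>
        <name>hsgamma_FGRPB1G</name>  <!-- app name assumed; verify in client_state.xml -->
        <gpu_versions>
          <gpu_usage>1.0</gpu_usage>  <!-- run one task per GPU -->
          <cpu_usage>1.0</cpu_usage>  <!-- reserve one full CPU core per task -->
        </gpu_versions>
      </app>
    </app_config>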
Am I missing a setting somewhere, or is it something else? I am trying to keep my GPU temps below 65 °C. Do I need to? Is 70 °C OK to live with?
Is there a better program than TThrottle to throttle back when temps get too high? I seem to have issues with it on the box with the dual RX 580 cards: when it throttles, the temps actually seem to go higher.
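What I want is essentially this loop. This is only a rough sketch: the temperature read is a hypothetical placeholder for whatever sensor tool works, while --set_gpu_mode is a documented boinccmd switch:

    # Rough sketch of a temperature-based GPU throttle for the BOINC client.
    # Assumes boinccmd is on the PATH; read_gpu_temp() is a placeholder.
    import subprocess
    import time

    TEMP_LIMIT_C = 70     # suspend GPU work above this temperature
    PAUSE_SECONDS = 300   # how long to keep GPU computation suspended

    def read_gpu_temp() -> float:
        """Placeholder: return the hottest GPU temperature in degrees C.
        Replace with a call to your sensor tool of choice."""
        raise NotImplementedError

    while True:
        if read_gpu_temp() >= TEMP_LIMIT_C:
            # Tell the BOINC client to stop GPU work for a while;
            # it returns to its normal mode after PAUSE_SECONDS.
            subprocess.run(["boinccmd", "--set_gpu_mode", "never",
                            str(PAUSE_SECONDS)], check=True)
        time.sleep(60)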
Thanks
The problem most of the time
The problem most of the time is the distance between the cards: less airflow over one of the cards leads to higher temps. If the side panel is off, you could point a desk fan at the cards; if it is on and there is a place in the side for a fan, that could help. Most motherboards put the 16x PCIe slots right next to each other. I'm guessing you game, since you have the CrossFire cables installed; if not, just remove them and see if there is an 8x slot you can use instead, as crunching does not require a 16x slot. A 4x slot will slow things down, but an 8x slot will not.
Hi Mikey, The side cover
Hi Mikey,
The side cover is off the computer. There is a little more room under the second card than there is between the two cards. If I use one GPU slot, it runs at 16x; with two slots, both default to 8x. I can go into the BIOS to see if I can set the slots to run at 4x. I do not game; the boxes are mostly used for crunching. The new RX 580 boards don't have an external CrossFire bridge connector, so it must go through the motherboard; I have it disabled in the Radeon Adrenalin software. Any idea whether enabling CrossFire would help or hurt the situation? FurMark, Heaven, and Kombustor never loaded up the GPU card enough to reach the temps I am getting when running the 1.22 Gamma-ray w/u.
Thanks.
From other sources I get the
From other sources I get the impression that running CrossFire usually doesn't help and could hurt for crunching.
Tom M wrote: From other
That is true. At this point in time BOINC has no programming to use more than one GPU to crunch a single work unit, and hooking the cards together can cause issues.
Ron Kosinski wrote: Hi
Switching to 4x would slow down the data transfer between the PC and the GPU, and that means slower-running work units.
The Gravitational Wave work units can use up to 4 GB of GPU memory, while the Gamma-ray tasks run just fine on a GPU with 2 GB of onboard memory, so yes, the Gamma-ray tasks should run a bit cooler.
Enabling CrossFire will not help and, yes, could hurt.
mikey wrote: The
It's actually working the other way: the 1.22 Gamma-ray w/u temps run about 5 °C higher than the 2.07 GW and 1.46 MW w/u. It's ironic; I added the RX 580 cards because my old cards (GTX 760s) kept crashing on the GW w/u.
I remember reading in a post somewhere that I may need to add a line to my cc_config file, in effect saying "use all GPUs", but I think that was in the context of running different GPU cards in the same box.
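For reference, I believe the line in question is the use_all_gpus option in cc_config.xml, which only matters when the client ignores the less capable of two dissimilar cards. A minimal sketch:

    <cc_config>
      <options>
        <!-- Use every GPU, not just the most capable one.
             Only needed with dissimilar cards; identical RX 580s
             should both be used by default. -->
        <use_all_gpus>1</use_all_gpus>
      </options>
    </cc_config>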
crossfire is an ideal
CrossFire is an ideal solution to balance all the workload across more than double the potential.
I've run double card in
I've run double-card setups in closed cases for years, ever since a time when I noticed that Nvidia's then price-vs-performance curve gave a good bit more bang for the purchase-price buck from two medium-price cards than from one higher-end card.
Without paying much attention, I got used to the observation that my "upstream" card would run hotter, perhaps by 5 to 10 °C.
I finally had to pay more attention when I tried running double-card with one of my XFX 570 cards in the lower slot and a shiny new XFX 5700 card in the top slot.
The 5700 card went crazy hot in a hurry. That particular card has a pretty good cooling solution, but is fat.
That particular 570 card ejects a lot of air sideways from the top.
So hot air coming right off the 570 was going down into a tiny card-to-card gap, and the 5700 was not seeing my moderate temperature box air as input.
In my situation, I decided to buy a much thinner 5700 card for my top position. Also, the XFX 5700 I have does not eject hot air sideways toward the next card up to nearly the degree that the 570 card did. It does fine as a lower card, and does not sabotage the other model I use as the upper card to nearly the degree that my particular 570 did.
My personal learnings from this experience were to pay more attention to the direction of air flow coming out of cards I'm interested in running in two-card setups, and to pay more attention to card thickness.
Regarding thickness, I had imagined that pretty much all the interesting capable cards were right about two slots thick, but found that they go up to at least 2.6, and vary quite a bit from model to model. That thickness does not matter much if it is your lowest card and the next thing down is your power supply several inches away, but matters rather more if it is your upper card on a 3-slot spacing.
QuantumHelos
Unless you can provide definitive, specific proof that ANY distributed computing GPU application can use CrossFire (designed for graphics load sharing) for actual distributed computing tasks in a shared-load configuration, I am going to call b&!!$h)T.
Well GPU tasks are
Well, GPU tasks are multitasking, depending on the libraries and the compiler, like BLAST.