Discussion Thread for the Continuous GW Search known as O2MD1 (now O2MDF - GPUs only)

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

cecht wrote:
... If it's a matter of limited PCIe bandwidth, as Zalster suggested, then I (we) may be stuck because all the consumer Intel CPUs that I've looked at list  "Max # of PCI Express Lanes" as 16 and "PCI Express Configurations: Up to 1x16, 2x8, 1x8+2x4".
Intel's i9 X-series of CPUs have 36+ maximum PCIe lanes, but require a motherboard with a LGA2066 socket.

 

That's why I always preferred the i7 or i9 X series: the 5960X, 6950X or even the 9980XE, all with 40 or 44 PCIe lanes. For the 5960X I used Asus X99 boards with PLX chips (more complicated to explain here), but they presented x16 to all 4 slots on those boards. Of course, now with AMD's Threadrippers you can get those high PCIe lane counts without PLX chips on the board. Keith is the best I know on AMD chips. Sorry, off topic.

 

Edit...

I just switched over a machine to do GW-GPU but only 1 at a time.  Maybe next week I'll have time to play with it for multiples.

Keith Myers
Joined: 11 Feb 11
Posts: 4964
Credit: 18752370377
RAC: 7121254

It's still somewhat dependent on the mobo vendor and how they want to use all the PCIe lanes in Threadripper CPUs. I see too many "bling" designs for my taste at the expense of connectivity. There's no reason TR can't support multiple PCIe slots at x16, but too many boards stop at 3 slots. Very few have 4 double-wide slots to support 4 cards.

And they still typically hamstring the slots to x16/x8/x16/x8 when they could all be x16 if they didn't use up a lot of lanes on things like USB, RAID, NVMe and SATA.
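For a rough feel of the lane math behind that complaint, here is a minimal sketch in Python. The reserved-lane counts and slot layout are illustrative assumptions, not any specific board's spec sheet.

```python
# Rough sketch of PCIe lane budgeting on a hypothetical Threadripper board.
# All counts here are illustrative assumptions, not any vendor's spec sheet.

CPU_LANES = 64                      # assumed usable lanes from the CPU socket
RESERVED = {                        # lanes the vendor spends on onboard extras (assumed)
    "chipset_link": 4,
    "two_x4_m2_slots": 8,
    "extra_usb_raid_sata": 4,
}

slot_layout = [16, 8, 16, 8]        # the x16/x8/x16/x8 layout mentioned above

available = CPU_LANES - sum(RESERVED.values())
needed = sum(slot_layout)

print(f"Lanes left over for slots: {available}")   # 48 with these assumptions
print(f"Lanes this slot layout needs: {needed}")   # 48
print("Layout fits" if needed <= available else "Layout doesn't fit")

# Four full x16 slots would need 64 lanes, which is why boards that spend
# lanes on extra M.2/USB/RAID end up dropping some slots to x8.
```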

 

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519315189
RAC: 13902

Keith Myers wrote:

It's still somewhat dependent on the mobo vendor and how they want to use all the PCIe lanes in Threadripper CPUs. I see too many "bling" designs for my taste at the expense of connectivity. There's no reason TR can't support multiple PCIe slots at x16, but too many boards stop at 3 slots. Very few have 4 double-wide slots to support 4 cards.

And they still typically hamstring the slots to x16/x8/x16/x8 when they could all be x16 if they didn't use up a lot of lanes on things like USB, RAID, NVMe and SATA.

 

Unless you use watercooling or PCI-E extension ribbons, you need triple width slots for GPUs so the fans can get air in.  I would guess most people buying the boards just want to link 2 or 3 together for gaming and don't bother with the fancy cooling and wiring.

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Peter Hucker wrote:
Unless you use watercooling or PCI-E extension ribbons, you need triple width slots for GPUs

Keith does custom watercooling. I run hybrids so both of us can sandwich in those 4 cards on a 4 slot MoBo.

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519315189
RAC: 13902

Zalster wrote:
Peter Hucker wrote:
Unless you use watercooling or PCI-E extension ribbons, you need triple width slots for GPUs

Keith does custom watercooling. I run hybrids so both of us can sandwich in those 4 cards on a 4 slot MoBo.

I used to watercool things (for quietness), but it turned out to be a lot of hassle and expense. And as far as Boinc's concerned, I'd rather spend the money on more chips than on cooling. I guess it depends on whether you're using second-hand or new cards, how many GPUs and how many PCs you want, and whether you want them packed compactly inside the cases or spread out across the desk as I do! Mind you, I've found that these riser things aren't that reliable, even the good ones. Boinc only runs without crashes if I have at most one card connected with a 1x-16x adapter; the rest have to be plugged straight in or on x16 ribbons.
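If you want to see what link each card actually negotiated through its riser, here is a minimal sketch, assuming NVIDIA cards and that the driver's nvidia-smi tool is on PATH:

```python
# Print the PCIe link each NVIDIA card actually negotiated, to spot a riser
# that trained at x1 or at a lower PCIe generation than expected.
# Assumes NVIDIA GPUs and that the driver's nvidia-smi tool is on PATH.
import subprocess

FIELDS = "index,name,pcie.link.gen.current,pcie.link.width.current,pcie.link.width.max"

result = subprocess.run(
    ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.strip().splitlines():
    print(line)   # e.g. "0, GeForce RTX 2080, 3, 16, 16" -> a gen3 x16 link
```

A card on a 1x adapter should report a current width of 1 even though its max is 16, which is one way to confirm what the adapter is actually delivering.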

I'm just surprised they make motherboards where you need to use watercooling to be able to use all the slots.  Maybe they're aiming them at enthusiasts.  I suppose I could use one with 4 ribbons, plus one 1x-16x riser.  So I could get 5 in one machine.  My best machine at the moment has 3 - two ribbons and one 1x-16x riser.

I'm looking at your photo and see you have a watercooler with just one 120 or 140mm fan cooling each card? How is that enough?

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

I used to run 5 machines, all with 4 cards in them. But I started to break them up into 2 or 3 card machines as I was overloading the circuit breakers in my house. That has worked out better for me and allows me to move more machines between projects and not worry about burning down the house...lol

Keith Myers
Joined: 11 Feb 11
Posts: 4964
Credit: 18752370377
RAC: 7121254

Hybrid GPU cards use both a blower fan on the VRMs and memory and a 120mm radiator and fan on the GPU die. Hence hybrid cooling.

A 120mm radiator can dissipate 250W of heat easily, and most of the heat from a GPU comes from the die. My hybrid coolers keep the cards in the 32-45°C range even when they are butt to face next to each other. Granted, the middle cards in the sandwich have the worst of it, with only a 1/8" gap to pull air into the blower fan, so they will always run a few degrees hotter than the card in the top slot or the bottom card on the outside of the stack, which have no intake air restriction.

Custom water cooling gets rid of the air-cooler heatsink and usually provides a single-slot-width cooling plate. No airflow is needed over the card itself then.
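As a sanity check on the 250W figure, here is a quick back-of-the-envelope air-side energy balance; the fan airflow is an assumed typical value, not a measurement of any particular cooler:

```python
# Back-of-the-envelope check on "a 120mm radiator can dissipate 250W easily".
# Air-side energy balance: Q = rho * V_dot * c_p * dT.
# The fan airflow is an assumed typical figure, not a measured one.

RHO_AIR = 1.2            # air density, kg/m^3
CP_AIR = 1005.0          # specific heat of air, J/(kg*K)
CFM_TO_M3S = 0.000471947 # 1 cubic foot per minute in m^3/s

fan_cfm = 70.0           # assumed airflow through the 120mm radiator
heat_w = 250.0           # heat load coming off the GPU die

v_dot = fan_cfm * CFM_TO_M3S       # volumetric flow, m^3/s
m_dot = RHO_AIR * v_dot            # mass flow, kg/s
delta_t = heat_w / (m_dot * CP_AIR)

print(f"Exhaust air only needs to run ~{delta_t:.1f} °C above intake")
# Roughly 6 °C with these numbers, so a single 120mm radiator has plenty of
# headroom for a 250W die as long as the coolant stays warmer than intake air.
```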

 

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3958
Credit: 47042942642
RAC: 65090295

I don't mind the custom watercooling; it gives great results if you plan and execute it properly. I have 7 cards plugged directly into a motherboard with [7] x16 (physical) slots by getting the cards down to single-slot width with waterblocks on the GPUs. It's an ASUS P9X79-E WS with an E5-2667v2 CPU. That CPU only has 40 PCIe 3.0 lanes, but thanks to a couple of high-end PLX switches embedded in the motherboard, I can get [x16/x8/x8/x8/x16/x8/x8] to the cards, getting me 72 lanes out of it. Really nice to get that much power in a small space with good temps, but the gear is niche and needed a lot of planning, modification, and expense to get the right parts. All the heat is dumped outside via a window-mounted radiator. 7x RTX 2080s and they only get up to 50-60°C, in a warm room with several other systems running.
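A minimal sketch of the lane arithmetic in that setup, under the simplified assumption that all 40 host lanes feed the PLX switches:

```python
# Sketch of the lane arithmetic behind a PLX-switched board, using the
# P9X79-E WS numbers above. Treating the uplink as one shared pool of
# host lanes is a simplification.

host_lanes = 40                            # PCIe 3.0 lanes from the E5-2667 v2
slot_widths = [16, 8, 8, 8, 16, 8, 8]      # electrical width each card trains at

downstream = sum(slot_widths)
print(f"Downstream lanes presented to the cards: {downstream}")   # 72
print(f"Upstream lanes back to the CPU:          {host_lanes}")   # 40
print(f"Oversubscription ratio: {downstream / host_lanes:.1f}x")  # 1.8x

# Every card links at its full slot width, but when several cards transfer at
# once they share the switch uplinks, so aggregate bandwidth to the CPU is
# still capped by the 40 host lanes.
```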

I run GPUGRID on this system, where PCIe bandwidth matters. It doesn't matter for Einstein. It might have mattered more in the past, but I've never seen more than about 1-2% PCIe use on the tasks that are being sent now.

My other systems are on air cooling, in a mining-style open-frame rack using PCIe ribbon risers to space the cards out. If it's just for Einstein you can get by with the USB risers, but I'm focusing more on GPUGRID and running Einstein as a backup, so I want to keep the increased bandwidth.

 

I know Keith and Zalster like the hybrid cards, and they're a good plug-n-play solution if you want to drop 4 cards inside a closed system. Just make sure you have a case that can accommodate the 4 120mm radiators.


Keith Myers
Joined: 11 Feb 11
Posts: 4964
Credit: 18752370377
RAC: 7121254

First off, houses in the US run on 120V single-phase power. The max you can pull from a standard 15A breaker is 1800W.
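To put that 1800W in perspective, here is a rough per-circuit budget; the 80% continuous-load derating and the per-card and host wattages are assumptions for illustration, not measurements of anyone's rig:

```python
# Rough budget for how many loaded GPUs fit on one US 15A/120V circuit.
# The derating factor and the per-card/host wattages are assumptions.

VOLTS = 120
BREAKER_AMPS = 15
CONTINUOUS_DERATE = 0.8            # common rule of thumb for continuous loads

circuit_watts = VOLTS * BREAKER_AMPS                # 1800 W peak
usable_watts = circuit_watts * CONTINUOUS_DERATE    # ~1440 W continuous

per_gpu_watts = 300                # assumed draw per card under compute load
host_overhead_watts = 150          # assumed CPU, drives, fans, PSU losses

for gpus in range(1, 6):
    draw = gpus * per_gpu_watts + host_overhead_watts
    verdict = "fits" if draw <= usable_watts else "over budget"
    print(f"{gpus} GPU(s): ~{draw} W -> {verdict}")

# With these numbers a 4-card box (~1350 W) just squeezes in, but a fifth
# card, or a second box on the same circuit, pushes past the breaker's
# continuous rating, which matches the breaker-tripping experience above.
```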

The hybrid cards use the standard cheap AIO coolers that are common for CPUs. They are completely self-contained and require no maintenance, unlike a custom loop, which is expensive and needs maintenance.

You get the benefits of custom water cooling at a fraction of the cost, and with no more maintenance than air cooling.

Custom water blocks do in fact cool all the parts of the card: VRMs, memory and the die. But they are much larger than the typical AIO cold plate, which is only big enough to cover a CPU IHS, and they cost 10X more than the AIO hybrid cooling solutions.

A 240mm AIO for CPU cooling can dissipate 500W.

 

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Juan and I run hybrids. Keith has custom loops on some of his systems; I think he might have a few hybrids too.
