New PC build PSU

Phil
Joined: 8 Jun 14
Posts: 579
Credit: 228493502
RAC: 0

RE: The one other thing to

Quote:
The one other thing to think about when running multiple pc's is the heat they will put out!

Since I'll eventually have my own mini-farm, a friend of mine who owns an HVAC company will be helping with the issue of heat in the room.

Quote:
Ok, I accept that you have your reasons and that these cases have their virtues.

As stated earlier, I need a larger case because my hands continue to get worse every year and I sometimes have difficulty working on standard-size PC cases. As for the choice of rack-mount style, my crunchers will live in my ham radio shack. Even though the cases are much larger, I can save space by installing a single rack and using vertical space that is normally wasted in a room.

Quote:
How did you come-to the number 10? Is that the amount of rack-space you have available, or is there some other measure at work?

10 crunchers is an estimated number. The final number of computers will depend on space available. Heat in the room will be handled as above, and power will be supplied by dedicated circuits installed just for the crunchers.

Quote:

Ok, I accept that you have your reasons and that these cases have their virtues.

I see a potential problem.

Even in other cases blowing 200mm case fans in on the front and having one or more on the sides, one on the bottom, and one or more on the top, it's still challenging to keep multiple GPUs cool. The more powerful the GPUs are, the harder it is to keep the case air, and by extension the GPUs, cool.

I'm actually ahead of you on that one (I think, haha). The plan is an incremental build with testing for power consumption and heat as we go along, i.e. build the basic box, test it, then add video cards one at a time, etc. The goal is to find the right combination of components that will work before I go ordering enough parts to heat my house with next winter and then find out it's all wrong!

Quote:
The good power supplies are almost always immediately identifiable by price, unfortunately. You didn't really indicate a desire to save money, so if money is no object, find the expensive stuff and buy it.

While cost is a factor, I'm willing to pay for quality without getting crazy about it. As stated by tbret below, things get out of date quickly. I'm not willing to pay for the latest fire-breathing dragon processor when I can pay much less for it 6 or 12 months from now.

Thank you all and Happy Crunching!

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 311903557
RAC: 124816

My personal preference is for

My personal preference is for Corsair. For me ( anecdotal ) :

- none have failed ( yet ), and some are still going strong after 5+ years.

- none have been damaged despite all the mains supply variance I get locally.

- just blow them out, say, every three months.

- one can select the 'modular' type i.e. only plug in the cables you need.

- they are really quiet.

- about 750W is fine for most systems of interest to me.

- the 'sweet spot' is a broad plateau.

- the price is not outrageous.

Cheers, Mike.

( edit ) For me, the price is relative to what it is supplying e.g. a $2K machine ought to have good clean power ( possibly at proportional expense ). I add other forms of protection upstream of the box's power cable though, not least of which are neutral-leak triggers ( tripping when current that leaves a phase doesn't all return ) in addition to standard over-current fusing at the house's distribution box.

( edit ) Also FWIW : PSU quality may occasionally affect networking. While the signalling is typically differential, the links sit on floating/capacitive grounds, and thus there are limits to the common-mode variance they can tolerate ( i.e. you don't want to have to factor a household's wiring into your thinking .... ).

( edit ) A mate of mine is an MRI radiographer. While the magnetom ( the core of the device ) and the room it is in are appropriately RF shielded, the power draw is typically in the tens of kW ( often more than doubling from resting state to full power ). Hence the current pull onto the property has serious transients and requires special arrangements with the supplying utility. The switching is done via good old electromechanical relays. So they don't have any other devices ( e.g. computers ) on the same load at all. Separate circuits altogether. But there is still some capacitive/inductive coupling, so especially good cable shielding is mandatory. Err .... so don't install an MRI at home, folks ! :-)

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4861254633
RAC: 36453

Phil - There are so many

Phil - There are so many things I don't know about so many things that "giving advice" is a little daunting. But I do run quite a few computers, I've had to learn the hard, expensive way over time, and I almost think it would be irresponsible of me not to try to communicate more clearly than I have.

Stop me where I have it wrong:

A) You are going to all of this time, trouble, and expense in order to build a crunching farm because you have a desire to become a contributor to the science being done via distributed computing.

B) You would be unhappy if you spent a lot of time and money and were not contributing very much.

C) You are willing to put time into experimentation and research in order to make the most effective use of your initial money and the ongoing expense associated with contributing.

D) There are things you can assume your experience will have in common with thousands of other people, many of whom are vocal on various projects' message boards.

E) The question you have asked here is one facet of your acknowledgement that you could use some "expert" opinions regarding component selection.

With that in mind, here's my long-winded, final, input:

Everything else being equal, the most important thing you will do is select your GPUs. GPU selection will come down to two things: cost and heat. The more expensive your GPUs, the more effectively you will crunch numbers. Typically, the more numbers you crunch, the more heat you will generate.

So, if money were no object at all, your *prime* consideration would be getting rid of heat *inside* your video cards. To do that the card has to be surrounded by air that's cool enough to transfer the heat, or the card has to be water cooled.

Water cooling is not really very practical (for amateur crunchers as opposed to HPC centers) in a stack of server cases and costs a lot of money to purchase initially. It requires maintenance and has leak and condensation hazards associated with it. Every dollar you spend on water-cooling is money that is not directly spent increasing your number-crunching (see "B" above).

If, however, you went the water-cooling route, you could pack a considerable amount of crunching into a small space and choose where to deposit most of the heat instead of having the heat necessarily deposited near the GPU.

Having said that, a rack-mountable server case such as the one you have selected is going to make things miserably complicated.

So, once again I accept that the server case is the case you will use and that you will choose air cooling for your crunching equipment.

Therefore, the limitation you will face is the ability to remove heat from the case you have chosen. Yet you cannot make heat the primary component-selection criterion, because selecting only components that do not generate much heat will only result in your not contributing very much: cool-running GPUs don't contribute much because they don't burn much electricity, which is another way of saying they are "slow."

You can purchase a large enough number of "weak" GPUs to equal the productivity of one large GPU, but real estate and open PCIe slots become an issue; plus, six low-power video cards will cost more and consume as much power as one high-output video card, and if they are in close proximity to one another they will get hot despite individually running cool.

Most motherboards will have a maximum of two PCIe slots that run at x16. For most projects that does not matter and running at x8 or x4 is fine, but for the current projects here at Einstein@Home it does matter.

Assuming that you want to crunch Einstein, and considering the case you are going to use, your practical restriction for optimization will probably be two video cards per computer based on available PCIe slot speeds.
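For a rough sense of what the slot widths mean, here is a minimal Python sketch of the theoretical per-direction PCIe link bandwidth, using the usual published per-lane figures for PCIe 2.0 and 3.0. It only shows the ceilings; how close any particular application gets to them is a separate question.

# Rough theoretical PCIe link bandwidth per direction, for illustration only.
# Per-lane rates: PCIe 2.0 ~500 MB/s (5 GT/s, 8b/10b encoding),
#                 PCIe 3.0 ~985 MB/s (8 GT/s, 128b/130b encoding).
PER_LANE_MB_S = {"PCIe 2.0": 500, "PCIe 3.0": 985}

def link_bandwidth_gb_s(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_MB_S[gen] * lanes / 1000

for gen in ("PCIe 2.0", "PCIe 3.0"):
    for lanes in (16, 8, 4):
        print(f"{gen} x{lanes}: ~{link_bandwidth_gb_s(gen, lanes):.1f} GB/s")

The point is simply that dropping from x16 to x8 or x4 halves and then quarters the ceiling, which matters for applications that keep the bus busy.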

That gives you the potential of running twenty GPUs in a ten-computer rack. With 20 GPUs you can and will make a meaningful contribution to distributed computing's number crunching. You will likely be a top participant despite not having any single machine that would be a "top host."

In order to have a "top host" you will need at least four video cards running in a single computer, far beyond the point of diminishing returns (see also: PCIe bus speed reduction per PCIe slot used above two).

Therefore, to optimize your production per host, you will likely want two video cards per machine, and the best video cards you can get. "The best" two cards you can use (which is usually more cost-effective per work unit crunched than buying one massively awesome card anyway) are going to be determined by how much heat you can dissipate, and THAT is going to tell you how much power supply you must have.

It does not tell you how much power supply you *want*, however. You will want a lot of overkill. You really do not want to run a power supply 24/7 at its capacity. Capacity diminishes over time as the components "wear." Also, as has already been pointed out, you really want to operate your PSU so that its average output is somewhere in the 50% range (+/- 20%) of its stated capacity. It will generate less heat (which increases lifespan and reduces its need for internal cooling) and has the bonus side-effect of costing you less money to supply with cool air (air conditioning).
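To put rough numbers on that loading guideline, here is a minimal Python sketch. The wattages are made-up placeholders, not a recommendation for any particular build.

# Minimal sketch: pick a PSU capacity so the estimated 24/7 draw sits near
# the middle of its rating, per the ~50% loading guideline above.
def suggested_psu_watts(average_draw_w: float, target_load: float = 0.5) -> float:
    """Capacity that puts the given average draw at the target load fraction."""
    return average_draw_w / target_load

# Hypothetical example: two mid-range GPUs drawing ~170 W each while crunching,
# plus ~150 W for CPU, motherboard, RAM, drives and fans.
average_draw = 2 * 170 + 150          # ~490 W sustained
print(f"Estimated draw : {average_draw:.0f} W")
print(f"Suggested PSU  : ~{suggested_psu_watts(average_draw):.0f} W")
# -> roughly 1000 W of nameplate capacity keeps a ~490 W draw near 50% load.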

As you build ten of these machines you will find that electricity cost is not an insignificant portion of their total cost of ownership and the demand placed on your cooling system is not an insignificant portion of your total electricity cost. You will also discover that there is a "tipping point" where your cooling system will "tip" from being adequate to being inadequate with the addition of just one more system. This "tipping point" number will change depending on what project you are running, as well.

Overclocking is generally a killer. Overclocking, whether done at the factory or DIY, makes the cards require additional power and produce additional heat in excess of the marginal increase in computational output. Yes, it is desirable, but only if everything else is over-engineered, you don't care about your power cost or system longevity, and you accept that it will require additional power supply capacity plus additional cooling.

About cooling: it does not matter (well, not much) if the room is kept just above freezing. What matters is your ability to move air to and through the GPUs. The CPU will add heat, yes, but it is a small consideration unless you are trying to overclock the CPU/RAM combination. If you do that, you will increase your productivity only marginally while increasing the heat and power consumption to a greater degree.

My point is that you can cook a GPU in the freezer surrounded by 120mm fans if it is in a sealed box and does not have an adequate supply of moving air.

My best guess, looking at your chosen case, is that you will only be able to adequately cool two cards if each card has a TDP rating of something on the order of 180w (+/- 10%) and is designed to exhaust its hot air out the back of the case.

Could I be wrong?

Absolutely, no question, and I almost assuredly am. My point isn't to pinpoint an exact number, or to make a specific recommendation, but rather to help you find a "starting point" so you don't waste a lot of time and money buying two 300w cards and a 1500w power supply only to find out you can't cool it all, or conversely buying two 75w cards and a 450w power supply only to discover you aren't happy with their output.

I have one machine, until very recently a top Einstein producer, with the following:

(2) GTX 470
(1) GTX 560Ti-448
(1) GTX 560Ti Super Overclocked

Those are being powered by one 950w PSU above and one 850w PSU below. The case has four 140mm fans and two 230mm fans blowing in; one 140mm and one 230mm fan blowing out; the power supply fans are each 140mm sucking air out as well. The CPU is air-cooled and its fan is blowing from inside the case up almost directly through the 230mm fan on top and leaving the case immediately.

And yet... it is difficult to keep the GTX 470s within operating temperatures, and even the GTX 560Tis run in the low 60C range with all of their fans working correctly and set to maximum via Precision X when the machine is allowed to run the AstroPulse project. It runs somewhat cooler (but not cold) here at Einstein. It cannot run GPUGRID without overheating.

I have another almost identical setup with four 660Ti cards that runs cold, and another almost identical with one GTX 770 and three 670s that runs cool enough.

Another of my machines is in a somewhat smaller, but still large, case running two GTX 560s and one GTX 560Ti. This case has many fans as well (top, bottom, sides, front and back), and all of these cards run in the 60C range on SETI, the high 50C range on Einstein, and overheat on GPUGRID.

Another, in the same type of case, used to run three 670s and ran warm.

The difference is the size of the case, but mostly the amount of airflow through it.

The limit to what I can install in any case is completely determined by heat.

The limit you will encounter is heat.

My suggestion is that you prioritize your purchases by anticipated heat.

I did have a stray thought:

Have you seen the photos of the BOINC enthusiasts who run stacks of motherboards and video cards without a case? There is nothing to direct airflow, that's true, but there is no impediment to getting cool air to the boards and cards with a common box fan. Your HVAC guy may even have access to good squirrel-cage fans from household air-handlers and be able to really, really blow some serious air across the components if you went that route, and it would take no more space (possibly less) than the server cases you are considering.

Quad Titans

You're going to an awful lot of trouble, so you might consider going caseless. And you certainly can see and get to things more easily if it is all sitting out in the open.

I don't recommend it if you have a cat.

I'm finished being annoying. It's a fun hobby and I hope you find your perfect setup whatever it is.

I look forward to seeing how it goes.

Phil
Joined: 8 Jun 14
Posts: 579
Credit: 228493502
RAC: 0

Wow, awesome post tbret. Very

Wow, awesome post tbret. Very detailed and well thought out. It's hilarious that you should mention cats, as I just took on a housemate and he has 2 cats. I have no intention of running "open" electronics. I spent a lot of years as a tech in the Marine Corps and it just plain goes against the grain not to button things up lol.

As you say, there are many things you don't know (same for us all); however, with 224 million credits you obviously have had some success with number crunching. Don't sell yourself short :-) All of your points are, IMHO, valid, so I won't respond to each one to save post space.

The first 2 boxes will be experimental in nature, for most of the reasons you have brought to light. I just didn't articulate it properly, to myself or in this thread. So, with that being said, I have a "concept" machine in mind. I consider it a starting point to see how the whole heat thing goes.

My starting-point machine is as follows. These are general choices, no actual part numbers chosen yet, just a concept. This will probably start a firestorm of critique, but that's the whole point of these forums, is it not?

Fairly fast i7 processor in the 95-watt class
2 GTX 780-class GPUs, not the Ti models
1 SATA drive
16 GB of RAM

Not sure about the motherboard yet; I don't think it will make much difference heat-wise. Correct me if I'm wrong.

I currently have no plans to overclock the CPU.
I currently have no plans to water cool.
GPUs will be held to a temperature not to exceed 60C.
CPU cores not used to support the GPUs will run CPU work units.
GPUs will be models that exhaust their own air out the back of the case.

This basic concept machine is just a place to start checking crunching performance and heat dissipation. I have considered going for 3 GPUs, but was not aware of the PCIe throttling. I'll have to educate myself on that.

Quote:
Having said that, a rack-mountable server case such as the one you have selected is going to make things miserably complicated.


We will have to agree to disagree on that point, BUT, if I'm wrong I will gladly bow and grovel and (gulp) admit it...I do love a good challenge hehe.

So, I'm getting closer. Thanks for your input, plenty of food for thought.

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4861254633
RAC: 36453

RE: We will have to agree

Quote:

We will have to agree to disagree on that point, BUT, if I'm wrong I will gladly bow and grovel and (gulp) admit it...I do love a good challenge hehe.

We don't disagree. I allowed myself to be misunderstood. I was opining that the server case would make water-cooling a nightmare. That's why my next line was about assuming you would go with that case and air-cool.

Three quick comments based on what you've told us:

1 - get a kilowatt or more of PSU. (You will gloat early with 750 and rue the day three years from now.)

2 - you only need 8 GB of RAM. You really need even less, but 8GB is cheap enough.

3- find some way of keeping the CPU cool without circulating the heat into the case.

Enjoy! The research and initial acquisition are more fun than ownership.

Phil
Joined: 8 Jun 14
Posts: 579
Credit: 228493502
RAC: 0

Ok, so water cooling is off

Ok, so water cooling is off the table, we can put that dog to bed lol.

Quote:
1 - get a kilowatt or more of PSU. (You will gloat early with 750 and rue the day three years from now.)

I'm quickly coming to that conclusion as I add things up.

Quote:
2 - you only need 8 GB of RAM. You really need even less, but 8GB is cheap enough.

I was only going for 16 GB assuming future programs would be larger, but shifting processing to GPUs kind of negates that assumption. Besides, the extra 8 GB would mean around 80 dollars of electronics sitting there doing basically nothing. Bad investment. I'll go with 8 GB.

That said, I know I need to do some more studying in that area. Right now the plan is to use (made-up number here) 4 of 8 cores for the care and feeding of the GPUs. The other 4 cores would run CPU work units.

Quote:
3- find some way of keeping the CPU cool without circulating the heat into the case.

I'm going to start with the assumption that choosing GPUs which exhaust outside the case will drastically help temps inside the case. There's no way to know until I actually build a box, eyeball the layout, then fire it up and push it with some crunching.

Question, since you obviously have more experience with GPUs for crunching: I've looked (briefly) at some motherboards that will support 2 x16 GPUs. For the purposes of crunching, does x16 make that much difference? Will a card of the 780 class push the bus that hard?

Thanks.

mikey
Joined: 22 Jan 05
Posts: 12658
Credit: 1839055536
RAC: 4419

RE: RE: 2 - you only

Quote:

Quote:
2 - you only need 8 GB of RAM. You really need even less, but 8GB is cheap enough.

I was only going for 16 GB assuming future programs would be larger, but shifting processing to GPUs kind of negates that assumption. Besides, the extra 8 GB would mean around 80 dollars of electronics sitting there doing basically nothing. Bad investment. I'll go with 8 GB.

That said, I know I need to do some more studying in that area. Right now the plan is to use (made-up number here) 4 of 8 cores for the care and feeding of the GPUs. The other 4 cores would run CPU work units.

This is not necessarily true. At Einstein especially, I have seen PCs with 4 GB of RAM take over an hour longer to process a GPU unit than a similar machine with 16 GB of RAM in it. The 4 GB machine did speed up when I stopped using ANY CPU cores for crunching, though; when I used even one it was back to being slower. The 16 GB machines can cruise through the units even if I only leave 1 CPU core free, and they seem to have fewer errors too.

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4861254633
RAC: 36453

RE: Question, since you

Quote:

Question, since you obviously have more experience with GPU's for crunching. I've looked (briefly) at some mb's that will support 2 x16 GPU's. For the purposes of crunching, does x16 make that much difference? Will a card of the 780 class push the bus that hard?

I'd like to answer you very specifically, but I can't.

I don't have a 780 and don't correspond with people running more than one 780 at Einstein. You might want to look through the "top hosts" list and PM someone who does.

Nobody I can remember complained about 680s, but at Einstein a GTX 690 just needed more "umph" than an x16 connection could provide. That really disappointed a few people.

Since you are going to use 780s, I don't think you want a cheap motherboard. The better ones with the fastest chipsets should all have two x16 slots.

I think I hear the "But I really want three 780s" gears turning.

Maybe, Phil. Maybe.

Your first challenge is the 7 expansion slots in that case. Each of your cards will take two. You have to be sure the motherboard slots line up correctly with the expansion slots on the case. You'll only need six, but they have to be the right six, and that will depend on the motherboard.

Often the third PCIe slot on the motherboard is one slot too low for a seven-expansion-slot case. You'll notice that the cases I settled on have ten expansion bays. My motherboards have six PCIe slots in order to get four of them in the right place.

My three-card cases (I feel like such an adolescent) are these:

Rosewill "Thor"

Notice that these, also, are 10-expansion-bay cases although I only need six. Again, the ones you need have to be in the right place, and that's why I bought these cases initially. My smaller, preferred cases would not allow for three double-slot cards using the motherboards I already owned.

You may find you have to buy a quad-SLI-capable motherboard to get three cards in that case, or you may find you can't use the "top" slot, and that's really a bummer.

If you add the third 120mm fan in the optical drive space and then all of the interior fans, then get a really, incredibly high-CFM pair of 80mm fans AND pay close attention to the fan in the power supply... maybe. It'll sound like a Harrier taking off in your room, but it might work.

I didn't go buy my monster cases for fun or to impress myself. I don't even like them. It's what it took to mount and cool four 170w cards.

This is what I ended up buying for the four-card systems out of desperation:

Full Tower

There are others. These were on sale. I'm sure many other people have had many other successes with many other cases.

I'm also sure a lot of people have had failures with many other cases. What those folks tend to do is get frustrated and blame the card manufacturers for inadequate cooling, then curse some brand or other for being "so stupid," then throw a fit and blame IBM for ever conceiving of the PC, then begrudgingly get another case with more fans and live happily ever after.

I've been there and done that and ended up at 2:00am with a hole saw mounted to a 3/4hp drill in my right hand.

What you could do, if you are really serious about this, is cut a hole in the side of the case and use a closed-loop liquid cooler on the CPU with its fans exhausting through that hole.

Don't laugh. People go to great lengths.

New subject-

The way to do this core-thing is to get the 780s crunching at their best, then add one core of CPU-crunching and see what happens. If nothing happens, add another core.

That won't go on for long before you notice the GPU work is slowing.

I think you will be surprised to see that even when not CPU-crunching, more cores than you would have thought will have *some* traffic on them.

I gave up and stopped running CPU tasks at all (except occasionally by accident), but then I'm running AMD CPUs, and they are notoriously bad crunchers so it wasn't worth any effort. Your experience with an i7 will be different and should be better.
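If you want to make the "add a core, see what happens" test a little more objective than eyeballing, something like this minimal Python sketch works, assuming you export or jot down the elapsed time of each GPU task under each configuration; the file name and its two-column format are made up for illustration.

# Minimal sketch: compare average GPU task run times between configurations
# (e.g. "4_cpu_tasks" vs "5_cpu_tasks"). Assumes a hypothetical CSV where each
# row is: configuration label, elapsed seconds for one GPU task.
import csv
from collections import defaultdict
from statistics import mean

def average_runtimes(path):
    """Return mean GPU task runtime (seconds) per configuration label."""
    runtimes = defaultdict(list)
    with open(path, newline="") as f:
        for label, seconds in csv.reader(f):
            runtimes[label].append(float(seconds))
    return {label: mean(vals) for label, vals in runtimes.items()}

if __name__ == "__main__":
    for label, avg in sorted(average_runtimes("gpu_task_times.csv").items()):
        print(f"{label}: {avg / 60:.1f} min per GPU task on average")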

New Subject ---

Mikey says he knows of cases where 8GB of RAM didn't seem to be enough. I'm wondering if that's someone trying to crunch on six cores and two "Hyper-threads" and pushing multiple GPU work units down each of multiple GPUs.

In any case, I've not seen an instance where 8GB wasn't enough, but if he says he has seen that, then I'm sure he has. What's another $80, right?

Let us know when you get the first one built and what your experiences are. You will probably be of help to others as they contemplate the same sort of thing if you share the successes and any failures you encounter.

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4861254633
RAC: 36453

I just went and paced and

I just went and paced and smoked a cigarette.

Let me try this: an analogy, as bad as they are.

Let's say you had three cards per case and a 100w CPU, plus motherboard, plus RAM, plus a drive, and an 80% efficient power supply.

Now, let's say during nominal crunching each card was really pulling 150w (crunching doesn't really use every part of a card at 100%) so the whole rig is really pulling 600w 24/7. I think that's conservative enough.

Imagine along with me that you took that case and mounted six one-hundred-watt lightbulbs in it and closed it up.

How much air would you have to move through that case to keep those lightbulbs at 60C or less?

It's not a perfect visualization, but it ain't bad.
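To put a very rough number on the lightbulb picture, the standard sensible-heat relation gives the airflow needed to carry a given heat load out of the case at a chosen air temperature rise. A minimal Python sketch, using the ~600 W figure from the analogy above; the 10 C temperature rise is an assumption, and this is air actually passing over the hot parts, not fan nameplate CFM.

# Minimal sketch: airflow needed to remove a steady heat load from a case,
# using the sensible-heat relation  Q = rho * cp * flow * deltaT.
RHO_AIR = 1.2        # kg/m^3, air density near sea level
CP_AIR = 1005.0      # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88

def required_airflow_cfm(heat_w, delta_t_c):
    """CFM of air needed so the exhaust runs delta_t_c above the intake."""
    flow_m3s = heat_w / (RHO_AIR * CP_AIR * delta_t_c)
    return flow_m3s * M3S_TO_CFM

# ~600 W dumped into the case, letting the air warm 10 C on its way through:
print(f"~{required_airflow_cfm(600, 10):.0f} CFM")   # roughly 105 CFM

Real-world delivery through filters, drive cages and card coolers is a good deal lower than what the fan boxes claim, which is the whole point of the analogy.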

Anonymous

I would strongly encourage

I would strongly encourage you to read this thread regarding PCIe slot performance. Read the entire thread, or scroll down to message 127974 (written by "ExtraTerrestrial Apes") and read from that point. I learned a lot. It was not how I thought things worked.

No sense in loading a PC with high-end GPUs if you're not going to get the performance you expect.
