New PC build PSU

Phil
Joined: 8 Jun 14
Posts: 579
Credit: 228493502
RAC: 0

Hi all,

This is in response to Mikey and tbret.

Nothing is carved in stone yet, and as usual y'all have given me plenty to consider.

I'll consider the 8 GB vs. 16 GB argument when the time comes to buy RAM. That has been placed in my notes as needing to be addressed, but I'd rather discuss it in another thread, as this thread is about power and heat dissipation. (Smiley face goes here)

I'm going to try to simplify the thread a bit (within reason) so we don't get too far afield here.

So, on with the whole power issue.

Let's simplify a bit and say I want to run 3 "high-power" cards (not necessarily 780s), assuming of course that your concern about the 7 expansion slots can be solved. (I now know from your comment to be on the lookout for a problem in that area.)

I've mentioned this before, but it got lost in the pile of information flying around in this thread. Wouldn't cards that exhaust out the rear of the case have a substantial effect on the interior case temp? With the understanding that a certain amount of heat will radiate from the cards into the case interior instead of being exhausted.

In reference to your analogy with the light bulbs: not bad at all, very easy to understand. Let's relate it to the above. What if 4 of those bulbs were exhausting their own heat out the back door? Now I'm only dealing with 200 watts to get rid of somewhere.

Now, I'll address the motherboard because that's the foundation of the entire system. I'm assuming I'll need a mobo that can run 3 x16 cards at the x16 speed. So far I have invested about 2 hours online and have only found a couple that would do 2 x16. Maybe I haven't found the right place yet, or, if you wouldn't mind, link what you are using.

Finally, I do plan on documenting this process in a thread when I'm done. I'm sure lots of people will be able to scratch some knowledge from it.

As always, thanks for the input. You were right when you said the design and assembly part was more fun. Sitting there watching numbers go up is like watching grass grow.

Phil

Phil
Joined: 8 Jun 14
Posts: 579
Credit: 228493502
RAC: 0

Thanks for the link robl. I'll go take a look.

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4818385116
RAC: 216510

Quote:

I've mentioned this before, but it got lost in the pile of information flying around in this thread. Wouldn't cards that exhaust out the rear of the case have a substantial effect on the interior case temp? With the understanding that a certain amount of heat will radiate from the cards into the case interior instead of being exhausted.

In reference to your analogy with the light bulbs: not bad at all, very easy to understand. Let's relate it to the above. What if 4 of those bulbs were exhausting their own heat out the back door? Now I'm only dealing with 200 watts to get rid of somewhere.

Phil, I didn't lose that question, I answered it a different way.

"What if 4 of those bulbs were exhausting their own heat...." Well, they aren't outside the case, right? So they aren't exhausting their own heat.

Bulbs don't exhaust anything and neither do GPU-chips. A fan has to pull cool air from somewhere and interface with the hot object and the heat has to transfer from the object to the air which THEN gets exhausted.

There is a mathematical function for this heat transfer. I don't know it, but basically it says you can increase the velocity through the heat exchanger or you can reduce the temperature of the air over the heat exchanger and either will cool the heatsink. Conversely, it doesn't matter how hard the fan blows if the air is hot or not moving very much.

The cool air supply has to equal or exceed the demand; otherwise the transfer is inhibited, and it doesn't matter where the air goes, because no heat ever got transferred: there was nothing to transfer it to, since the air was already hot and/or not moving fast enough.
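To put very rough numbers on that, the standard relation is Q = m_dot x c_p x delta_T: the heat removed equals the mass flow of air times air's specific heat times how much the air is allowed to warm up. Below is a minimal back-of-the-envelope sketch in Python; the 600 W load and the 10 C allowed temperature rise are assumed example figures, not measurements from anyone's case:

    # Rough airflow estimate: how much air must pass over the heatsinks to carry
    # away a given heat load at a given allowed temperature rise.
    # Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
    # The 600 W load and 10 C rise are example assumptions, not measurements.

    AIR_DENSITY = 1.2     # kg/m^3, air at roughly room temperature
    AIR_CP = 1005.0       # J/(kg*K), specific heat of air

    def airflow_needed(watts, delta_t_c):
        """Return (m^3/s, CFM) of air needed to absorb `watts` with a rise of `delta_t_c`."""
        mass_flow = watts / (AIR_CP * delta_t_c)   # kg/s of air
        vol_flow = mass_flow / AIR_DENSITY         # m^3/s
        return vol_flow, vol_flow * 2118.88        # 1 m^3/s is about 2118.88 CFM

    m3s, cfm = airflow_needed(watts=600, delta_t_c=10)
    print(f"~{m3s:.3f} m^3/s (~{cfm:.0f} CFM) of cool air for 600 W at a 10 C rise")
    # Halve the allowed rise (i.e. warmer intake air relative to the target) and the
    # required airflow doubles, which is the point: hot or stagnant supply air
    # defeats the card fans no matter how hard they spin.

Under those assumptions it works out to roughly 100 CFM of genuinely cool air that has to reach the cards, not just get stirred around inside the box.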

Your question assumes an adequate volume of cool-enough air or that the fans will "pull it" from "somewhere" without enough help.

This IS a really bad analogy, but what happens if you park your car on blacktop on a 95F day in August and leave the engine running with the AC on? It can still overheat, even though the radiator fan's running, right? Why doesn't that happen when you are moving at 75 mph?

Why doesn't it happen parked at the base of a glacier in February?

Your question assumes the video card has an adequate supply of cool air available.

I'm telling you that the challenge is providing an adequate supply of cool air. Yes, you could probably drop the ambient room temperature to 15F and it would take less volume, but in any room in which you don't have to wear a parka, the challenge is to get a volume of air to the cards.

You are assuming the volume of air will "just appear" in a box that's closed on four sides and mostly closed on a fifth. Either that or you are assuming the cards' fans are enough by themselves to create a wind-tunnel through the case. They aren't.

All those fans in my cases aren't just blowing "across" the cards. I promise, if I turned the cards' own fans off but left 100 other fans blowing on them I would burn them up in a matter of seconds -- UNLESS I had a hurricane-force wind blowing over them, of course, or maybe... -100F air.

That's why I earlier mentioned an air handler's cage fan meant to heat and cool an entire house. You've got temperature, volume, and the surface area of the heatsink. Unless you rebuild the cards, you only control two of the three. Unless you want to freeze your ham off, there is a limitation on one of them.

But you know --- try it and laugh at me and call me names if I'm wrong.

I understand what you want to believe. It just isn't so.

Phil
Joined: 8 Jun 14
Posts: 579
Credit: 228493502
RAC: 0

Hi all,

Response to tbret.

I think maybe I just have a misunderstanding. I do understand that I need an adequate supply of cooling air. You can't blow air out of the case without it coming from somewhere, I agree with that.

I was trying to use your own analogy to compare the bulbs to the cards. Maybe I failed miserably lol.

This is what I was trying to say. Using cards that exhaust outside of the case should leave less heat inside the case than cards that simply blow over a heat sink and the hot air remains inside the case.

I sincerely hope that I have not driven you crazy with this, and I think we can debate this till the cows come home and not come up with a perfect answer without actually trying it out.

I plan on an incremental build, up to the point the system starts to complain, and then I'll see how things are.

How about this: I'll start building and take it a step at a time. Add one card at a time, etc. I'll take that particular case to its limit (without cutting holes or getting crazy) and report the process and final results.

And finally,

Quote:
But you know --- try it and laugh at me and call me names if I'm wrong.

Never gonna happen. I have much funnier ways of dealing with such things. All in good fun of course, we're all here to learn and enjoy the science. Besides, the chances of me being absolutely right are...I don't wanna go there lol. I'm betting we both land in the middle.

Phil

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4818385116
RAC: 216510

Quote:

This is what I was trying to say. Using cards that exhaust outside of the case should leave less heat inside the case than cards that simply blow over a heat sink and the hot air remains inside the case.

YES, which is why I said you *must* use that type of card. Unfortunately, that isn't all you must do.

Quote:

I sincerely hope that I have not driven you crazy with this, and I think we can debate this till the cows come home and not come up with a perfect answer without actually trying it out.

Would it surprise you to discover that I have been hunting my cyanide pills all night? Yeah, we can argue/debate/discuss until Sol goes all red giant on us, but that doesn't change physics.

Quote:

I plan on an incremental build, up to the point the system starts to complain, and then I'll see how things are.

This is a GOOD PLAN. I'm just trying to get you to see where you are going to end up if you keep going down the road toward the destination you have already described. I've been down the road to Damascus, and the reality is that if you keep going that way, you'll end up in Damascus.

Quote:

How about this: I'll start building and take it a step at a time. Add one card at a time, etc. I'll take that particular case to its limit (without cutting holes or getting crazy) and report the process and final results.

I think the whole third video card thing is academic.

You could just do what's demonstrated early in this video and stop worrying about it.

http://www.youtube.com/watch?v=aepBI9GJpQ0

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5845
Credit: 109969162418
RAC: 30384805

Quote:
Quote:
This is what I was trying to say. Using cards that exhaust outside of the case should leave less heat inside the case than cards that simply blow over a heat sink and the hot air remains inside the case.

YES, which is why I said you *must* use that type of card. Unfortunately, that isn't all you must do.


I've had other priorities which have prevented me from responding for the last several days. I want to add a few thoughts to the discussion so hopefully better late than never.

I have cards of the above type - double-size backplate with ventilation slots and a plastic shroud over the heatsink that directs the airflow (supposedly) out through the ventilation slots. The ones I have do a pretty lousy job of preventing air leakage internally. By 'feeling' the airflows, I reckon most of the moving air leaks internally. Maybe more expensive cards are better designed, but that should really be checked rather than counted on to prevent the problem.

In the same message, Phil also said,

Quote:
Besides, the chances of me being absolutely right are...I don't wanna go there lol. I'm betting we both land in the middle.


I'm betting Phil risks landing much closer to tbret's end than in the middle :-).

@Phil,
My guess is that tbret's advice has been pretty much correct, and that it will be difficult and expensive, and also much harder on your failing hands (you mentioned these earlier), if you continue planning on three high-end cards per server case.

Take a look through the top computers list here on Einstein. On the first page, 20 out of 20 are AMD setups, even though there are more NVIDIA rigs than AMD rigs contributing here as a whole. If you are looking for productivity, why aren't you considering the current equivalent of an HD7970 (an R9 280X, I believe) rather than a GTX 780?

Also notice that the current #s 9, 10 and 12 (and several more closely following) have only a single GPU and yet have RACs around 130K. The ones above these have multiple GPUs, and on a per-GPU basis the best of them (#1) is below 100K per GPU. There is a significant production penalty for running multiple GPUs on the one motherboard, quite aside from the heat problem.

My experience has been that you will get better output per $ of total cost (capital and running) if you come back to the mid range rather than the top end of all the parts you need. There is a considerable 'top end enthusiast premium' that manufacturers are very eager to apply. I object to paying that.

Take a look at #s 54 and 56 in the top hosts list (Linux 3.12.16-pclos3). They have single HD7850s, which cost $US130, and have RACs around 75K. They draw less than 200 watts from the wall and are extremely easy to keep cool. Or you could look at #s 58 and 60 (also Linux 3.12.16-pclos3). There's a power meter on #58 at the moment and it's fluctuating between 158 and 163 watts. They have RACs around 72K since they only have i3 (dual core + HT) rather than i5 processors.
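To make the 'output per $' point concrete, here is a minimal sketch in Python of the running-cost arithmetic for one of those ~160 W hosts. The wattage and RAC are the figures quoted above; the electricity rate is an assumed placeholder, so substitute your own tariff:

    # Rough running-cost / efficiency sketch for a single-HD7850 host drawing
    # about 160 W at the wall with a RAC of roughly 72K (figures from above).
    # The electricity rate is an assumed placeholder - adjust to your own tariff.

    WATTS_AT_WALL = 160       # measured 158-163 W on one host
    RAC = 72_000              # approximate recent average credit
    RATE_PER_KWH = 0.25       # assumed price in $/kWh

    kwh_per_year = WATTS_AT_WALL * 24 * 365 / 1000       # about 1400 kWh
    cost_per_year = kwh_per_year * RATE_PER_KWH          # about $350 at the assumed rate
    rac_per_watt = RAC / WATTS_AT_WALL                   # about 450 RAC per watt

    print(f"Energy: {kwh_per_year:.0f} kWh/year")
    print(f"Cost:   ${cost_per_year:.0f}/year at ${RATE_PER_KWH}/kWh (assumed)")
    print(f"Output: {rac_per_watt:.0f} RAC per watt at the wall")

The same arithmetic applied to a big multi-GPU box makes the comparison easy: if it delivers less than 100K RAC per GPU (as the top multi-GPU hosts above do) while drawing correspondingly more power, its credits-per-watt and credits-per-dollar figures can easily fall behind the modest single-GPU hosts.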

Obviously, you do need extra space for more single-GPU hosts, but that can also be a benefit (if you can find the space) for heat dissipation. If you cram lots of crunching power into a small space, you will have heat dissipation problems.

Even if you could perfectly remove the heat inside the box to the room outside, you've still got to deal with the heat in the room. To me, it was crazy to spend heaps on air conditioning to transfer the room heat to the outside. I was fortunate to have the opportunity to design a much less costly solution.

The photo below shows a part of the room where I run most of my hosts.

There are three bays of pallet racking (surprisingly cheap out of China) of which the middle and right hand bays can be seen. Between the bays, there are two work tables with two screens, keyboards and mice on each table. The cables are long enough to reach to the middle machines in each rack. Each 'shelf' in the rack is spaced sufficiently for easy access and for airflow. The shelves are strips of plywood with big gaps so that hot air can escape vertically upward as well as horizontally in the space between shelves. The middle rack has space for 40 hosts (8 shelves of 5 hosts) with lots of storage for retired hosts (or whatever) above the working hosts.

At the end of the room in the top left corner, you can see the inlet for a circular exhaust fan that can move a serious amount of air. There is a corresponding inlet fan well out of picture behind where the photograph was taken from. The airflow through the room (and through the racks) is quite noticeable.

Each host is mounted backside facing out for easy connection of cables when needed. A host can be lifted out of its position and placed on a table (just visible at the right of the photo) for easy servicing. The cases are just old style desktop units with all unnecessary covers, bays, CDs, etc removed. In normal operation, each host has a power cable and network cable only connected. The following photos show some of the details of a host.

The photo below is a top view of one of the hosts with an HD7850 GPU. You can see two 175W PSUs, one for the motherboard and one for the GPU.

Below is a side view of the same machine showing the open ventilation path from the sides and from above.

This whole setup is located in Brisbane, Australia. The climate is sub-tropical and the daily ambient temperature range in summer is around 22 to 30C and in winter around 9 to 22C. Quite often in summer it can get to around 33-35C, with occasional forays to around 40C.

There seems to be no problem with computers failing, even when the outside temperature gets to 33-35C. The room temperature gets to around 36-38C - too hot to actually work in there, but the computers seem to be able to cope. I have a script that I run when a hot day is predicted. It simply visits each host and suspends BOINC for a selected period, e.g. perhaps 10 AM to 4 PM or so. The room temperature responds quite quickly when I run the script. If I recall correctly, I ran the script on about five days last summer. I never have to run it in winter like now.
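The script itself isn't shown here, but for anyone wanting to do something similar, here is a minimal sketch of the idea in Python (not necessarily how my version works): it walks a list of hosts and asks each BOINC client to stop computing for a set number of seconds via boinccmd's remote RPC. The host addresses, RPC password and six-hour window below are placeholders, and each client must have remote RPC access enabled:

    #!/usr/bin/env python3
    # Minimal "hot day" helper: ask each BOINC client on the LAN to stop
    # crunching for a fixed window via boinccmd's remote RPC.
    # Host addresses, the RPC password and the six-hour window are placeholders.

    import subprocess

    HOSTS = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]  # placeholder host list
    RPC_PASSWORD = "changeme"                                  # placeholder GUI RPC password
    SUSPEND_SECONDS = 6 * 3600                                 # e.g. roughly 10 AM to 4 PM

    for host in HOSTS:
        cmd = ["boinccmd", "--host", host, "--passwd", RPC_PASSWORD,
               "--set_run_mode", "never", str(SUSPEND_SECONDS)]
        result = subprocess.run(cmd, capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else "failed: " + result.stderr.strip()
        print(host, status)

The run mode should revert on its own once the duration expires, so there is nothing to undo after the afternoon heat has passed.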

I hope that the above might give you (or anyone else reading this) some ideas of what can be done to cope with the heat produced when running a 'farm' :-).

Cheers,
Gary.

Mumak
Joined: 26 Feb 13
Posts: 325
Credit: 3317658607
RAC: 1194908

And this is the photo of the main PSU providing power to Gary's room:


robl
Joined: 2 Jan 13
Posts: 1709
Credit: 1454554096
RAC: 3616

Gary,

I must salute you for your support of distributed computing and the investment of real estate, computers, time and electricity donated. I certainly hope you find a pulsar or gravitational wave. The most annoying event for you and others like you who have made such a hefty investment would occur when the guy down the street running a single Pi finds one first. :>)

I do have some questions. They are not intended to start any type of "flame war", but to clarify, for my benefit and others', the reasons why you did something. For example:

1. backend out. I had not considered this, but I like the idea. I have 3 desktop machines mounted on a rack, front end out. They have their covers in place and vent all internal heat to the back. This causes a "heat cloud" to form between them and the back wall. Venting to the front (backend out) in my setup would be a better idea and allow for greater heat dissipation/distribution.

2. All of your boxes are open. This "open box" approach poses an old argument: which is more effective/efficient, an open box or a closed box? With all of your machines generating heat, I would think that heat rising from the "bottom" machines would cause overheating problems for the machines above. In a closed-box system, the heat of each machine would in theory be vented into the room (backend out), where an oscillating fan could distribute it and the large exhaust fan could remove it. Having said all of this, that single large exhaust fan you pointed out might be generating sufficient airflow over your open PCs, and then the open/closed argument becomes moot. An ambient room temp of 100F is scary. I have always worked in commercial environments where the ambient temperature supports "hanging meat". I would think that, even with that large exhaust fan, the "heat cloud" along that one wall containing your rack-mounted PCs would be intolerable.

3. Do you provide any type of UPS for this equipment?

Thanks for your patience. Again, I am not questioning what you have done; it is obviously working, based upon your RAC. Just trying to achieve enlightenment.

I have three large desktop machines which are vertically mounted on a rack. I am now considering mounting them backend out. This makes sense just from a cable-access perspective. They are closed, but the heat buildup between the back end and the wall they are against is not good. By taking your approach, the heat would be vented out into the room, making for better heat dissipation. If, however, my next month's electric bill is >= last month's, I might have to cut back on my summer's commitment to E@H. I live in Florida and my ambient room temp is around 80F with the A/C running 90% of the day. I also run home automation to control hydroponic pumps. This contributes to the electric bill, but I believe the bulk is from crunching.

Again, congrats on your setup. You have made quite a commitment, and we in the community appreciate your efforts.

Filipe
Joined: 10 Mar 05
Posts: 176
Credit: 366501167
RAC: 42913

Gary,

Impressive setup you have there.

The AMD cards are the top hosts because of their wider (384-bit) memory bus compared to the NVIDIA ones. In light of your knowledge, do you think the upcoming 14nm GPUs, with possibly a 512-bit memory bus and lower energy requirements, are worth waiting for? (Better performance for less energy?)

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5845
Credit: 109969162418
RAC: 30384805

Quote:
... The most annoying event for you and others like you who have made such a hefty investment would occur when the guy down the street running a single Pi finds one first. :>)


I don't care who 'makes the find' - it's sufficient for me simply to be involved. I derive my satisfaction from the contribution itself.

Quote:
1. backend out.


I've always done this just for the ease of attaching screen, keyboard and mouse. In the current room arrangement, it also helps by pushing hot air into the path of strongest airflow.

Quote:
2. All of your boxes are open.


In the experiments I've conducted, open boxes have always given better results. Whilst there is a temperature gradient from bottom to top, it's only a few degrees (the bottom levels are quite cool) and it can easily be minimised by allowing greater horizontal spacing between boxes. There are quite nice updrafts rising through those spaces, a bit like a chimney effect. The proof of the pudding is that machines don't lock up or crash - in general they run for months without problems, particularly once they've been converted to Linux.

Most of the machines in the two racks you can see have been running continuously for more than four years, some for more than five. When first built, 12 of them had HD4850 GPUs and crunched for Milkyway. Only one of those GPUs has failed. The remaining 11 continue to crunch - MW on the GPUs and E@H on the CPUs. Over the last 18 months, I've added GPUs to existing machines and built new machines as well. There have been very few hardware failures, especially considering the age of lots of the components - lots of 12-year-old disks and PSUs.

Quote:
3. Do you provide any type of UPS for this equipment?


No. If the power goes off, I restart everything by hand, fixing any damage in the process. Fortunately, the power is reasonably reliable - maybe six months at least between outages.

I made a conscious decision early on not to double my electricity bill by trying to maintain a low ambient machine room temperature with air-conditioning. I have a separate office adjacent to but insulated from the machine room. I guessed that machines running continuously might actually be reasonably reliable, even though a bit warmer than what would be desirable, as long as I kept removing the hot air. I'm very satisfied with the performance - it has really exceeded my expectations.

Cheers,
Gary.
