Advice needed on New Build

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0
Topic 196572

Hello,

I'm preparing to do my first self-build. The machine will run Windows and will be used primarily for office work and BOINC GPU computing (mainly Einstein@Home, possibly SETI). Gaming performance is not a consideration, though obviously I'll have a pretty decent start with high-end GPUs installed. I don't intend to overclock and don't need large file storage.

Budget is only a consideration in terms of "bang for the buck"; I want to maximize work units per dollar spent, counting both hardware and electricity.

I've been buying components when on sale, and already have:

32GB Corsair Vengeance 1600MHz (4 x 8GB)
1TB 7200 RPM HDD
H100 Water Cooler
2 * HD 7970 GPUs (but see below).

Some considerations:

- It appears I've probably purchased the wrong GPUs for BOINC. Although the HD 7970s are great for gaming, from what I can tell Einstein's CUDA app is much faster, and SETI's OpenCL app is still in beta. The cards I have are still returnable.

- Since this is my first self-build, I want to be a little conservative about cooling. Rather than sticking 4 GPUs in one case and then realizing I'm not experienced enough to cool them effectively, I'm going to start with 2. However, I want a build that could take 3 or 4 GPUs later (unless it makes more sense to just build a second 2-GPU system, that is).

- I'm going to spend some extra cash on a Mountain Mods UFO2 case for the extra build room and ability to upgrade my cooling potential. Plus, I like the looks.

Questions:

- GPU Model: What's the best price/performance GPU to get right now if I'm mostly doing Einstein with some SETI on the side? GeForce 660 Ti? If I get a 3GB model instead of the stock 2GB, will that improve my task times or the number of simultaneous GPU tasks I can run?
- Number of GPUs: I'm unsure whether it would be better to just go with two GPUs in one box (meaning I could run them with a less expensive 8-core processor and save money on the power supply) and build another system later if I really wanted to up my numbers, or if I should go for 12 cores now in anticipation of a 3 or 4-GPU box later.
- Motherboard: the P9X79 Deluxe has an x16/x16/x8 configuration. To the best of my knowledge, Einstein performance drops 30% when a card is run in an x8 slot, though I think that number was for PCIe 2.0, and 3.0 has more bandwidth. That means that, for any given GPU, I'd get 200% of a single card's performance with two GPUs and 270% with three (see the quick sketch below). If I go with a P9X79 WS, it can handle 4 GPUs, but I believe they all run at x8 in that configuration, which means 280% (4 x 70%) of a single card. That hardly seems worthwhile, and seems to push for just doing a 2-GPU or 3-GPU build if Einstein is my primary target.

If I do go with a 2 GPU system to run both at 16x, and 8 cores are enough to fully use the two cards, any recommendations on processor and motherboard?
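Here's a quick back-of-envelope sketch of that scaling arithmetic, assuming (purely for illustration) that a card in an x8 slot delivers 70% of its x16 throughput; the 70% figure is the anecdotal number above, not something I've measured:

```python
# Back-of-envelope estimate of aggregate GPU throughput for the slot layouts
# discussed above, assuming a card in an x8 slot does 70% of the work it
# would do at x16 (an anecdotal figure, not a measured constant).

X16, X8 = 1.00, 0.70  # assumed relative per-card throughput by slot width

layouts = {
    "2 GPUs (x16/x16)":     [X16, X16],
    "3 GPUs (x16/x16/x8)":  [X16, X16, X8],
    "4 GPUs (x8/x8/x8/x8)": [X8, X8, X8, X8],
}

for name, cards in layouts.items():
    print(f"{name}: {sum(cards) * 100:.0f}% of one x16 card")
```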

Thanks much for any advice.

dmike
dmike
Joined: 11 Oct 12
Posts: 76
Credit: 31369048
RAC: 0

Advice needed on New Build

Quote:
To the best of my knowledge, Einstein performance drops 30% when a card is run in an x8 slot, though I think that number was for PCIe 2.0, and 3.0 has more bandwidth. That means that, for any given GPU, I'd get 200% of a single card's performance with two GPUs and 270% with three.

I'd like someone to comment on that if they know for sure. I really don't see how an x8 slot would drop performance. The amount of data is so small I can't see it even saturating x8.

Here is someone testing CrossFire at x8 and x16. There is almost no difference in his performance results:

http://forum.guru3d.com/showthread.php?p=4375117

BOINC may be different, but it likely has more to do with cores and clock speed than bus bandwidth. I could be wrong, though... it just seems to me that x8 on 2.0 is more than enough to handle BOINC, as that's not really where the stress is.
Consider that PCIe 2.0 is 500 MB/s per lane, so x8 would be 4 GB/s. BOINC is just not moving that much data crunching a WU.
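For reference, a minimal sketch of that lane arithmetic (the per-lane rates are the usual effective figures; exact numbers depend on protocol overhead):

```python
# Quick bandwidth arithmetic for PCIe slots. Per-lane rates are the usual
# effective figures: 2.0 ~ 500 MB/s per lane, 3.0 ~ 985 MB/s per lane
# (3.0 gains more from its 128b/130b encoding than from the raw clock).

PER_LANE_MB_S = {"2.0": 500, "3.0": 985}

def slot_bandwidth_gb_s(gen: str, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe slot in GB/s."""
    return PER_LANE_MB_S[gen] * lanes / 1000

for gen, lanes in [("2.0", 8), ("2.0", 16), ("3.0", 8), ("3.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{slot_bandwidth_gb_s(gen, lanes):.1f} GB/s")
```

Note that the x8 3.0 and x16 2.0 figures come out essentially the same.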

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0

Edit: Deleted.

Edit: Deleted.

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0

Here's a link to the

Here's a link to the discussion about performance at x8. Granted, it's just a few anecdotes, but it seems relevant. Some of the posts are contradictory or are probably explained by things other than the slot the graphics card is in:

http://einsteinathome.org/node/196134

Horacio
Horacio
Joined: 3 Oct 11
Posts: 205
Credit: 80557243
RAC: 0

RE: - It appears I've

Quote:
- It appears I've probably purchased the wrong GPUs for BOINC. Although the HD 7970s are great for gaming, from what I can tell Einstein's CUDA app is much faster, and SETI's OpenCL app is still in beta. The cards I have are still returnable.


Not really wrong for BOINC, just not the best for Einstein, for now... AMD GPUs are way better at double precision calcs, but Einstein doesn't need DP, and neither does SETI...

Quote:
- GPU Model: What's the best price/performance GPU to get right now if I'm mostly doing Einstein with some SETI on the side? GeForce 660 Ti? If I get a 3GB model instead of the stock 2GB, will that improve my task times or the number of simultaneous GPU tasks I can run?


If you are going to swap the AMD cards for NVIDIA, the Kepler ones (almost all of the 600 series) use less power, so in the long run you will save some money even if you paid more for them... But for performance over price the previous generation (500 series) may be a better option... for the price of a GTX 680 you can buy two 560 Tis and you will get more work done (and more RAC), but you will use a lot more electricity and have much more heat and noise in the room... (and of course it will require a really big supply to run four 560 Tis, plus a lot of careful cooling, while with two 680s you will have less trouble...)

Quote:

- Number of GPUs: I'm unsure whether it would be better to just go with two GPUs in one box (meaning I could run them with a less expensive 8-core processor and save money on the power supply) and build another system later if I really wanted to up my numbers, or if I should go for 12 cores now in anticipation of a 3 or 4-GPU box later.
- Motherboard: the P9X79 Deluxe has an x16/x16/x8 configuration. To the best of my knowledge, Einstein performance drops 30% when a card is run in an x8 slot, though I think that number was for PCIe 2.0, and 3.0 has more bandwidth. That means that, for any given GPU, I'd get 200% of a single card's performance with two GPUs and 270% with three. If I go with a P9X79 WS, it can handle 4 GPUs, but I believe they all run at x8 in that configuration, which means 280% (4 x 70%) of a single card. That hardly seems worthwhile, and seems to push for just doing a 2-GPU or 3-GPU build if Einstein is my primary target.

If I do go with a 2 GPU system to run both at 16x, and 8 cores are enough to fully use the two cards, any recommendations on processor and motherboard?

Thanks much for any advice.


Einstein's GPU apps are hybrid, which means the apps need the CPU for some calcs, and that implies a lot of data transfers over the PCIe bus. I've tested an old 9500 GT at x16 2.0 and x1 2.0 and the speed ratio was 5:3.
I'm not sure if that ratio is still valid, as the current versions of the apps have been optimized and seem to rely a lot less on the CPU, but still, the faster the bus, the better the performance. (By the way, a PCIe x8 3.0 slot has the same speed as a PCIe x16 2.0 slot, but only the 600 series uses 3.0.)
Also, while more GPUs will give more productivity, it's not linear; 2 GPUs won't double your speed even if both are at x16, as having more GPUs puts more demand on other resources (like CPU, memory, etc.)... But usually, from the point of view of the money spent, it's better to spend on more GPUs (and a more expensive MB and PSU) than on a whole extra host...
Usually, on Einstein you will need to "reserve" a CPU core for each GPU, so 8 cores will be enough even for 4 GPUs... More cores will let you run more CPU tasks, but I don't think it's worth the money (for example, one GT 430, which is a cheap, low-power, low-performance GPU, gives me about the same RAC as the 8 cores of an i7-2600). So a good option would be an Ivy Bridge i7 (the Sandy Bridge line doesn't support PCIe 3.0) and an MB with at least two PCIe x16 slots, for example an Asus P8Z77 WS or a Gigabyte GAZ77-UDP7. (I had bad experiences with ASRock and MSI, so I won't use those brands for motherboards.)
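As an illustration, a minimal sketch of that core-reservation bookkeeping (the one-core-per-GPU-task reserve is the usual rule of thumb here, not a hard requirement):

```python
# Minimal sketch of the "reserve one CPU core per GPU task" bookkeeping
# described above. The reserve value is a rule of thumb, not a project
# requirement; adjust it to what your own tasks actually use.

def free_cpu_cores(total_cores, gpus, tasks_per_gpu=1, reserve_per_task=1.0):
    """Cores left over for pure CPU tasks after feeding the GPU tasks."""
    return total_cores - gpus * tasks_per_gpu * reserve_per_task

print(free_cpu_cores(total_cores=8, gpus=4))                   # 4.0 cores free
print(free_cpu_cores(total_cores=8, gpus=2, tasks_per_gpu=2))  # 4.0 cores free
```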
You can get better prices on AMD CPUs/MBs, but AFAIK they use more power and generate more heat. (For subjective reasons I don't like AMD CPUs, so I don't have much experience with them...)

Anyway, wait for other opinions; there are really a lot of different points of view about all this, and maybe what is "best" for me is not even good for you...

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109382979485
RAC: 35950486

The first thing to realise is

The first thing to realise is that it's best to separate crunching from your normal computing needs. Design one machine for the latter and splash out to meet precisely those needs. By all means do crunching on that machine as a sideline if you wish, but don't let crunching dictate its specs.

For crunching, you will get the best bang for your buck by staying well away from high end parts of all descriptions. Below, I'll document a machine I've just built in the last few days which I think would be hard to beat in the bang for buck stakes. My comments are based solely on E@H performance so you need to understand that I'm NOT discussing BOINC projects in general.

Below are the parts and the prices I paid for them. There are plenty of things you don't need to spend money on. For example, I'm using an old PIII-generation desktop case and a 175W PSU I had lying around. The PSU has a 20-pin ATX power connector (not 24-pin, but that doesn't matter) and the 4-pin supplementary 12V CPU power connector. It will do 100W on the 12V rail, so I simply use two of them, one to power the motherboard and one to power the GPU. Works like a charm. There's no need for either a HDD or an optical drive: I use an 8GB or 16GB pen drive as the HDD and load the OS with a Samsung USB slim DVD drive ($29) that can be plugged in when needed. The extra parts I needed to buy were:

Motherboard - cheapest Sandy Bridge board I could find - Intel DH61WW for $39
CPU - Intel Celeron G550 2.6GHz dual core for $45
RAM - G.Skill 4GB kit 1333 for $20
HDD - 16GB Patriot Swing USB2 for $10
GPU - GTX650 1GB (Kepler series) for $115
OS - Linux

So the total cost of parts was just $229.

I set the machine up without the GPU to see what the power draw was like - 33W at idle and 53W at full load crunching two LV1 tasks. These take around 4.2 hours on the G550.

I then installed the GTX650 and downloaded the latest NVIDIA drivers. I reserved one CPU core and set preferences to process two GPU tasks simultaneously. I took a punt that this would be about the optimal configuration. I haven't had time yet (or the inclination) to prove otherwise as I'm very happy with the current results.

The machine has been running for just a couple of days and at full load (2 GPU tasks + 1 CPU task) it is drawing a massive 113W from the wall socket. I have a power cord that is double ended (designed to run computer and monitor from one plug) so I can easily run the two PSUs from one wall plug which is plugged into a kill-a-watt style power meter. There was plenty of room to sit the second PSU inside the case and tape it down with some duct tape.

The two GPU tasks are completing in 61 mins and the CPU task is taking maybe 5-10 mins longer than it was prior to installing the GPU. I'm using nothing but stock cooling and this host is the coolest in my whole fleet. Both PSUs are completely cold to the touch.

The RAC is currently climbing rapidly past 10K and theoretically it should stabilise at somewhere close to 24K. For a $229 machine drawing 113W it's a pretty pleasing result.
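If you want to put a number on the bang for the buck, here's a rough sketch; the $0.25/kWh electricity rate and the two-year hardware write-off are assumptions, so plug in your own figures:

```python
# Rough "bang for the buck" sketch for the build above: $229 in parts,
# 113 W at the wall, ~24K expected RAC. The $0.25/kWh rate and the
# two-year hardware write-off are assumptions; use your own numbers.

HOURS_PER_YEAR = 24 * 365

def yearly_cost(parts_usd, watts, usd_per_kwh=0.25, writeoff_years=2):
    """Hardware cost amortised per year plus one year of electricity."""
    electricity = watts / 1000 * HOURS_PER_YEAR * usd_per_kwh
    return parts_usd / writeoff_years + electricity

cost = yearly_cost(parts_usd=229, watts=113)
print(f"~${cost:.0f} per year, ~{24000 / cost:.0f} RAC per dollar per year")
```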

Quote:
- GPU Model: What's the best price/performance GPU to get right now if I'm mostly doing Einstein with some SETI on the side? GeForce 660 Ti? If I get a 3GB model instead of the stock 2GB, will that improve my task times or the number of simultaneous GPU tasks I can run?


I'd be very interested to see a better bang for buck than the GTX650, taking into account the power draw. On mine, I did try (extremely briefly) running three tasks but there was zero improvement over running two - perhaps even a loss in efficiency.

Quote:
- Number of GPUs: I'm unsure whether it would be better to just go with two GPUs in one box (meaning I could run them with a less expensive 8-core processor and save money on the power supply) and build another system later if I really wanted to up my numbers, or if I should go for 12 cores now in anticipation of a 3 or 4-GPU box later.


Work out the cost of all the high end parts you would need to set up such a beast and then work out how many single GPU budget machines you could build for (say) half the money and you would then be able to answer your own question.

Quote:
- Motherboard: the P9X79 Deluxe has an x16/x16/x8 configuration. To the best of my knowledge, Einstein performance ....


What does such a motherboard cost? If you are on a budget and want to future-proof a little, you should buy the cheapest PCIe3 board you can find. I chose not to do this at the moment because to use PCIe3 you need an i5 (NOT i3) Ivy Bridge CPU, and the cheapest one I could get at the moment was well over $200 compared with my Celeron at $45. I know there is a big difference between PCIe1.x and PCIe2 but I'm not sure there is that big a gain from PCIe2 to 3. From what many others have said, it's important to be running 16x which all el cheapo single slot boards will do. When budget CPUs that do PCIe3 are available, I'll revise my thinking.

These are just my own personal opinions and I freely admit that I'm totally biased towards the el cheapo end. These days I pay most attention to the power draw. For example, for the same $115, I also bought a GTX550 Ti. That GPU does 2 tasks in about 48 mins, but the power draw from the wall is 177W using the same basic computer setup. I won't be buying any more 550 Tis. I haven't (yet) investigated anything above the GTX650, mainly because they are more than twice the price here - may be different in other countries.
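Putting those two data points side by side, a quick tasks-per-kWh comparison (whole-system wall power, two GPU tasks at a time):

```python
# Crunching efficiency comparison from the figures above: GPU tasks
# completed per kWh drawn at the wall (whole-system power).

def tasks_per_kwh(tasks, minutes, watts):
    """How many GPU tasks a machine finishes per kWh of wall power."""
    tasks_per_hour = tasks / (minutes / 60)
    return tasks_per_hour / (watts / 1000)

print(f"GTX 650:    {tasks_per_kwh(2, 61, 113):.1f} tasks/kWh")
print(f"GTX 550 Ti: {tasks_per_kwh(2, 48, 177):.1f} tasks/kWh")
```

The 550 Ti is quicker per task but the 650 gets more done per unit of electricity, which is what matters to me.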

Cheers,
Gary.

dmike
dmike
Joined: 11 Oct 12
Posts: 76
Credit: 31369048
RAC: 0

RE: I know there is a big

Quote:
I know there is a big difference between PCIe1.x and PCIe2 but I'm not sure there is that big a gain from PCIe2 to 3.

There is zero difference from 2.0 to 3.0 (in the context of BOINC). Even if there were, it would have to be related to the difference in encoding in 3.0 and not the added bandwidth.

Quote:
From what many others have said, it's important to be running 16x which all el cheapo single slot boards will do.

This is debatable. Some say there is a difference, some say there isn't. As it stands I'm not aware of any application that can currently saturate 2.0 bandwidth, much less even begin to touch 3.0.

Having said that, a 3.0 x8 slot is equal to a 2.0 x16 in terms of bandwidth. As for x8 on 2.0, I'd like to see a real comparison of performance vs. x16 on 2.0, because my default position is that there is no difference. I doubt that a task using 0.2 of a CPU is moving > 4 GB/s and being bottlenecked by the bus.

I could be wrong!

I'm about to power down and move the 660Ti to a 2.0 x8 slot and see if there is any difference. Will report back later.
------------------
Edit:
Well, apparently my mobo doesn't want one card in a secondary slot. It only works with a card there if in SLI, so never mind I guess :)

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0

Horacio: thank you for the

Horacio: thank you for the detailed information. I looked up the 560 Ti, and it does draw a lot of power (210 W peak). The 660 Ti only draws 150 W and the 670 170 W; that is indeed a handsome difference and, as you say, I only get 3.0 support with the 600 series. That's very helpful to know for eventually building up to 4 cards.

EDIT: Regarding the below, I realized immediately after posting that I may have problems fully utilizing the GPU if the CPU is only a dual-core. The MB supports Core 2 Quad, but these are expensive ($75 & up used) on eBay, though it may still be worthwhile to upgrade the CPU. I'll leave what I typed as I originally said it anyway. Thoughts?

Gary: yours was a totally unexpected answer, and I'm glad you made it, because I have two barebones computers that I found tossed in a dumpster behind a government building while out bike riding last week. Both are HP dx7500 business minitowers and have been stripped down to the case, motherboard and Core 2 Duo CPU, but they have attached Windows OEM licenses (one Vista Business, one Win7 Pro), and the MB has a single x16 PCI-E 2.0 slot, placed such that it could accommodate a double-slot card. You've inspired me to practice my surgery on them before I work on the "big client".

The only issue with those would probably be cooling: since they are business machines, they have a fan atop the CPU and one blowing out the back, but no other space for fans. Would this be enough for a single Kepler-series card, do you think, since those cards are enclosed with their own cooling fan?

Any suggestions on power supply if I do the above?

Sid
Sid
Joined: 17 Oct 10
Posts: 160
Credit: 920862000
RAC: 285217

RE: RE: I'm about to

Quote:
Quote:

I'm about to power down and move the 660Ti to a 2.0 x8 slot and see if there is any difference. Will report back later.
------------------


From my experience it is up to 20% slower at x8 than at x16, especially if we are talking about two cards simultaneously with 4 WUs on each.
The CPU is a 2600K, so it is not the bottleneck.
dmike
dmike
Joined: 11 Oct 12
Posts: 76
Credit: 31369048
RAC: 0

RE: RE: RE: I'm

Quote:
Quote:
Quote:

I'm about to power down and move the 660Ti to a 2.0 x8 slot and see if there is any difference. Will report back later.
------------------


From my experience it is up to 20% slower at x8 than at x16, especially if we are talking about two cards simultaneously with 4 WUs on each.
The CPU is a 2600K, so it is not the bottleneck.

Well, I got it to work. Mobo has a funny little chip that you have to flip around to use the additional PCIe slots.

There was no performance difference at all at x8. Time for completion was the same 24-25 minutes as at x16.

This is with only 1 GPU running 1 work unit. I don't see how adding another GPU would slow it down, as the PCIe lanes are not shared between slots. If it slows down, then there is something else slowing it down, not the PCIe bandwidth. With 4 WUs I don't know; I went to 3 once and lost efficiency, so I am back to 1.

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0

Thanks, everyone. I

Thanks, everyone. I appreciate the 8x tests and the components advice.

Gary has ruined me for a high-end system. I am now going to use the barebones Core 2 Duo (case, motherboard, CPU, Vista Business license) I fished out of a dumpster and add:

- 250GB 7200 RPM HDD - had one on hand.
- 4GB DDR2 800 MHz RAM - $50 (paid a little more for new since I want it now, though I hate paying the "old RAM" tax).
- A Corsair Builder Series CX 500 Watt 80Plus Bronze PSU. $50
- Budget DVD-RW drive, $20
- EVGA 660 Ti Superclocked video card, $300.

Since I had to get the video card anyway, that's a $120 premium to resurrect the system. If I get tired of it, I can always replace the PSU with an el cheapo model, swap to the on-board video, and donate the computer to one of my relatives.

If it goes well, I suppose I'll start haunting Craigslist and buy all the $100 quad-core PCI-e 2.0 desktops I can find. A $50 PSU swap, a new video card, and a crunching I go again.

I'll be sure to post results later. I'm hoping to get the system up this weekend.
