Advice needed on New Build

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168,389,195
RAC: 362

After a little more research

After a little more research and shopping, I went with a GTX650 overclock at $109, and an Antec VP-450 PSU for $35.

dmike
dmike
Joined: 11 Oct 12
Posts: 76
Credit: 31,369,048
RAC: 0

BTW, here's a halfway decent

BTW, here's a halfway decent site to compare graphics cards:
http://www.hwcompare.com/

Yeah, it's hard to debate against the 650 considering low power consumption and price. For CUDA crunching the 660 Ti will have almost (but not quite) double the performance of the 650 (I'm basing this off my experience with the GTX 260 maxcore in my other machine, which the 650 is close to in performance and spec).

But if you had two of them, you'd outperform the 660 Ti by itself, it would cost you almost $100 less, AND you'd be drawing less power. This assumes, of course, that both 650 cards perform as if each were a single card in its own box (which may or may not be the case; I don't know).

It really gets to the point where the price of a card/system becomes disproportionately high relative to the performance increase it offers. As Gary has ultimately pointed out, the real goal is the best performance per dollar spent, not the best overall performance.
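To put the performance-per-dollar idea in concrete terms, here's a quick sketch; the prices are roughly the ones mentioned in this thread, and the relative-performance numbers are rough guesses, not benchmarks:

```python
# Rough performance-per-dollar comparison. Prices are approximate street
# prices from this thread; relative_perf values are guesses, not benchmarks.
cards = {
    "GTX 650":    {"price": 109, "relative_perf": 1.0},
    "2x GTX 650": {"price": 218, "relative_perf": 2.0},  # assumes perfect scaling
    "GTX 660 Ti": {"price": 300, "relative_perf": 1.9},
}

for name, c in cards.items():
    per_dollar = c["relative_perf"] / c["price"]
    print(f"{name:10s} {per_dollar * 1000:5.2f} perf units per $1000")
```

Under those assumptions, two 650s match a single 650 per dollar and beat the 660 Ti, which is the point about diminishing returns above.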

Great advice from Gary, and good choices on your part.

Oh, and I'm with you on the RAM tax heh. I just get sick looking at upgrading my aging DDR2 system where on DDR3 I could get double the RAM at half the price, no joke.

Jeroen
Jeroen
Joined: 25 Nov 05
Posts: 379
Credit: 738,419,435
RAC: 0

RE: - Motherboard: the

Quote:

- Motherboard: the P9X79 Deluxe has a 16x16x8 configuration. To the best of my knowledge, Einstein performance drops 30% when a card is run in an 8x slot, though I think that number was for PCI-E 2.0, and 3.0 has more bandwidth. That means that, for any given GPU, I'll get 200% performance with two GPUs, and 270% with three. If I go with a P9X79 WS, it can handle 4 GPUs, but I believe they all run at 8x in that configuration, which means 280% performance (4 x 70) of a single card. That hardly seems worthwhile, and seems to push for just doing a 2-GPU or 3-GPU build if Einstein is my primary target.

Hello,

Available PCI-E bandwidth between the CPU and GPU has a significant impact on the Einstein BRP4 application, because the application uses both the CPU and the GPU. The more bandwidth you can give the application, the better the processing time. For example, with 2nd-generation and newer Core i7 processors, installing the card in a PCI-E 3.0 x16 slot rather than a PCI-E 2.0 x16 slot significantly improves BRP4 processing times. The same goes for moving from an x8 to an x16 slot.

Another example is the X58 platform. I have found that, in addition to running the cards in PCI-E 2.0 x16 slots, increasing the QPI link speed can improve processing times in multi-GPU configurations. This is because the PCI-E controller sits in the X58 chipset, and the QPI link provides the connectivity between the CPU and the chipset. By raising the base clock, I can push the QPI link speed to about 7.2 GT/s and get better BRP4 performance in a multi-GPU configuration; this helps eliminate bottlenecks between the CPU and the GPUs. I suspect the same would apply to HyperTransport links on AMD CPUs, but I have not tested that myself.
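For anyone who wants to sanity-check those link speeds, the theoretical per-direction bandwidths can be worked out from the transfer rates and encoding overheads (a back-of-the-envelope sketch; real-world throughput will be lower due to protocol overhead):

```python
# Theoretical per-direction bandwidth in GB/s. PCIe 1.x/2.0 use 8b/10b
# encoding (80% efficient); PCIe 3.0 uses 128b/130b (~98.5% efficient).
def pcie_bandwidth_gbs(gen, lanes):
    gt_per_s = {1: 2.5, 2: 5.0, 3: 8.0}[gen]   # transfer rate per lane
    efficiency = 8 / 10 if gen < 3 else 128 / 130
    return gt_per_s * efficiency / 8 * lanes    # bits -> bytes

# QPI moves 2 bytes of payload per transfer in each direction.
def qpi_bandwidth_gbs(gt_per_s):
    return gt_per_s * 2

print(pcie_bandwidth_gbs(2, 16))   # 8.0 GB/s    (PCIe 2.0 x16)
print(pcie_bandwidth_gbs(3, 16))   # ~15.75 GB/s (PCIe 3.0 x16)
print(qpi_bandwidth_gbs(7.2))      # 14.4 GB/s   (the overclocked QPI above)
```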

MAGIC Quantum Mechanic
MAGIC Quantum M...
Joined: 18 Jan 05
Posts: 1,292
Credit: 407,857,927
RAC: 48,024

RE: BTW, here's a halfway

Quote:

BTW, here's a halfway decent site to compare graphics cards:
http://www.hwcompare.com/

Oh, and I'm with you on the RAM tax heh. I just get sick looking at upgrading my aging DDR2 system where on DDR3 I could get double the RAM at half the price, no joke.

Yeah, I agree about DDR2 prices compared to DDR3, not to mention 8GB sticks versus those 1GB ones, dmike.

hwcompare is good for the card comparisons and Amazon prices, but I still check prices at Tiger Direct, and they always end up getting my orders.

I got my cards and PSUs there for a lower price, and I add RAM to my orders since the price is pretty much the same... but I have just been going through my PC stash pile for DDR2, since I won't pay that much (though I do see those cheap Kingstons at a fair price for 2GB DDR2).

I have an old stack of DDR's that will never be used again.

(Btw Jeroen, your ID: 5146669 host is quite the machine)


dmike
dmike
Joined: 11 Oct 12
Posts: 76
Credit: 31,369,048
RAC: 0

I suspect that multi GPU

I suspect that multi GPU configurations would be different from a single GPU configuration. I'm no expert and it seems like you've a lot more experience with this than I do, Jeroen.

But my observation on my machine is that a 660 Ti running in a 2.0 slot shows no difference in processing time between x8 and x16. I surmise that the card is not maxing out the x8 2.0 bandwidth; otherwise I'd see a performance change going from one slot to the other. Extrapolating from that, moving to an 8 GT/s 3.0 slot should also offer zero gain, since the extra bandwidth makes no difference when the bus is not the bottleneck even at x8 2.0.

Having said that, I have no way to see this in action with multi-GPU setups. I tend to believe what you're saying, though, and anything above one card may well benefit from higher bus bandwidth. I'm just saying that in a single-card setup with what I have, that's not the case.

Horacio
Horacio
Joined: 3 Oct 11
Posts: 205
Credit: 80,557,243
RAC: 0

RE: I suspect that multi

Quote:

I suspect that multi GPU configurations would be different from a single GPU configuration. I'm no expert and it seems like you've a lot more experience with this than I do, Jeroen.

But my observation on my machine is that a 660 Ti running in a 2.0 slot shows no difference in processing time between x8 and x16. I surmise that the card is not maxing out the x8 2.0 bandwidth; otherwise I'd see a performance change going from one slot to the other. Extrapolating from that, moving to an 8 GT/s 3.0 slot should also offer zero gain, since the extra bandwidth makes no difference when the bus is not the bottleneck even at x8 2.0.

Having said that, I have no way to see this in action with multi-GPU setups. I tend to believe what you're saying, though, and anything above one card may well benefit from higher bus bandwidth. I'm just saying that in a single-card setup with what I have, that's not the case.

You tested it doing just one WU at a time, and that, combined with the new version of the apps (with reduced CPU usage), could be the reason you're not seeing any change in performance...
Can you test it again on both buses doing 2 WUs? It would be interesting to see whether you get better output doing two at a time, and whether any improvement is the same on both slots...

mdawson
mdawson
Joined: 23 Feb 05
Posts: 77
Credit: 6,575,069
RAC: 0

Alec, I'll throw in my

Alec,

I'll throw in my $.02 worth. My machine is of recent vintage and build, but my design parameters were different from yours. I wanted a gaming machine that is also my main work machine and a cruncher. I have an X58-series board and a GTX 680 powered by an i7 960. The first problem I ran into was CPU cooling. You've chosen the H100, which is an excellent choice; I have the H80, only because of space limitations. At the time I built this rig, E@H wasn't very efficient at CPU/GPU tasks, so I opted for the 4-core (8 virtual cores) processor with 12GB RAM so I could relegate E@H to the CPUs and Collatz to the GPU. I'm not so concerned with RAC as I am with productivity. I felt that running two projects would leave at least one of them running when server-side hiccups occurred. That was frequently the case with SETI, and it's why I dropped out of that project.

Cooling is going to be your biggest problem. You've got the CPU covered, but more GPUs mean much more heat. Then there's also PCI-E bandwidth and speed. You mentioned a motherboard that runs 16x/16x/8x. I don't see how you could squeeze four GPUs in there unless you had two dual-GPU cards. Once you max out the 16x bandwidth, everything drops to 8x, and that is just too slow. Personally, I'd recommend only two video cards so that both run at 16x. If they are dual-GPU, so much the better, but you pay handsomely for that. Even with one GPU in my system, internal heat is high; it is being vented, but I have to leave the case side off.

Configuring your system is another matter. You'll need to get good at manipulating CPU affinity. For me, I crunch E@H on 6 cores, leaving two open: one for general Windows programs, the other for Windows itself. My system seems to need one CPU core just to manage all the rest. On all cores except core 0, my CPU utilization is in the high 90% range. Core 0 varies with whatever program[s] I'm running, but it has never exceeded about 88%, even while gaming. If you're exclusively crunching E@H, you will want to reserve one CPU core for each task running on the GPU. To me, that is a lot of wasted CPU time, which is another reason I don't crunch E@H GPU tasks. My system runs full bore all the time with very few, if any, hiccups. I don't overclock anything, and productivity is constant and reliable.
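On Linux, the core-0 reservation described above can be scripted; here's a minimal sketch using Python's scheduler-affinity API (Linux-only; on Windows you'd use Task Manager or `start /affinity` instead):

```python
import os

# Keep core 0 free for the OS and pin this process to the remaining cores.
# Linux-only sketch: os.sched_setaffinity is not available on Windows/macOS.
def reserve_core_zero():
    if not hasattr(os, "sched_setaffinity"):
        return None                        # affinity API not available here
    cores = os.sched_getaffinity(0)        # cores this process may use now
    crunch = {c for c in cores if c != 0}
    if crunch:                             # never strand a single-core box
        os.sched_setaffinity(0, crunch)
    return os.sched_getaffinity(0)

print(reserve_core_zero())
```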

Power supply is another issue. The more GPUs you have, the more power you'll need, so get a decently sized power supply. I don't know what my power draw is, but then I don't really care. Yeah, electricity is getting expensive, but it's not THAT bad.

I don't know if any of this will help you, but I wanted to share my experience. Good luck with the new machine[s]. I'm anxious to hear what you wind up doing and what some of your crunch times are. One last thing: if you happen to run two PSUs, I would think you'd need to tie the grounds together and keep the positives separate. This is recommended practice when I use two power supplies on some control systems that I install (AMX/Crestron etc.).

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168,389,195
RAC: 362

I have the first machine

I have the first machine running now:

- Core2Duo E7400 @ 2.80 GHz
- Vista64 Business (only because the motherboard / CPU came with an OEM license - you can probably hear it oinking all the way from Arizona if you listen closely).
- 4GB DDR2 @ 800Mhz
- GTX650 overclock (about 10% boost in clock speed), 1GB model.

Two GPU tasks with one core free are coming in at 68 minutes each, 7 minutes slower than Gary's budget build using the same, non-overclocked card. His CPU is about 10% faster than mine and he's running Linux, much leaner than my bloated OS, with a faster bus and RAM. I haven't measured power draw, but the cores are running in the low 40's Celsius on an office machine with a heatsink / fan on the CPU and one exhaust fan, no other cooling (though I did re-do the thermal paste with Arctic Silver).
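Back-of-the-envelope, those task times translate into daily throughput like this (simple arithmetic; the 68-minute figure is from above, and Gary's times are taken as roughly 61 minutes):

```python
# Daily task throughput for N concurrent GPU tasks at a given task time.
def tasks_per_day(minutes_per_task, concurrent_tasks):
    return 24 * 60 / minutes_per_task * concurrent_tasks

print(round(tasks_per_day(68, 2), 1))  # ~42.4 tasks/day on this build
print(round(tasks_per_day(61, 2), 1))  # ~47.2 tasks/day on Gary's build
```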

I've just ordered parts to rebuild the second machine with the same specs; this one comes with a Win7 Pro license.

My concern now is figuring out how many of these budget crunchers are enough. I can tell it's going to be very tempting to build "just one more" every time I see an orphaned computer with a PCIe 16x slot.

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3,515
Credit: 450,180,454
RAC: 86,452

RE: My concern now is

Quote:

My concern now is figuring out how many of these budget crunchers are enough. I can tell it's going to be very tempting to build "just one more" every time I see an orphaned computer with a PCIe 16x slot.

I know what you mean :-). The only cure for me against this is looking at the power bill ;-), but this cure might be more effective in Europe (especially Germany) than in the US.

Still I got myself a "new" used PC recently on a budget, it has

  • two Intel Xeon CPU E5405 @ 2.00GHz (so 8 cores in total)
  • two PCIe 2.0 16x slots and a decent power supply
  • 8GB RAM (max. 32 GB)
  • 1 TB HD
for around 400 EUR (w/ tax). It has a really nice case (giant & quiet fans, no screws, everything just snaps and "clicks" in), I love it, it will heat my attic in the winter. With two budget GPU cards it could easily make it into the top 100 PCs (equivalent to just above 38k RAC at the moment).

So if energy efficiency is not your highest priority, used Core2-era PCs with older, used GPUs can get you some decent bang for the buck.

Cheers
HB
