Looking to get started with GPU processing

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4812894156
RAC: 101403

RE: I know this card is

Quote:

I know this card is laughable, but I'm pleased.

No no... don't laugh at a 750Ti. Not ever.

It's a good card, it just isn't "mighty-mighty."

It sips power and does useful work.

I'm happy to see you've dipped a toe in and gotten both the Linux thing and the GPU crunching thing going.

But mostly it is a good thing that it's fun. When you can combine contributing with fun, that's a win-win situation.

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

That card is loving that cuda

That card is loving that cuda 55 app.

Your times for the Parkes cuda 32 tasks were anywhere from 8600 sec all the way up to 20,000 sec.

Looks like your average now for the cuda 55 is around 6600 sec

So you are looking at a decrease of roughly 23% in time to complete at minimum (compared with the 8600 s tasks; even better against the longer ones).
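If you want to check the arithmetic yourself, here it is as a quick shell one-liner, using the 8600 s and 6600 s figures above:

# Percent decrease in run time, cuda 32 -> cuda 55 (best-case cuda 32 time).
echo "scale=3; (8600-6600)/8600 * 100" | bc   # prints 23.200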

Have fun..

AgentB
Joined: 17 Mar 12
Posts: 915
Credit: 513211304
RAC: 0

RE: Looks like your

Quote:

Looks like your average now for the cuda 55 is around 6600 sec

And the times are looking stable, with no invalids or errors. All good so far.

Just keep an eye on the temperatures for a day or two - that host will do more crunching now in a day than it has in years.

John Reed
Joined: 23 Oct 10
Posts: 25
Credit: 11079168
RAC: 0

Several folks have mentioned

Several folks have mentioned that an AMD CPU will give the same GPU performance as an Intel CPU for a much lower price. I'm having tremendous difficulty finding an AMD mobo that supports PCI Express 3.0 at the lower price though. The NVIDIA cards that are worth having are 3.0. I know they're backwards compatible, but is there not a performance impact for E@H running a 3.0 card in a 2.0 slot?

Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1702989778
RAC: 0

RE: is there not a

Quote:
is there not a performance impact for E@H running a 3.0 card in a 2.0 slot?

I believe some general conclusions can be drawn from this:
http://tpucdn.com/reviews/Intel/Ivy_Bridge_PCI-Express_Scaling/images/perfrel.gif

AgentB
Joined: 17 Mar 12
Posts: 915
Credit: 513211304
RAC: 0

RE: Several folks have

Quote:
Several folks have mentioned that an AMD CPU will give the same GPU performance as an Intel CPU for a much lower price. I'm having tremendous difficulty finding an AMD mobo that supports PCI Express 3.0 at the lower price though. The NVIDIA cards that are worth having are 3.0. I know they're backwards compatible, but is there not a performance impact for E@H running a 3.0 card in a 2.0 slot?

I'm not going to comment on AMD mobos; I know nothing of such things.

However, in the past the BRP applications were heavily PCIe-bound and 3.0 x16 was very significant. Now, not so much - but who is to say what happens in the future; different apps will do things differently. Today an x16 and an x4 slot are almost indistinguishable RAC-wise.
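If you are curious what link a card has actually negotiated, nvidia-smi can report it - a quick sketch, run on the host in question:

# Show the current PCIe generation and lane width for each GPU.
nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.width.current --format=csv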

PS: I noticed you are running stably - so it's time to try (if you haven't already) the "GPU utilization factor of BRP apps" setting on the E@H account preferences page. Have you tried running at 0.5 (two tasks per GPU)?
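If you would rather set it locally than on the website, BOINC's app_config.xml can do the same job. A sketch only - the project path and app name below are examples, so check your own install:

# Hypothetical example: run two BRP tasks per GPU via a local app_config.xml.
cat > ~/BOINC/projects/einstein.phys.uwm.edu/app_config.xml <<'EOF'
<app_config>
  <app>
    <name>einsteinbinary_BRP6</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
EOF

Then tell the client to re-read its config files (or restart it) for the change to take effect.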

Edit: if I were searching for a mobo, www.newegg.com is very good for filtering by form factor / PCIe / chipset etc. That will at least give you a short-list to chase down.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109398400004
RAC: 35719711

RE: Several folks have

Quote:
Several folks have mentioned that an AMD CPU will give the same GPU performance as an Intel CPU for a much lower price.


Don't make the mistake of equating purchase price with what it really costs to run a cruncher. The hidden cost of power is something you should factor in. AMD CPUs use more power, run hotter, need more attention to proper cooling, and produce a lower CPU task output, all while enjoying that lower purchase price. Intel charge more because they can, and until AMD 'catch up' in the areas where they are lagging, that will continue to be the case.

So, if you are not interested in crunching CPU tasks, then yes, you can get a very similar GPU task output with either type of CPU.

Here's an example you might be interested in. I have two machines, each with a single HD7850 GPU running 4x (four GPU tasks at a time). The GPU crunch times on each are pretty much the same. One machine has a six-core AMD FX-6300 that runs 3 CPU tasks and keeps 3 cores free. The other has an Intel Pentium dual-core G3258 (a Haswell refresh CPU) running 1 CPU task with one core free. That one CPU task takes about 4.6 hours on average; a CPU task on the FX-6300 takes about 11.5 hours. The G3258 cost less than the FX-6300 and pulls about 165 watts from the wall, while the FX-6300 pulls around 220-230 watts. So which one of these was my best investment?
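To put rough numbers on it, here is the CPU task output per watt from the figures above (whole-box power, so the GPU's share muddies things - treat it as indicative only):

# CPU tasks per day, divided by wall power.
echo "scale=4; (24/4.6)/165" | bc      # G3258:  1 task at a time, 4.6 h, 165 W   -> ~.0316
echo "scale=4; (3*24/11.5)/225" | bc   # FX-6300: 3 tasks at a time, 11.5 h, ~225 W -> ~.0278

The dual core comes out ahead per watt as well as on purchase price.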

Of course, there are many other things to consider as well. Nothing is ever as simple as it seems, and the factors change over time, so what is 'right' today may very well be quite different a couple of years from now. When I planned the above machines a couple of years ago, the 'right' thing for me (for a whole bunch of reasons beyond those mentioned above) was a budget single-slot PCI-e V2 board and a GPU with a 256-bit wide memory interface. That has all changed now - both the app and the hardware - so if I were building a farm today I would be looking for at least two PCI-e V2 slots with x8 usable lanes on each; you may even get nearly as good a result from x4. I would pay most attention to power consumption and build a prototype to test real-world performance. I would tend to favour Intel, even though it means a higher capital cost, simply because efficient CPU crunching is important to me.

Quote:
I'm having tremendous difficulty finding an AMD mobo that supports PCI Express 3.0 at the lower price though.


Don't get hung up on PCI-e 3. I'm running GPUs in old PCI-e 1.x motherboards and, because of the optimisations made to the app earlier in the year, I'm now getting essentially the same performance as I do from the same GPU in much more modern boards. This is an example of a factor that has changed dramatically in the last couple of years.

Quote:
The NVIDIA cards that are worth having are 3.0. I know they're backwards compatible, but is there not a performance impact for E@H running a 3.0 card in a 2.0 slot?


I don't see any problems with single cards. I don't think there would be a problem with dual cards. It's a completely unknown ball game for me at higher card densities :-). You need to ask the people who run them.

Cheers,
Gary.

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4812894156
RAC: 101403

RE: Several folks have

Quote:
Several folks have mentioned that an AMD CPU will give the same GPU performance as an Intel CPU for a much lower price. I'm having tremendous difficulty finding an AMD mobo that supports PCI Express 3.0 at the lower price though. The NVIDIA cards that are worth having are 3.0. I know they're backwards compatible, but is there not a performance impact for E@H running a 3.0 card in a 2.0 slot?

I like Gary's reply: "Nothing is ever as simple as it seems."

Modern AMD CPUs are *lousy* at crunching compared to modern Intel CPUs. Period. End of story. Done. That's pretty simple.

AMD CPUs on some motherboards are better than some Intels on some motherboards for GPU computing, but the best CPUs and the best motherboards for GPU computing are Intels.

For combination CPU/GPU computing, with two GPUs, there is no AMD CPU which will compete with a quad-core Intel.

For strictly GPU computing, there are combinations of AMD CPUs and motherboards which will yield essentially the same performance as some Intel CPUs and motherboards for less money.

Gary makes a point about power consumption. He's right. The Intel CPUs are amazingly more efficient than the AMDs (remember, this AMD architecture is seriously long in the tooth and is due to be replaced soon). However, if you aren't crunching on an AMD and are just using it to shuttle work to and from the PCIe bus, it runs in a lower power state and doesn't get as hot or burn as much electricity as it would if you were crunching on it.

In my opinion, and it is merely an opinion based on nothing but my not-too-scientifically studied approach with my own machines, buying an AMD FX-based CPU and crunching on it is an error. It's not worth the heat/noise/power consumption.

However, if you KNOW you aren't going to CPU-crunch and you JUST want to GPU-crunch, then an 8-core AMD isn't going to doom your efforts. If you look at the top 20 hosts, you can see that #3, #4, #8, #12, and #20 are all AMD 8-core platforms. The point is that the AMD CPU isn't really holding those machines back - or if it is, the effect is not enough to matter much.

By the same token, there are four 4-core i5s in the same Top 20, some ahead of and some behind the AMD 8-cores.

Is it better to get another 5% of performance from each of two GPUs and add one or two CPU tasks while burning less power, or to get 95% of the performance from three GPUs, run zero CPU tasks, and burn 35% more power?

I refer you back to the response that Gary gave.

John Reed
Joined: 23 Oct 10
Posts: 25
Credit: 11079168
RAC: 0

RE: PS: I noticed you

Quote:

PS: I noticed you are running stably - so it's time to try (if you haven't already) the "GPU utilization factor of BRP apps" setting on the E@H account preferences page. Have you tried running at 0.5 (two tasks per GPU)?


No, I didn't know I could do that (probably because I heeded the danger warning).

I did it a couple hours ago. I see I'm now running two BRP tasks. One says "1 NVIDIA GPU" and the other says "0.5 NVIDIA GPUs".

I don't understand how a GPU can run two tasks simultaneously.

Considering I'm now doing something "dangerous", I figured this would be a good time to learn how to check the temperature with something more accurate than my finger. Searching turned up the command "nvidia-smi -q -d temperature", which reports 60 C. It also tells me that the slowdown temp is 96 C.
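For anyone else doing the same, the tool can also poll continuously; this variant seems to work:

# Re-run the temperature query every 10 seconds (Ctrl-C to stop).
watch -n 10 nvidia-smi --query-gpu=temperature.gpu,utilization.gpu --format=csv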

I guess I'm doing alright? Does it mean I could or should decrease my utilization factor further?

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Actually both of those work

Actually, both of those work units are 0.5 Nvidia GPUs.

Even though one of them says 1 Nvidia GPU, it really is 0.5.

That happens sometimes when you change things but don't restart the BOINC manager; it will keep showing the old information for the work unit.

If (you don't have to do this) you were to shut BOINC down and then start it back up again, the new info would show and it would read 0.5.

If you leave it alone, it will finish without problems, and when a new work unit starts it will show the correct 0.5 Nvidia GPUs.
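(On Linux the client usually runs as a service, so restarting it is something like the line below - the service name may differ by distro:)

# Restart the BOINC client daemon (Debian/Ubuntu-style service name assumed).
sudo service boinc-client restart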

As far as temps go, I use Precision X, which I downloaded from EVGA (I prefer EVGA in my machines, but that's personal preference). Depending on who made your GPU, they should have included a disc (hopefully) with a fan-control utility.

With my 750s I used to set the fan curve to hit 100% fan at a max temp of 75 C.

My low point was 50% fan at 50 C.
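There is no Precision X on Linux, but nvidia-settings can set a manual fan speed if the driver allows it. A sketch only - it needs the Coolbits option enabled in xorg.conf, and the attribute name varies with driver version (older drivers use GPUCurrentFanSpeed):

# Enable manual fan control and set the first GPU's fan to 60%.
nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=60"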

For the 750Ti, about 2 work units per card is probably the best setting.

Look at it this way: if I do 1 work unit in 2 hours, how long does it take to complete 2 at a time?

If it's less than double the time of running 1, then it makes sense.

Now if I did 3 at a time, how long does it take? My experience shows it doesn't make sense to do 3 with that card. Some of the higher-end cards, maybe, but you have to plot out time-to-complete vs. the number of instances.

If you did 4 at a time, you would see it takes an extremely long time to complete and isn't worth the hassle; you'd be better off running fewer at faster times.
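As a worked example with made-up but plausible times:

# Throughput in tasks/hour at different multiplicities (times are illustrative).
echo "scale=3; 1/2.0" | bc   # 1 at a time, 2.0 h each -> .500 tasks/h
echo "scale=3; 2/3.4" | bc   # 2 at a time, 3.4 h each -> .588 tasks/h (a clear win)
echo "scale=3; 3/5.9" | bc   # 3 at a time, 5.9 h each -> .508 tasks/h (barely better than 1x)

The sweet spot is wherever tasks/hour peaks - on a 750Ti that's usually 2.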
