I can only run 2 BRP tasks with my new GPUs??

Horacio
Joined: 3 Oct 11
Posts: 205
Credit: 80557243
RAC: 0


Quote:
And then wait a while. The units-at-a-time command passes from the Einstein server to your system in new work units--unlike most configuration parameters, I think.

In fact, this setting is updated on the host only when the host requests and gets new BRP WUs. Once the new value is received, the host applies it to all BRP WUs, no matter whether they were sent before or after the setting was changed.
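A related mechanism: newer BOINC clients (roughly 7.0.40 and later; the 7.0.31 client mentioned further down predates this) also accept a local app_config.xml in the project directory, which takes effect on "Read config files" without waiting for new work. A minimal sketch; the app name below is an assumption, so check client_state.xml for the actual short name on your host:

<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>  <!-- assumed app name; verify in client_state.xml -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>      <!-- 0.5 = two tasks per GPU, like the web preference -->
      <cpu_usage>0.2</cpu_usage>      <!-- fraction of a CPU budgeted per task -->
    </gpu_versions>
  </app>
</app_config>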

SJC_Steve
Joined: 20 Jul 11
Posts: 28
Credit: 475253724
RAC: 720166


Here is my complete set of E@H preferences:

Quote:
Processor usage

Suspend work while computer is on battery power? (matters only for portable computers): no
Suspend work while computer is in use?: no
Suspend GPU work while computer is in use? (enforced by version 6.6.21+): no
('In use' means mouse/keyboard activity in the last 3 minutes.)
Suspend work if no mouse/keyboard activity in last (needed to enter low-power mode on some computers): --- minutes
Suspend work if CPU usage is above (0 means no restriction; enforced by version 6.10.30+): --- %
Do work only between the hours of (no restriction if equal): ---
Leave tasks in memory while suspended? (suspended tasks will consume swap space if 'yes'): yes
Switch between tasks every (recommended: 60 minutes): 60 minutes
On multiprocessors, use at most: 10 processors
On multiprocessors, use at most (enforced by version 6.1+): 100% of the processors
Use at most (can be used to reduce CPU heat): 100% of CPU time

As you can see, there is nothing relating to GPU utilization. Could this be due to my OS being Linux rather than Windows?

Thanks,
Steve

SJC_Steve
Joined: 20 Jul 11
Posts: 28
Credit: 475253724
RAC: 720166


Sorry, I finally found the settings for GPU utilization. I was looking under "Computer Preferences" and not under "E@H Preferences".

Thanks for all the help,
Steve

Jonatan
Joined: 20 Jun 10
Posts: 66
Credit: 25782906
RAC: 0


I will try setting the GPU utilization factor of BRP apps to 0.5... In principle, if you set it to 0.5, BOINC runs two tasks per GPU.

I understand this...

And another question...

If I increase to two tasks per GPU, do I divide the power of the GPU in two?

Right now, the GPU plus about 2% of a CPU finishes a BRP CUDA task in approximately 50 minutes...

With two tasks per GPU, will each BRP CUDA task finish in 1:40??

And if so, which option is best?

Old man
Joined: 28 Mar 10
Posts: 4
Credit: 4479478
RAC: 0


Quote:

Right now, the GPU plus about 2% of a CPU finishes a BRP CUDA task in approximately 50 minutes...

With two tasks per GPU, will each BRP CUDA task finish in 1:40??

And if so, which option is best?

Ehh, not 1:40. Maybe about 1:20 - 1:35.

The GPU runs both tasks at the same time. One task doesn't use all of the GPU's performance, because a BRP CUDA task needs the CPU to feed the GPU.
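That estimate can be sanity-checked against the runtimes quoted in this thread; a quick sketch in Python (the 85-minute figure is just the midpoint of the 1:20-1:35 guess, an assumption rather than a measurement):

# Rough throughput for 1 vs 2 BRP tasks per GPU, using this thread's numbers.
single_task_min = 50                              # ~50 min for one task per GPU
double_task_min = 85                              # midpoint of the 1:20-1:35 guess

tasks_per_day_1x = 24 * 60 / single_task_min      # ~28.8 tasks/day
tasks_per_day_2x = 2 * 24 * 60 / double_task_min  # ~33.9 tasks/day
print(f"gain: {tasks_per_day_2x / tasks_per_day_1x - 1:.0%}")  # ~18%

Each task takes longer, but the GPU turns out more tasks per day overall.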

Jonatan
Joined: 20 Jun 10
Posts: 66
Credit: 25782906
RAC: 0


Quote:

Ehh, not 1:40. Maybe about 1:20 - 1:35.

The GPU runs both tasks at the same time. One task doesn't use all of the GPU's performance, because a BRP CUDA task needs the CPU to feed the GPU.

Yes, but the doubt I had was whether, if I increase the tasks per GPU, the execution time also increases proportionally...

And I can deduce that it does.

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7024924931
RAC: 1806140


Quote:

Right now, the GPU plus about 2% of a CPU finishes a BRP CUDA task in approximately 50 minutes...

With two tasks per GPU, will each BRP CUDA task finish in 1:40??

And if so, which option is best?

Most likely your hypothetical times will not turn out to be true; rather, your two-task case will come in at less than double.

However, to answer your question exactly as asked, for the numbers you cited your system would likely be more productive and use less power running a single CUDA task. There is some extra CPU consumption to support n-fold GPU tasking, which gets worse for higher values of n. Assuming you are running BOINC tasks on your CPU as well as your GPU, you lose productivity on the CPU side--which needs to be compensated by a better than break-even GPU tradeoff.

The right answer is heavily dependent both on your system hardware, and on the particular applications. But I think most people with GPUs capable of running two tasks find that beats one, often by a rather appreciable margin. Further gains from running more than two are usually much, much less. Many people running n-fold GPU believe they have observed their system to be more productive if they constrain the number of ordinary CPU tasks to be fewer than their number of (virtual) cores. The most commonly cited case is to back down by one--though I've seen cases where backing down more than that is helpful. This helps in the case that the improved GPU productivity from decreased latency waiting for CPU service outweighs the lost CPU productivity--which is quite commonly true.
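As a toy illustration of that tradeoff, here is a sketch in Python; every rate below is a made-up placeholder, meant only to show the shape of the comparison, not real Einstein@Home numbers:

# Toy tradeoff: does freeing one CPU core for the GPU pay off?
# All rates are hypothetical placeholders (arbitrary 'work per hour' units).
cpu_rate_per_core = 10.0   # one CPU core running ordinary CPU tasks
gpu_rate_fed = 100.0       # GPU rate when its feeder core is never starved
gpu_rate_starved = 85.0    # GPU rate when every core also runs CPU tasks

free_one_core = 3 * cpu_rate_per_core + gpu_rate_fed    # quad core, 1 core free
use_all_cores = 4 * cpu_rate_per_core + gpu_rate_starved
print(free_one_core, use_all_cores)  # 130.0 vs 125.0: freeing a core wins here

With different placeholder rates the conclusion flips, which is exactly why the experiment matters.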

To get the best result, you really need to experiment on your particular system, with the particular applications of interest to you. But I'd suggest that an early test point should be running two GPU tasks by setting the Einstein@Home preference item:
GPU utilization factor of BRP apps = 0.5
combined with restricting the number of BOINC CPU applications to one fewer than your number of virtual cores, using the Computing Preference item:
On multiprocessors, use at most 75% of the processors
on a quad-core system, for example (either a physical quad-core without Hyper-Threading, or a physical dual-core with Hyper-Threading).
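The percentage that frees exactly one virtual core depends on the core count. A hypothetical helper (the function is made up for illustration, and it assumes the client floors N% times the core count when computing usable cores):

import math

def pct_for_free_cores(n_cores: int, free: int = 1) -> int:
    # Smallest whole "use at most N% of processors" value that still
    # yields n_cores - free usable cores, assuming usable cores are
    # computed as floor(pct / 100 * n_cores).
    return math.ceil((n_cores - free) / n_cores * 100)

for n in (2, 4, 6, 8):
    print(f"{n} virtual cores -> use at most {pct_for_free_cores(n)}%")
# 4 virtual cores -> 75%, matching the quad-core example above.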

Alex
Joined: 1 Mar 05
Posts: 451
Credit: 500257902
RAC: 206688


Quote:


If I increase to two tasks per GPU, do I divide the power of the GPU in two?

Right now, the GPU plus about 2% of a CPU finishes a BRP CUDA task in approximately 50 minutes...

With two tasks per GPU, will each BRP CUDA task finish in 1:40??

And if so, which option is best?

Since I also have a GTX550Ti, I can tell you:
optimum performance is running 2 WUs in parallel; it increases GPU usage from ~72% to near 90%.
Runtime is ~4140 s with this setting; it will be faster with the new 1.28 app (which can already be tested at Albert).
Running 2 WUs is quite good for x16 PCIe 2.0 slots; it might be better with PCIe 3.0 slots and cards with a wider memory interface. The GTX550Ti has a 192-bit memory bus.

i3, Win7/64, BOINC 7.0.31, driver 304.79, always one CPU core kept free.

Alexander
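Alexander's utilization figures also allow a rough cross-check, under the simplifying assumption that throughput scales with GPU busy time (a model, not a measurement):

# Rough model: throughput ~ GPU utilization (simplifying assumption).
gpu_busy_1x = 0.72                 # one task: ~72% GPU usage (quoted above)
gpu_busy_2x = 0.90                 # two tasks: ~90% GPU usage (quoted above)
est_gain = gpu_busy_2x / gpu_busy_1x - 1
print(f"estimated throughput gain: {est_gain:.0%}")     # ~25%

# Cross-check with the quoted ~4140 s runtime when two tasks share the GPU:
pair_runtime_s = 4140
implied_1x_s = pair_runtime_s / (1 + est_gain)          # ~3300 s if the model holds
print(f"implied single-task runtime: {implied_1x_s:.0f} s")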

Jonatan
Joined: 20 Jun 10
Posts: 66
Credit: 25782906
RAC: 0


I will continue with two BRP tasks per GPU... It is the best option; otherwise it will form a bottleneck at the graphics cards...

Thanks all, and congratulations on the seven newly discovered pulsars... Yes we CAN!!!

Fred J. Verster
Joined: 27 Apr 08
Posts: 118
Credit: 22451438
RAC: 0


Quote:

I will continue with two BRP tasks per GPU... It is the best option; otherwise it will form a bottleneck at the graphics cards...

Thanks all, and congratulations on the seven newly discovered pulsars... Yes we CAN!!!

A bottleneck?

I just started using an i7-2600 + 2 ATI HD5870s, and I am also running 2 instances per GPU on a GTX470 and a GTX480.
The ATI GPUs each get a full CPU core; the NVidia cards do not.

If they can do GPUGrid or 2 SETI MB WUs, they can also do 2 E@H WUs.
