ABP2 CUDA applications

Greg
Greg
Joined: 10 Mar 05
Posts: 9
Credit: 116663922
RAC: 0

I believe so as GPUGrid and

Message 96414 in response to message 96413

I believe so, as GPUGrid and Collatz work fine on the same system.

Greg
Greg
Joined: 10 Mar 05
Posts: 9
Credit: 116663922
RAC: 0

I've verified as best I can

Message 96415 in response to message 96414

I've verified as best I can that it's not an issue with the installation. I get the same error when running in standalone mode, while other CUDA 2.3 applications run fine. I grabbed the public source, but it doesn't contain the error message "Error acquiring", or any other snippet of that message. I suspect an incorrect check in the CUDA initialization is failing on the Fermi GPU. Is the current CUDA client source available anywhere?
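
For illustration only: the app's actual initialization code isn't public here, but an over-strict capability check of the following kind would pass on older cards and fail on a Fermi GPU (compute capability 2.x). The function name and error wording below are invented, not taken from the ABP2 source.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical sketch: an initialization check that accepts only
// compute capability 1.x rejects a Fermi card (which reports 2.x)
// even though the card could run the kernels.
static int init_cuda_device(int dev)
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, dev) != cudaSuccess) {
        std::fprintf(stderr, "Error acquiring device properties\n");  // wording invented
        return -1;
    }

    // Over-strict: "exactly 1.x" instead of "at least 1.x".
    if (prop.major != 1) {
        std::fprintf(stderr, "unsupported compute capability %d.%d\n",
                     prop.major, prop.minor);
        return -1;
    }

    // A robust check would use a minimum instead, e.g. (prop.major >= 1).
    return cudaSetDevice(dev) == cudaSuccess ? 0 : -1;
}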

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 726873841
RAC: 1241747

Interesting...I'll forward

Interesting...I'll forward your case to the devs.

CU
HB

Saenger
Saenger
Joined: 15 Feb 05
Posts: 403
Credit: 33009522
RAC: 0

RE: @Saenger: The

Message 96417 in response to message 96411

Quote:
@Saenger: The explanation given by Ver Greeneyes is correct. There is a certain part of the computation (here: the Fast Fourier Transform, FFT) that is executed exclusively either on the GPU (CUDA version) or on the CPU (conventional app). No matter how you arrange the other work, the number of FFTs per second that your GPU can do will be the bottleneck if the GPU is sufficiently slow. It's usually impractical to have the GPU and CPU collaborate closely on the same algorithm (e.g. the FFT) at the same time, because between CPU and GPU there is a bottleneck called the PCIe bus. You want to push some data onto the card, have the GPU crunch on it (using its ultra-fast on-board RAM but not the PCIe bus), and only at the end transfer the results back from the board over PCIe to main RAM.
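
As a minimal sketch of the pattern described above (illustrative only, using cuFFT; the buffer layout and function name are not from the ABP2 source): the data crosses the PCIe bus once in each direction, and the FFT itself runs entirely out of the card's on-board RAM.

#include <cuda_runtime.h>
#include <cufft.h>

// Illustrative only: not the ABP2 code, just the "transfer once,
// crunch on-board, transfer back once" pattern described above.
void fft_on_gpu(const cufftComplex* host_in, cufftComplex* host_out, int n)
{
    cufftComplex* dev_buf = nullptr;
    cudaMalloc(&dev_buf, n * sizeof(cufftComplex));

    // 1. Push the input across the PCIe bus once.
    cudaMemcpy(dev_buf, host_in, n * sizeof(cufftComplex), cudaMemcpyHostToDevice);

    // 2. Crunch entirely on the GPU, using only its on-board RAM.
    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);
    cufftExecC2C(plan, dev_buf, dev_buf, CUFFT_FORWARD);
    cufftDestroy(plan);

    // 3. Pull the result back across PCIe once, at the end.
    cudaMemcpy(host_out, dev_buf, n * sizeof(cufftComplex), cudaMemcpyDeviceToHost);
    cudaFree(dev_buf);
}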


OK, my set-up is thus useless for running GPU tasks. I've disabled my GPU for Einstein for now.

I'm no programmer, so I don't know whether detecting this situation is a) possible at all, server-side or in BOINC, and b) how hard it would be to implement.

From a non-programmer's point of view it could work like this:
Compare a list of GPUs against the host's CPU benchmark, and if the comparison says the CPU is too fast for that GPU, don't send any GPU work.
An Athlon XP 1500 would probably still be accelerated even by my GPU, while my CPU probably wouldn't gain anything even from a 9600.
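
A rough sketch of that idea (this is not an existing BOINC feature; the structure names and the threshold are invented for illustration):

// Hypothetical server-side heuristic in the spirit of the proposal above.
// None of these names exist in BOINC; the threshold is arbitrary.
struct HostInfo {
    double cpu_gflops;   // from the host's CPU benchmark
    double gpu_gflops;   // looked up in a table of known GPU models
};

bool send_gpu_work(const HostInfo& host)
{
    // Only send GPU tasks if the GPU is at least, say, twice as fast
    // as one CPU core; otherwise the plain CPU app is the better fit.
    const double kMinSpeedup = 2.0;
    return host.gpu_gflops >= kMinSpeedup * host.cpu_gflops;
}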

Greetings from Sänger

Speedy
Speedy
Joined: 11 Aug 05
Posts: 40
Credit: 23546889
RAC: 10925

Any word when the next cuda

Any word on when the next CUDA app will be installed? Any chance the devs can refine the application from using 1 CPU & 1 GPU to, say, 0.2 or 0.5 CPU & 1 GPU?

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 726873841
RAC: 1241747

RE: Any word when the next

Message 96419 in response to message 96418

Quote:
Any word on when the next CUDA app will be installed? Any chance the devs can refine the application from using 1 CPU & 1 GPU to, say, 0.2 or 0.5 CPU & 1 GPU?

The problem with an app that declares less than 100% CPU usage is that BOINC will not set it to the full "nice" (background task) level. So an app that uses 50% of a CPU but runs at a higher priority will be more disruptive to the other tasks your PC has to perform than a 100% CPU app at full nice level.

Having said that, the aim of the next ABP version will be to put more load on the GPU and less on the CPU. So if the real CPU usage can be brought down to a really low level, what you propose will be possible.

Of course, the relative load on CPU and GPU depends on the relative speed of the CPU and GPU used. I guess 10-15% CPU usage is what other GPU projects' apps show, right?
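
One common reason a CUDA app occupies a full CPU core is that the runtime busy-waits (spins) while the GPU works. Whether ABP2 does this isn't stated in this thread, but as a general sketch, asking the CUDA runtime for blocking synchronization lets the host thread sleep during GPU work and brings the charged CPU time down:

#include <cuda_runtime.h>

// Sketch only: how a CUDA host program can avoid spinning a CPU core
// while it waits for the GPU. Whether the project app can adopt this
// depends on its internal structure.
void init_low_cpu_usage(int dev)
{
    cudaSetDevice(dev);
    // Must be set before the CUDA context is created on this device,
    // i.e. before the first kernel launch or allocation. With this
    // flag the host thread sleeps instead of busy-waiting inside
    // cudaDeviceSynchronize() and blocking memcpys.
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);
}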

CU
HB

Speedy
Speedy
Joined: 11 Aug 05
Posts: 40
Credit: 23546889
RAC: 10925

OK thanks for the update.

OK, thanks for the update. Current tasks are only putting between 9 and 11% load on my GTX 470. As an example of the low GPU load, you can see from the task that CPU time is almost the same as run time: CPU time 7,184.58, run time 7,243.05.

If I use another GPU project (SETI) as a comparison, the run time is 721.47 and the CPU time is 119.89. How come SETI uses a lot less CPU time?

HB: Times are in seconds.

transient
transient
Joined: 3 Jun 05
Posts: 62
Credit: 115835369
RAC: 0

I do not use a GPU and I

I do not use a GPU and I still see differences between run time and CPU time. That tells me such differences are not necessarily related to tasks running on a GPU.

tullio
tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0

Run time is always greater

Run time is always greater than CPU time, except in multithreaded apps like AQUA that use more than one core.
Tullio

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5893653
RAC: 92

Elapsed time in BOINC Manager

Elapsed time in BOINC Manager is the run time you see in your tasks list. It is counted from task start to task finish.

CPU time is purely the time the CPU spent working on the data in the task.

In the case of GPUs, the CPU is only used to translate and transport the data to the GPU, which does all the calculations. So CPU time is not a good measure of the work done here, as it isn't the CPU doing the real calculations.
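
To make the distinction concrete, here is a small sketch (not project code) that measures both quantities around a GPU wait. Whether the CPU time stays small depends on how the app waits for the GPU (compare the blocking-sync sketch earlier in the thread): with the default spin-wait, CPU time stays close to run time, which matches what Speedy observed.

#include <chrono>
#include <cstdio>
#include <ctime>
#include <cuda_runtime.h>

// Sketch: "run time" (wall clock) vs CPU time around a GPU wait.
// std::clock() reports process CPU time on POSIX systems.
void report_times()
{
    const auto wall_start = std::chrono::steady_clock::now();
    const std::clock_t cpu_start = std::clock();

    // ... kernels would be launched here ...
    cudaDeviceSynchronize();   // the GPU works; the CPU mostly waits

    const double wall_s = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - wall_start).count();
    const double cpu_s  = double(std::clock() - cpu_start) / CLOCKS_PER_SEC;

    // With the default spin-wait, cpu_s tracks wall_s closely;
    // with blocking sync it stays much smaller.
    std::printf("run time %.2f s, CPU time %.2f s\n", wall_s, cpu_s);
}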
