ABP1 CUDA applications

|MatMan|
Joined: 22 Jan 05
Posts: 24
Credit: 249005261
RAC: 0

As the GPU load is so low how about allowing two WUs to be processed on one GPU at the same time?
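For what it's worth, later BOINC clients (7.0+) do let volunteers share one GPU between tasks via an app_config.xml file in the project directory. A hedged sketch follows; the app name "einsteinbinary_ABP1" is a guess, and the real name should be taken from client_state.xml:

```xml
<!-- app_config.xml, placed in the Einstein@Home project directory.
     NOTE: "einsteinbinary_ABP1" is a hypothetical app name; look up
     the actual <name> in client_state.xml before using this. -->
<app_config>
  <app>
    <name>einsteinbinary_ABP1</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task => two tasks share one GPU -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```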

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

Message 95638 in response to message 95595

Quote:
Quote:

I find that very bad that it needs a 100% core and that you set cuda on Enable as default.

I understand and we tried not to enable the CUDA app by default. Unfortunately that would have involved a change in the BOINC core client code which is not under our direct control. Please note that this is one of the reasons why we set quite a few minimum requirements. This way the number of volunteers who receive CUDA work is as limited as possible.


Of course you could have changed the site adding the option to not do CUDA, made an announcement, given us a week to make our choices and then made the switch. That would have been far more friendly.

Quote:

WRT the efficiency of the current implementation: we are working on a number of improvements. The CPU part of the radio pulsar search received quite a few changes that will not only benefit the CPU-only application but also the CUDA version, thereby moving the computational ratio towards the GPU. These changes will be released as a new application called "ABP2" - probably in the next 1-2 weeks. In parallel to that we are currently working hard to move the remaining CPU part of the CUDA version more or less completely to the GPU.

Please note that even today the CUDA app doesn't actually require a full CPU. However, as soon as you tell the client to use less than 100% it no longer renices the process (reduces its priority). From our point of view it's better to have the process claim one CPU at the lowest priority than use, say, 60% at normal priority.
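The renicing Oliver describes can be illustrated with a minimal standalone sketch (POSIX only; this is not BOINC's actual code, just the same idea): a compute process raises its own niceness so it only consumes otherwise-idle CPU time.

```python
import os

# Hedged sketch of "renicing": a background compute process lowers its
# own scheduling priority so interactive programs always win the CPU.
# os.nice(0) queries the current niceness without changing it;
# os.nice(10) raises niceness by 10, i.e. lowers priority (POSIX only).
before = os.nice(0)
after = os.nice(10)
print(f"niceness raised from {before} to {after}")
```

At niceness 19 (the lowest priority) the process can still claim a whole core when the machine is otherwise idle, which is exactly the trade-off described above.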


I am not sure how other projects handle this, but GPU Grid never posed that kind of issue when I ran it, i.e. causing problems with lag and load. More fundamentally, you are "wasting" resources to work around a BOINC bug that I have not seen discussed on the mailing lists as a problem (then again I may have missed it, in which case apologies). So why no effort to get the real problem fixed? If this is the issue then it is going to affect all GPU projects, not just yours...

And although I have seen higher loading and some lag running MW, Collatz and GPU Grid (acceptable to me because I don't do anything but BOINC on my Windows systems), it still does not seem a viable trade-off for the low performance gain.

At any rate, I look forward to the changes ...

Jim Kleine
Joined: 25 Jan 06
Posts: 3
Credit: 1053126
RAC: 0

Message 95639 in response to message 95636

Quote:

Hi Jim,

We (and the BOINC team) are already aware of that problem. Do you use Windows Vista or Windows 7 and run BOINC as a service?

Thanks,
Oliver

Hi Oliver,

This host is running Windows XP x64 edition. BOINC is running in the standard (boinc_master/boinc_project) service configuration.

I'm quite prepared to invest some testing time if it helps to advance this problem. In the meantime, I have unchecked the GPU units account preference.

Regards,
Jim

robertmiles
Joined: 8 Oct 09
Posts: 127
Credit: 21009668
RAC: 79469

Message 95640 in response to message 95622

Quote:

Hello,

One of my computers is a laptop using a NVIDIA 9600M GS.
E@H refuses to send CUDA WUs to this laptop because display driver 190.38 is required.
After checking on the NVIDIA web site, it appears that driver 190.38 is not available for laptops or the M series of graphics cards. The latest display driver available for the M series GPUs is 186.81.

So if you're using a laptop with an NVIDIA GPU, don't waste your time trying to run E@H CUDA WUs. It is not possible right now.

Lionel.

I have a similar problem, but with a G 105 M for which the latest driver available is 186.44. That means it's idle for now because no BOINC project I'm interested in can use it.

robertmiles
Joined: 8 Oct 09
Posts: 127
Credit: 21009668
RAC: 79469

Message 95641 in response to message 95599

Quote:
Quote:

1. Are 450 MB really necessary? I only have 256 MB...

Yes, we really need that much memory.

Quote:

2. Why is CUDA 2.2 not enough? It is the laptop version of these drivers. Therefore no laptop can use the GPU, or am I wrong?

I think you are wrong as there are no "laptop" versions AFAIK. You may just download the latest driver for your operating system. CUDA 2.3 is superior to 2.2 in a number of details...

Oliver

The Nvidia site says that they don't have a 190.* driver suitable for my laptop's G 105 M. I suspect that it needs some power-saving features that 190.38 does not support. At least it has 512 MB memory.

robertmiles
Joined: 8 Oct 09
Posts: 127
Credit: 21009668
RAC: 79469

I assume that you've seen the various messages on GPUGRID saying that at least one FFT operation tends to work on some GTX260 cards and not on others. You might want to ask them which one, so you can consider making a separate version of the CUDA code for GTX260 cards, possibly after you make one for CUDA 2.2 with compute capability 1.2 (all that's available now on many laptops, including mine).
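Shipping separate builds per GPU generation usually comes down to a dispatch table keyed on compute capability. A minimal pure-Python sketch of that idea (the build names here are entirely hypothetical, not real Einstein@Home binaries):

```python
# Hypothetical map from CUDA compute capability to an application build.
# An app would query the device's (major, minor) capability at startup
# and pick the highest build it can legally run.
BUILDS = {
    (1, 1): "abp1_cuda_cc11",
    (1, 2): "abp1_cuda_cc12",  # e.g. many laptop GPUs of this era
    (1, 3): "abp1_cuda_cc13",  # e.g. GTX 260 class cards
}

def select_build(major, minor):
    """Return the best build whose capability does not exceed the device's."""
    candidates = [cc for cc in BUILDS if cc <= (major, minor)]
    if not candidates:
        raise RuntimeError("GPU compute capability too low for any build")
    return BUILDS[max(candidates)]
```

With real CUDA code the capability would come from cudaGetDeviceProperties; here the point is only the fallback logic, e.g. a capability-2.0 card still gets the 1.3 build.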

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 942
Credit: 25166626
RAC: 0

Message 95644 in response to message 95638

Quote:

Of course you could have changed the site adding the option to not do CUDA, made an announcement, given us a week to make our choices and then made the switch. That would have been far more friendly.


Right, we also considered that approach. However, the BOINC web preferences implementation only shows the opt-out check boxes for deployed/enabled application types (i.e. NVIDIA GPU).

Quote:

but fundamentally, you are "wasting" resources for a BOINC bug which I have not seen being discussed on the mailing lists as a problem (then again I may have missed it, in which case apologies), so, why no effort to get the real problem fixed? If this is the issue then it is going to affect all GPU projects not just yours...


The usual question, bug or feature...

Quote:

At any rate, I look forward to the changes ...


Thanks, so do we.

Best,
Oliver

 

Einstein@Home Project

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

I posted this:

Quote:

On Einstein we have had a mini-debate about the transition of the CUDA application mostly because of the heavy use of the CPU along with the GPU meaning that throughput is going to suffer. Many of us would have wished to opt out of running these tasks but were caught by the surprise of the migration. One day no CUDA, next day it was live.

This is part of an exchange:

Of course you could have changed the site adding the option to not do CUDA, made an announcement, given us a week to make our choices and then made the switch. That would have been far more friendly.

Right, we also considered that approach. However, the BOINC web preferences implementation only shows the opt-out check boxes for deployed/enabled application types (i.e. NVIDIA GPU).

The point here is that the current server side code does not allow a transition time where the project can announce the change to have a CUDA/GPU application and allow for the users to select their opt-in or opt-out choice before the application is fielded. There should be a way that the settings can be made manifest before going live.

Rabinovitch
Joined: 2 Oct 07
Posts: 14
Credit: 69021765
RAC: 0

Hi all!

Which apps should I uncheck in the "Run only the selected applications" section of the preferences edit page to receive only CUDA-related tasks and stop receiving pure-CPU WUs?

BTW, for some reason I can't uncheck Hierarchical S5 all-sky GW search #5 and #6...
