BRP3 CUDA

Rechenkuenstler
Joined: 22 Aug 10
Posts: 138
Credit: 102,567,115
RAC: 0

RE: Hi! How does it

Quote:

Hi!

How does it compare to other BOINC CUDA projects?

CU
HBE

Due to the shortage of BRP CUDA tasks I ran some Milkyway CUDA tasks with my GTX 460.

Milkyway generates a GPU load of 98%, BRP of only 52%.

Runtime for Milkyway tasks: 8-9 hours on CPU, 18.5 minutes on GPU.
Runtime for BRP: 12 hours on CPU, 55 minutes on GPU.
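That works out to roughly a 27x GPU speedup for Milkyway (8.5 h ≈ 510 min; 510 / 18.5 ≈ 27.5) against roughly 13x for BRP (720 / 55 ≈ 13).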

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3,522
Credit: 699,353,944
RAC: 229,464

Hi Many thanks for the

Hi

Many thanks for the numbers.

Do you notice a difference in the responsiveness of the Windows desktop and applications when a Milkyway or E@H CUDA task runs? In other words: does the near-saturation of the GPU by the MW app have a noticeable effect on the "user experience"?

CU
HB

Rechenkuenstler
Joined: 22 Aug 10
Posts: 138
Credit: 102,567,115
RAC: 0

RE: Hi Many thanks for the

Quote:

Hi

Many thanks for the numbers.

Do you notice a difference in the responsiveness of the Windows desktop and applications when a Milkyway or E@H CUDA task runs? In other words: does the near-saturation of the GPU by the MW app have a noticeable effect on the "user experience"?

CU
HB

Hi

With the configuration I run on this machine, there's no difference. It's an i7 with a GTX 460, configured to use 5 CPU cores for the workhorse GW-HF application. BRP CUDA uses a 6th CPU core and the GPU at a load of 52% and a RAM usage of 320 MB.
That means I can play a game with high graphics requirements in addition to the BRP CUDA task, without any loss of quality in the game. It still runs at the highest resolution with very smooth streaming. So you can really let BRP CUDA run whenever you are working, whatever you are doing.

With the Milkyway CUDA app I didn't do this test with the game or video streaming. But doing normal work like word processing or internet browsing, the desktop responsiveness is normal.

But when using a VNC viewer (TightVNC) to watch the machine, there was a big difference in responsiveness. The VNC desktop was really slow while MW@H CUDA was running. With BRP CUDA this doesn't occur.

Kind regards

Bernhard

One thing I should mention: MW@H requires a GPU with double-precision (64-bit floating point) support, which NVIDIA introduced with the GT200 architecture. I think the NVIDIA minimum is the GTX 260.
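For anyone who wants to check their own card: a minimal sketch, assuming only the standard CUDA runtime API (this is not MW@H code), that tests for compute capability 1.3 or higher, where NVIDIA's double-precision support begins.

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            fprintf(stderr, "no CUDA device found\n");
            return 1;
        }
        /* double precision arrived with compute capability 1.3 (GT200, e.g. GTX 260) */
        int has_double = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        printf("%s: compute capability %d.%d, double precision %s\n",
               prop.name, prop.major, prop.minor,
               has_double ? "supported" : "not supported");
        return 0;
    }

Compile with nvcc and run; a GTX 260 or later, or any Fermi card, should report double precision as supported.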

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3,522
Credit: 699,353,944
RAC: 229,464

Thanks again, very

Thanks again, very interesting. I couldn't test that myself because I don't have a Fermi-class CUDA card to run MW@Home.

Further improvements in the BRP app's GPU utilization might depend on NVIDIA improving their CUFFT library, which the BRP app uses for quite a big share of the overall computation (writing a better FFT implementation in CUDA from scratch would be quite an effort and certainly not trivial). For the moment I think BRP3 is a great improvement over the previous ABP2 CUDA app, which had a much lower GPU utilization.
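To illustrate the dependency: the usual CUFFT calling pattern is plan, execute, destroy, as in the minimal sketch below. This is not the actual BRP source; the FFT length and batch count are made-up placeholders.

    #include <cuda_runtime.h>
    #include <cufft.h>

    #define FFT_LEN 4194304   /* hypothetical transform length */
    #define BATCH   1         /* hypothetical batch count */

    int main(void)
    {
        cufftComplex *d_data;
        cufftHandle plan;

        /* device buffer for the complex input/output signal */
        cudaMalloc((void **)&d_data, sizeof(cufftComplex) * FFT_LEN * BATCH);

        /* plan a (batched) 1D complex-to-complex FFT */
        cufftPlan1d(&plan, FFT_LEN, CUFFT_C2C, BATCH);

        /* ... cudaMemcpy the input samples into d_data ... */

        /* run the forward transform in place on the GPU */
        cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
        cudaDeviceSynchronize();

        /* ... copy the spectrum back and post-process on the CPU ... */

        cufftDestroy(plan);
        cudaFree(d_data);
        return 0;
    }

Because all the FFT work funnels through cufftExecC2C, any speedup NVIDIA puts into the library is picked up by such an app for free.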

CU
HBE


Rechenkuenstler
Joined: 22 Aug 10
Posts: 138
Credit: 102,567,115
RAC: 0

Yes. The BRP CUDA application

Yes. The BRP CUDA application is a real improvement. I prefer it over MW@H. The reason is that you can use it for long-term crunching, 24 hours a day, 7 days a week.
Temperatures on the GPU are moderate, and the machine has enough capacity for all other use. I doubt that MW@H could be used that way. GPU temperatures with MW@H are much higher. And I think you can't play a game in parallel. Not every machine is used for crunching only.

Jeroen
Joined: 25 Nov 05
Posts: 379
Credit: 740,030,628
RAC: 0

I like the lower power

I like the lower power consumption running BRP3 compared to other CUDA-enabled projects. I measured 375 W at the wall with both GPU cores of my GTX 295 running BRP3 and four Einstein tasks running on my 4.0 GHz i7 CPU. This leaves some room for adding extra GPUs without the power consumption becoming excessively high.

Rechenkuenstler
Joined: 22 Aug 10
Posts: 138
Credit: 102,567,115
RAC: 0

I think GPU apps need more

I think GPU apps need more configuration settings. There are different types of users and strategies that need different settings.
Two main aspects:

1. Long-term (24/7) or short-term use. For long-term use it might make sense not to squeeze out all the hardware resources, but to set them to a more moderate level.

2. Crunchers versus "normal users". Normal users want to run DC in ADDITION to their daily work and not INSTEAD of it. For GPUs it is currently only possible to turn computation on or off (the "Use GPU while computer is in use" setting).

I think for the GPU there should be the same configuration options as already exist for the CPU:

Use at most GPUs in % - if you have more than one GPU, like the GTX 560
Use at most GPU cores in % - to set how many cores may be used, e.g. 168 of 336 on a GTX 460
Use at most core time in % - to set how many cycles of the GPU cores may be used (see the sketch after this list)
Suspend computation if non-BOINC graphics requirements exceed XX%

This would make it possible to configure the GPUs in a way that allows using them all day long.
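For what it's worth, here is a purely hypothetical sketch of how the "core time in %" idea could be approximated from inside an application, by duty-cycling its kernel launches. Neither BOINC nor the BRP app does this; the kernel, the timings, and the function names are all made up to illustrate the technique.

    #include <unistd.h>          /* usleep(); POSIX, for this sketch only */
    #include <cuda_runtime.h>

    /* Hypothetical kernel standing in for one chunk of scientific work. */
    __global__ void work_chunk(float *buf, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            buf[i] = buf[i] * 1.000001f + 1.0f;
    }

    /* Keep the GPU busy only duty_percent (1..100) of the time by
     * sleeping between kernel launches. */
    static void throttled_loop(float *d_buf, int n, int duty_percent, int iters)
    {
        const int busy_us = 10000;  /* a real app would measure this per device */
        const int idle_us = busy_us * (100 - duty_percent) / duty_percent;

        for (int it = 0; it < iters; ++it) {
            work_chunk<<<(n + 255) / 256, 256>>>(d_buf, n);
            cudaDeviceSynchronize();      /* wait for the chunk to finish */
            if (idle_us > 0)
                usleep(idle_us);          /* let the GPU idle: lower load, lower heat */
        }
    }

    int main(void)
    {
        const int n = 1 << 20;
        float *d_buf;
        cudaMalloc((void **)&d_buf, n * sizeof(float));
        cudaMemset(d_buf, 0, n * sizeof(float));
        throttled_loop(d_buf, n, 50, 200);  /* aim for ~50% GPU duty cycle */
        cudaFree(d_buf);
        return 0;
    }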

Jord
Joined: 26 Jan 05
Posts: 2,952
Credit: 5,893,653
RAC: 668

I always find these

I always find these 'preference wishes' so funny. What people do not seem to know, and what we try to explain time and again, is that for most of these preferences to work, you would have to pressure your graphics card manufacturer to release a GPU application programming interface (API), so that the BOINC developers can add these things.

Without help from the GPU manufacturers, these preferences cannot be implemented.
For instance: the architecture of the CUDA-capable cards has changed throughout the history of CUDA, so programming a throttle function that works across all available platforms on all available CUDA GPUs isn't easy to do. In the end it would have to work on all versions of all the platforms that are supported at this time.

Before anyone says that eFMer managed to do this in TThrottle, may I point out that this program is Windows-only? That there's no version for all Linux distros and all OS X versions? Now, why would that be? ... {ponders} :-)

Quote:
Use at most GPUs in % - if you have more than one GPU, like the GTX 560


So what to do when someone adds an 8800 GTX, a 9800, and an ATI Radeon 5770 alongside the GTX 560? How are you going to tell BOINC that it should use only the GTX 560?

Quote:
Use at most GPU cores in % - to set how many cores may be used, e.g. 168 of 336 on a GTX 460


Without an API this is impossible to do, since no one knows how many cores any given card has. Nvidia has been asked quite a few times to come up with such an API, but so far, bupkis.
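(For the record, the CUDA runtime does expose the per-device multiprocessor count, as in the hypothetical sketch below; but turning SMs into a "core" count still needs a per-architecture lookup table, and there is nothing vendor-neutral across Nvidia and ATI that BOINC could rely on.)

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            /* SM count is reported, but cores per SM vary by architecture */
            printf("device %d: %s, %d multiprocessors, compute %d.%d\n",
                   d, prop.name, prop.multiProcessorCount, prop.major, prop.minor);
        }
        return 0;
    }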

Quote:
Use at most core time in % - to set how many cycles of the GPU cores may be used


Yeah, and then the next one says he wants to throttle 50% of the available streaming multiprocessors while the other 50% run at full speed. Upon which the next one says he wants to throttle 25% of the SMs to 30%, run 30% at full speed, and not use the last 45% at all.

And how is all this going to be done by BOINC? Guess?
Yep: an API. Without a way of telling how many SMs there are on any given Nvidia or ATI GPU, it is impossible to add any form of throttling for the GPU, other than what is done at this time: throttle the CPU and you will automatically throttle the GPU as well.

Quote:
Suspend computation if non-BOINC graphics requirements exceed XX%


The normal "Suspend computation if non-BOINC requirements exceed XX%" preference will already take care of this. A separate one needed for GPUs only is unnecessary. Since most of the non-BOINC programs use the CPU in one form or another, you'd want all of BOINC out of the way. Especially so since GPU apps use the CPU as well and until someone comes up with a way to run apps directly on the GPU, this won't ever change.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4,307
Credit: 249,732,245
RAC: 34,773

RE: I always find these

Quote:
I always find these 'preference wishes' so funny.

At least they are conceived from the perspective of the users, not the developers. That is the way I like to think about applications that are not used only by the people who wrote them.

I think that the existing configuration options are already more than enough to confuse the average John Doe who just wants to contribute a little science with his computer.

IMHO the number of choices necessary to configure BOINC should be reduced rather than increased, at least in the obvious user interface.

Generally I like D.A.'s proposal to hide the complexity of the many, many options BOINC already offers.

BM

Jord
Joined: 26 Jan 05
Posts: 2,952
Credit: 5,893,653
RAC: 668

RE: At least they are

Quote:
At least they are conceived from the perspective of the users, not the developers. That is the way I like to think about applications that are not used only by the people who wrote them.


"Funny" is maybe not the right thing to have said. Yet each time these wishes for preferences for the GPU come up, the user expects that they're 'easy' to add, without having any knowledge of the complexity behind programming one line for BOINC so it runs and compiles the same on Windows, OS X and any Linux distro you throw it at.

The same goes for your science applications. Why do you still not have an application that runs on an ATI GPU on a Mac running OS X 10.5.0? Or any ATI application for any platform, for that matter? Other projects manage to do it, so it must be easy, so git-it-on. ;-)

As for hiding all the preferences: then they should have started with that.
The first 'confusing thing' to be hidden is the Messages log, which goes from a tab to a separate window, reached through a menu. With the, to me, rather confusing name of "Event Log". Now what am I to call the Event Log in Windows then? ;-)

All you'll get from introducing these things later in BOINC's life is that people already established with the program will be slightly alienated, as all these changes are geared towards newcomers. There's nothing in it for the old crew. Why not separate them: have one BOINC Manager for newcomers, and one (Advanced) for people feeling slightly adventurous and more established with the program.

Anyway, not going to hijack this thread here under the watchful eye of a developer. ;-)
