New CUDA BRP4 app versions 1.28

MAGIC Quantum Mechanic
Joined: 18 Jan 05
Posts: 1,304
Credit: 417,996,517
RAC: 98,048
Topic 196507

I am running these two at a time (X2) on my GeForce 660 Ti SC.

Here is the time difference so far between V1.28 and V1.25:

 

Logforme
Joined: 13 Aug 10
Posts: 332
Credit: 1,714,373,961
RAC: 10,670


I run 3 at a time on my GTX 580, and the times have gone from about 3600 s down to less than 2500 s. Remarkable. It also seems to validate against other versions.
Good work.

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3,516
Credit: 455,602,067
RAC: 38,691


Hi MAGIC!

Your runtimes should be even better; currently your BOINC client seems to checkpoint every second, which hurts performance.

This is probably caused by a setting in your preferences that allows BOINC to "write to disk at most every 0 sec" (in the disk-related settings). You should set it to something like 60 seconds.

The next generation of E@H apps will have a safeguard against excessive checkpointing; we have already made sure the code to implement this is in the current BOINC API code.

Cheers
HB

Sid
Joined: 17 Oct 10
Posts: 145
Credit: 467,138,770
RAC: 313,011


I run a GTX 560 Ti, and for 6 tasks in parallel the time was 7800 sec. Now it is 6400 sec. Good job. Thank you.

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3,516
Credit: 455,602,067
RAC: 38,691


Well, thank you for your contribution to E@H, and thanks for the feedback!

Happy crunching
HB

Jeroen
Joined: 25 Nov 05
Posts: 379
Credit: 738,423,020
RAC: 0


I am very happy with the performance improvements of the 1.28 application.

I am finding that one to two tasks per GPU are now enough to fully utilize the GPU, whereas before I would run two to three tasks per GPU. 91%+ load is now possible with a single task running. Thanks, and great work on the updates to the application.

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3,516
Credit: 455,602,067
RAC: 38,691


Thanks for the feedback and your substantial contribution to Einstein@Home!

As a general note: of course, higher GPU utilization also means slightly more heat generated, so I guess now is a good time to check temperatures and clear the dust bunnies out of the cases :-)

Happy crunching
HB

MAGIC Quantum Mechanic
Joined: 18 Jan 05
Posts: 1,304
Credit: 417,996,517
RAC: 98,048


Quote:

Hi MAGIC!

Your runtimes should be even better; currently your BOINC client seems to checkpoint every second, which hurts performance.

This is probably caused by a setting in your preferences that allows BOINC to "write to disk at most every 0 sec" (in the disk-related settings). You should set it to something like 60 seconds.

The next generation of E@H apps will have a safeguard against excessive checkpointing; we have already made sure the code to implement this is in the current BOINC API code.

Cheers
HB

Hello Bikeman,

Thanks, I just went and checked: my 660Ti and 550Ti were both set to 0, so I changed them to 60.

My other one, on this laptop, was already set to 60 (also running CUDA X2)... not sure why those two were set to 0 seconds, so I hope to see the improvement now.

-Magic

(edit: my 550Ti still has to run about 50 more 1.25's before it starts the 1.28's)

 

MAGIC Quantum Mechanic
Joined: 18 Jan 05
Posts: 1,304
Credit: 417,996,517
RAC: 98,048


So far that doesn't seem to be making any difference in the time per task, Bikeman.

Any other tips?

Or is it because I run 2-core T4T's and LHC's at the same time as these CUDA X2 tasks?

http://einsteinathome.org/host/4109993/tasks&offset=0&show_names=0&state=3

 

DanNeely
Joined: 4 Sep 05
Posts: 1,313
Credit: 1,721,502,621
RAC: 949,934


22 minutes for my first 1.28 vs ~36 for 1.25 on my 560 (running 1 at a time); it also dropped the CPU load from 5-6% to 2-3%, or roughly from 0.4 to 0.2 cores.

Horacio
Joined: 3 Oct 11
Posts: 205
Credit: 80,557,243
RAC: 0


On my host with the two 560 Tis: 46 min (1.28) vs 70 min (1.25), and CPU usage dropped by almost half, from 1300 s to 700 s.
On the host with one GT430: 146 min (1.28) vs 157 min (1.25), with no noticeable difference in CPU usage.

I'm doing 2 WUs per GPU, with one CPU core kept free for each GPU, on both hosts.
As was said, the speed-up is much more impressive on faster GPUs.
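For reference, a setup like Horacio's (2 WUs per GPU, a fraction of a CPU core reserved per task) can be expressed with BOINC's `app_config.xml` mechanism, placed in the Einstein@Home project directory. This is only a sketch: the `<name>` value below is an assumption for the BRP4 CUDA app, so check the app name your client actually reports before using it.

```xml
<!-- app_config.xml (sketch) in the Einstein@Home project directory.
     gpu_usage 0.5  => two tasks share one GPU;
     cpu_usage 0.5  => each task reserves half a CPU core.
     The app name below is an assumption; verify it against your client. -->
<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After editing the file, re-read the config from the BOINC Manager (or restart the client) for the change to take effect.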
