New CUDA BRP4 app versions 1.28
I run 3 at a time on my GTX 580 and the times have gone from 3600 to less than 2500 seconds, remarkable. The results seem to validate against other app versions as well.
Good work.
Hi MAGIC!
Your runtimes should be even better; currently your BOINC seems to checkpoint every second, which is hurting performance.
This is probably caused by a setting in your preferences that allows BOINC to "write to disk at most every 0 sec" (in the disk-related settings). You should set it to something like 60 seconds.
The next generation of E@H apps will have a safeguard against excessive checkpointing; we have already made sure the code to implement this is in the current BOINC API code.
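For anyone who prefers editing the override file directly instead of the web preferences: the same setting lives in global_prefs_override.xml in the BOINC data directory (the surrounding elements shown here are just context; only disk_interval matters for this issue). A sketch:

```xml
<!-- global_prefs_override.xml (BOINC data directory) -->
<global_preferences>
   <!-- "Write to disk at most every X seconds":
        0 lets the app checkpoint on every opportunity; 60 is a sane value. -->
   <disk_interval>60</disk_interval>
</global_preferences>
```

After saving, pick "Read local prefs file" (or restart the client) so the override takes effect.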
Cheers
HB
I run a GTX 560 Ti, and for 6 tasks in parallel the time was 7800 sec. Now it is 6400 sec. Good job. Thank you.
Well, thank you for your contribution to E@H, and thanks for the feedback!
Happy crunching
HB
I am very happy with the performance improvements of the 1.28 application.
I am finding that I can now fully take advantage of the GPU with one to two tasks per GPU, whereas before I would run two to three tasks per GPU. Over 91% load is now possible with a single task running. Thanks, and great work on the updates to the application.
Thanks for the feedback and your substantial contribution to Einstein@Home!
As a general note: of course, higher GPU utilization will also mean slightly more heat generated, so I guess now is a good time to also check temperatures and clear the dust bunnies out of the cases :-)
Happy crunching
HB
Hello Bikeman,
Thanks, I just went and checked, and my 660Ti and 550Ti were both set at 0, so I changed them to 60.
My other one on this laptop was set at 60 already (also running CudaX2)... not sure why those two were set at 0 seconds, so I hope to see the improvement now.
-Magic
(edit: my 550Ti still has to run about 50 more 1.25's before it starts the 1.28's)
So far that doesn't seem to be making any difference in time per task, Bikeman.
Any other tips?
Or is it because I run 2-core T4T's and LHC's at the same time as these cuda X2 tasks?
http://einsteinathome.org/host/4109993/tasks&offset=0&show_names=0&state=3
22 minutes for my first 1.28 vs ~36 for 1.25 on my 560 (running 1 at a time); it also dropped the CPU load from 5-6% to 2-3%, or roughly from 0.4 to 0.2 cores.
On my host with the two 560 Tis: 46 mins (1.28) vs 70 mins (1.25); CPU usage dropped almost by half, from 1300 secs to 700 secs.
On the host with one GT430 146 mins (1.28) vs 157 mins (1.25) with no noticeable difference in the CPU usage.
I'm doing 2 WUs per GPU with 1 CPU core kept free for each GPU on both hosts.
As was said, the speed-up is much more impressive on faster GPUs.
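For anyone wanting to replicate the 2-WUs-per-GPU setup: on BOINC clients that support app_config.xml (7.0.40 and later) you can place something like the following in the project's directory. The app name below is an assumption; check client_state.xml for the exact short name your client uses:

```xml
<!-- app_config.xml in projects/einstein.phys.uwm.edu/ -->
<app_config>
  <app>
    <!-- App name is an assumption; verify against client_state.xml. -->
    <name>einsteinbinary_BRP4</name>
    <gpu_versions>
      <!-- 0.5 GPU per task means 2 tasks share each GPU. -->
      <gpu_usage>0.5</gpu_usage>
      <!-- Together with freeing a core in your CPU preferences,
           this keeps a core available to feed each GPU. -->
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Read the config files (or restart the client) after saving for it to take effect.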