New BRP4 application versions 1.22/1.23

steffen_moeller
Joined: 9 Feb 05
Posts: 78
Credit: 1773655132
RAC: 0

I am much impressed by this

I am much impressed by this feature, too. On Linux, if I interpret the output of nvidia-smi correctly, GPU utilisation went up from 55% to 90% and memory usage doubled, as expected.

[pre]
# nvidia-smi
Sat Mar 10 11:38:35 2012
+------------------------------------------------------+
| NVIDIA-SMI 3.295.20   Driver Version: 295.20         |
|-------------------------------+----------------------+----------------------+
| Nb.  Name                     | Bus Id        Disp.  | Volatile ECC SB / DB |
| Fan   Temp   Power Usage /Cap | Memory Usage         | GPU Util. Compute M. |
|===============================+======================+======================|
| 0.  Tesla C2075               | 0000:85:00.0     Off |         0          0 |
|  35%   84 C  P0   128W / 225W |  10%  537MB / 5375MB |   92%        Default |
|-------------------------------+----------------------+----------------------|
| Compute processes:                                               GPU Memory |
|  GPU  PID     Process name                                       Usage      |
|=============================================================================|
|  0.  4922    ...nary_BRP4_1.23_i686-pc-linux-gnu__BRP4cuda32nv270     260MB |
|  0.  4925    ...nary_BRP4_1.23_i686-pc-linux-gnu__BRP4cuda32nv270     262MB |
+-----------------------------------------------------------------------------+
[/pre]
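If anyone wants to watch utilisation over time rather than in single snapshots, something along these lines works (a minimal sketch; the --query-gpu options come from newer drivers and may not be available on 295.20):

[pre]
# Refresh the full nvidia-smi snapshot every 5 seconds
watch -n 5 nvidia-smi

# Or, on newer drivers, poll just the interesting fields:
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 5
[/pre]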

There was only a small sampling period on my side, but I have the impression that leaving one CPU core dedicated to feeding the GPU helps GPU utilisation a lot (!!!). Utilisation went up to 92% (varying between 90% and 94%) from a peak of 68% (I saw it between 45% and 68% when going back to full processor usage) with all cores running, i.e. it reduced the time the GPU has to wait to be fed. I do this by reducing the overall CPU consumption to (1 - 1/n)*100 %, with n the number of cores, so when coming from 100% there is then one job "waiting to run".
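As a worked example of that formula (a quick sketch, assuming a shell with nproc and awk available):

[pre]
# (1 - 1/n) * 100 is the "use at most X % of the processors" value
# that leaves exactly one core free for feeding the GPU
n=$(nproc)                                                    # e.g. n = 8
awk -v n="$n" 'BEGIN { printf "%.1f%%\n", (1 - 1/n) * 100 }'  # prints 87.5% for n = 8
[/pre]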

With factory settings I had a runtime of ~3500 s for the GPU jobs (CPU time ~600 s). Halving the GPU utilisation factor doubled the runtime to 7600-7800 s, leaving the CPU time at a slightly increased ~750 s. Ouch. But now, with the dedicated core, the estimated remaining runtime of the GPU tasks is already dropping by 2 or 3 seconds per second. So it looks promising, but it will take the rest of the day to collect the numbers.

Cheers,

Steffen

steffen_moeller
Joined: 9 Feb 05
Posts: 78
Credit: 1773655132
RAC: 0

RE: There was only a small

Quote:


There was only a small sampling period on my side, but I have the impression that leaving one CPU core dedicated to feeding the GPU helps GPU utilisation a lot (!!!). Utilisation went up to 92% (varying between 90% and 94%) from a peak of 68% (I saw it between 45% and 68% when going back to full processor usage) with all cores running, i.e. it reduced the time the GPU has to wait to be fed. I do this by reducing the overall CPU consumption to (1 - 1/n)*100 %, with n the number of cores, so when coming from 100% there is then one job "waiting to run".

With factory settings I had a runtime of ~3500 s for the GPU jobs (CPU time ~600 s). Halving the GPU utilisation factor doubled the runtime to 7600-7800 s, leaving the CPU time at a slightly increased ~750 s. Ouch. But now, with the dedicated core, the estimated remaining runtime of the GPU tasks is already dropping by 2 or 3 seconds per second. So it looks promising, but it will take the rest of the day to collect the numbers.

The first results came in. With the spare core, the average runtime for dual-process use of the GPU is down from 7600-8000 s to ~4000-4200 s, with CPU time at ~950 s.
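In terms of throughput per work unit that means, roughly:

[pre]
one task at a time : ~3500 s per task
two tasks at a time: ~4000-4200 s wall time for 2 tasks  =>  ~2000-2100 s per task
[/pre]

so with a spare CPU core, two simultaneous tasks give roughly 1.7x the throughput of the single-task default.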

nanoprobe
Joined: 3 Mar 12
Posts: 40
Credit: 12540756
RAC: 0

I decided to switch back to

I decided to switch back to running one task at a time. Although running two did work, the instability of the power consumption and GPU load that came with it was out of my comfort zone. Even trying with two CPU cores available didn't help in my case.

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 39

Final notice: we expect to

Final notice: we expect to run out of the old format data files in the next 24 hours. The new data format will be shipped automatically as soon as the old sets are depleted. If you're using an app_info.xml file please update to the latest binaries (BRP4 v1.22 or later).
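For anyone unsure which version their app_info.xml currently declares, a quick check like this does the job (a sketch; the project directory path is an assumption and depends on how and where BOINC is installed):

[pre]
# List the app names and version numbers declared in app_info.xml
grep -E '<(name|version_num)>' \
    /var/lib/boinc-client/projects/einstein.phys.uwm.edu/app_info.xml
[/pre]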

Thanks,
Oliver

Einstein@Home Project

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2956353113
RAC: 715699

RE: Final notice: we expect

Quote:
Final notice: we expect to run out of the old format data files in the next 24 hours. The new data format will be shipped automatically as soon as the old sets are depleted. If you're using an app_info.xml file please update to the latest binaries (BRP4 v1.22 or later).


Oliver, perhaps you should consider putting this in the front page news, so that people with the newer versions of BOINC see it through the 'Notice' system.

Sunny129
Joined: 5 Dec 05
Posts: 162
Credit: 160342159
RAC: 0

Great idea, Richard. I'd

Great idea, Richard.

I'd love to ditch the app_info.xml file now, as opposed to waiting for my current queue to run dry, but that means backing up my E@H data folder first if I don't want to unnecessarily force the server to "resend lost tasks" to my host. The problem with that is that I currently have 6 GB of data in my E@H data folder, and I'm still running on an old-fashioned HDD, lol. Since I don't have the patience to back up that much data at HDD speeds, I'll just download the appropriate BRP binary and edit/add to the app_info.xml as necessary for the time being...
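For anyone who does want to take the backup route instead, a plain copy of the project directory with BOINC stopped is enough (a minimal sketch; the paths and service name are assumptions for a Linux package install and will differ on other setups):

[pre]
sudo /etc/init.d/boinc-client stop        # stop BOINC before copying
rsync -a /var/lib/boinc-client/projects/einstein.phys.uwm.edu/ ~/eah-backup/
sudo /etc/init.d/boinc-client start
[/pre]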

hotze33
Joined: 10 Nov 04
Posts: 100
Credit: 368387400
RAC: 0

Thanks for the GPU

Thanks for the GPU utilization factor. It is working quite well.

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 39

RE: RE: Final notice: we

Quote:
Quote:
Final notice: we expect to run out of the old format data files in the next 24 hours. The new data format will be shipped automatically as soon as the old sets are depleted. If you're using an app_info.xml file please update to the latest binaries (BRP4 v1.22 or later).

Oliver, perhaps you should consider putting this in the front page news, so that people with the newer versions of BOINC see it through the 'Notice' system.

Sorry, saw it too late :-) Anyway, we already announced this and it went through as a notice. This thread is just a follow-up.

So far, everything seems to be just fine...

Cheers,
Oliver

Einstein@Home Project

johnprentiss
Joined: 27 Jan 12
Posts: 1
Credit: 2157839
RAC: 0

All of this is French to me.

All of this is French to me. What do I need to do to find out if I am running anonymous? I would like to transition over to the new stuff. Sorry if this is noobish. John

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 316128852
RAC: 335905

RE: All of this is French

Quote:
All of this is French to me. What do I need to do to find out if I am running anonymous? I would like to transition over to the new stuff. Sorry if this is noobish. John


Welcome John ! :-)

If you have simply signed up in the usual way and allowed our standard installation to proceed, then you are not running as an 'anonymous platform'. That mode requires considerable deliberate intervention to set up, and will not happen without your explicit knowledge.
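For the record, the quickest check is to look for an app_info.xml file in the Einstein@Home project directory; the anonymous platform mechanism is driven by the presence of that file, so if it is absent you are not using it (the path below is an assumption, typical of a Linux package install; on Windows the projects folder sits under the BOINC data directory):

[pre]
ls /var/lib/boinc-client/projects/einstein.phys.uwm.edu/app_info.xml
# "No such file or directory"  =>  not running the anonymous platform
[/pre]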

Currently your 'BOINC client' will contact the E@H servers and behave as per any preferences that you have specified - or the default values if not. Your rig will automatically be brought along with the flow of new project work. This thread has detail which is only really relevant to the tweaksters ( bless 'em ) around here. :-)

Cheers, Mike.

( edit ) I see that your computer has no appropriate GPUs mentioned ( video cards that can perform a certain type of computing for us ), so that makes life even simpler for you.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
