New BRP4 application versions 1.22/1.23

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250437950
RAC: 34982
Topic 196213

Over the weekend we began shipping new versions of the BRP4 application (Linux CUDA 1.23, all others 1.22). These application versions can process a new data file format that we will begin shipping later this week. The new format reduces the data volume of a BRP4 task by almost a factor of two, which benefits both the clients and the server.

Note that previous app versions can't process this new format and will error out on new tasks immediately. Most people should get the new app versions automatically and don't have to do anything.

However, if you are running anonymous platform, you need to update your application configuration (if you're not, you don't need to read any further).

If you are doing this just to run multiple tasks on the same GPU, I suggest waiting a few more days. I'm going to implement a project setting that should allow for easier configuration without having to manually hack an app_info.xml. When it's done I'll let you know in this thread, and we will not switch to the new format before then.

If that doesn't suit your needs and you want to continue manually tuning your app configuration, you'll need to download the new application binaries yourself and update your app_info.xml file. While you're at it, I strongly recommend also updating the GW apps from S6Bucket to S6LV1; the SSE2 binaries for Mac, Linux and Windows are already available (in http://einstein.phys.uwm.edu/download).
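
As a rough sketch, the updated BRP4 entry in an app_info.xml could look something like the following. The binary file name and plan class below are placeholders (take the real ones from the download directory and from your existing entry), and the application name should match the one already in your file; essentially only the version number and the referenced binary change:

  <app>
      <name>einsteinbinary_BRP4</name>
  </app>
  <file_info>
      <name>einsteinbinary_BRP4_1.23_EXAMPLE_cuda</name>  <!-- placeholder: use the real binary name -->
      <executable/>
  </file_info>
  <app_version>
      <app_name>einsteinbinary_BRP4</app_name>
      <version_num>123</version_num>                      <!-- 123 for the Linux CUDA app, 122 for all others -->
      <plan_class>EXAMPLEcuda</plan_class>                 <!-- placeholder: keep the plan class from your existing entry -->
      <avg_ncpus>0.2</avg_ncpus>
      <coproc>
          <type>CUDA</type>
          <count>1.0</count>
      </coproc>
      <file_ref>
          <file_name>einsteinbinary_BRP4_1.23_EXAMPLE_cuda</file_name>
          <main_program/>
      </file_ref>
  </app_version>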

BM

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7220564931
RAC: 970958

Quote:
Note that previous app versions can't process this new format and will error out on new tasks immediately.


How about the opposite direction? Will the new BRP4 application (and the new GW application) process the immediate previous work files? In other words, can we app_info folks download the various required files and make a single transition, or do we need to configure dual-application for each type to handle a mix of old and new?

The answer may help tip the balance for some of us to wait for your new project setting supporting multiple tasks per GPU.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250437950
RAC: 34982

The S6LV1 App is really a new application, with its own workunits, results, validation etc. If you have only S6Bucket in your app_info.xml and not S6LV1, you will not get any more work (e.g. for your CPU) when the last S6Bucket tasks are done.

For BRP4 we released new application versions in preparation for new tasks. The new app versions can process both current and future BRP4 tasks, but an old app version cannot process a new task; it will immediately terminate with an error.

However, the process for updating the app_info.xml is almost the same: you need to replace the application files in both cases, but for S6Bucket/S6LV1 you will also have to change the application name.

For the GW search you can, of course, create entries for both applications, i.e. keep the S6Bucket entry and add another one for S6LV1. This would let you process both S6Bucket and S6LV1 tasks. But getting another application entry right takes extra effort, and I don't think it's really worth it: if all goes well we will issue the first S6LV1 tasks later today or tomorrow, and there will definitely be an overlap period during which both types of tasks are available.
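
A minimal sketch of what "entries for both applications" means in app_info.xml is shown below. I'm assuming the internal application names follow the einstein_S6Bucket / einstein_S6LV1 pattern; check the names in your client_state.xml or your existing app_info.xml if in doubt, and remember that each <app> needs its own <file_info> and <app_version> blocks referencing the right binaries:

  <app>
      <name>einstein_S6Bucket</name>   <!-- existing entry, only useful while S6Bucket tasks are left -->
  </app>
  <app>
      <name>einstein_S6LV1</name>      <!-- new entry for the S6LV1 search -->
  </app>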

BM

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250437950
RAC: 34982

Quote:
I'm going to implement a project setting that should allow for easier configuration without having to manually hack an app_info.xml. When it's done I'll let you know in this thread, and we will not switch to the new format before then.

This has been done, but it has undergone only very preliminary testing so far.

In the E@H preferences you will now find a setting "GPU utilization factor of BRP apps". It is meant to work like the <count> tag in an app_info.xml: it defaults to 1.0, meaning that BOINC reserves a full GPU to run a BRP4 task with the current BRP4 CUDA application. If you want BOINC to reserve less, enter the factor there. For example, if you want to run two tasks on the same GPU, use a factor of 0.5; BOINC will then reserve half a GPU per task.
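
For comparison, this is roughly the manual equivalent inside an <app_version> block of app_info.xml, which the new preference is meant to replace (a sketch, not something to copy verbatim):

  <coproc>
      <type>CUDA</type>
      <count>0.5</count>   <!-- reserve half a GPU per task, i.e. run two BRP4 tasks on one GPU -->
  </coproc>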

NOTE THAT USING THIS SETTING IS PRETTY DANGEROUS. Make sure that you know precisely what you are doing before changing it. Wrong settings may even damage your computer (see e.g. here)! If in any doubt, leave it at the default (1).

If you were previously using an app_info.xml to achieve the same effect, I strongly suggest the following procedure:

1. Set Einstein@Home to receive "No new work".
2. When all the E@H work you already have has been processed, "update" Einstein@Home to report it.
3. Stop the BOINC client.
4. Remove the app_info.xml file (or move it somewhere outside the BOINC data directory for future reference).
5. Go to the web page and modify the E@H preferences.
6. Finally, start the BOINC client again.

Please test & report here.

Note that this version of the project-specific settings page is newer than the scheduler code currently used on E@H. This means that it shows application selection settings (in particular "If no work for selected applications is available, accept work from other applications?" and opting out of "locality scheduling" applications, currently S6Bucket and S6LV1) that have no effect yet.

BM

PS: Kudos to Oliver for doing the worst part of this implementation (fiddling with project_specific_prefs.inc)!

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2956199795
RAC: 716283

Kudos to Oliver indeed. This sounds like something that users at other projects would welcome too, if it could be donated to the common BOINC code pool after we've given it a good test here.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250437950
RAC: 34982

I agree it would be nice if other projects could use this, too.

However, our current implementation of the settings (web page) and their use in the scheduler is based on custom code changes that D.A. explicitly declined to incorporate into the general BOINC code.

Other projects could certainly take the idea, but probably not the implementation directly.

BM

(retired account)
Joined: 28 Sep 11
Posts: 16
Credit: 7357648
RAC: 0

Quote:
For example, if you want to run two tasks on the same GPU, use a factor of 0.5; BOINC will then reserve half a GPU per task.

Quote:
Please test & report here.

Thank you for implementing this on the server side. I switched this value to 0.5 (two workunits) and it works well so far.

BOINC x64 6.12.34, Win7
6core cpu
NVIDIA GeForce GTX 560 Ti (2048MB) driver: 285.62

GPU utilization ~ 66-75 % (roughly similar to one GPUGRID workunit on this card)
G-RAM utilization ~700-750 MB

Best regards

Mark my words and remember me. - 11th Hour, Lamb of God

JLConawayII
Joined: 12 May 10
Posts: 9
Credit: 17317670
RAC: 0

Dangerous you say? Fry my hardware it may? Woohoo!! An excuse to build a new cruncher. ^^

It seems to be working okay. I'm still only getting ~70% utilization on my GTX 260 with 2 WUs running, up from ~50% for one WU. I could try 3, but that would be pushing the limit of my GPU memory.

Logforme
Joined: 13 Aug 10
Posts: 332
Credit: 1714373961
RAC: 0

I have set the utilization factor to 0.5, restarted the BOINC manager and done an Update on the project. Still only one GPU task is running, and it says (0.20 CPUs + 1.00 NVIDIA GPUs) on it.
I don't have an app_info.xml file.
What have I missed?

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250437950
RAC: 34982

Your client requested only 41.65 seconds of CUDA work, and thus got only a single task to run.

BM

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2956199795
RAC: 716283

Quote:

Your client requested only 41.65 seconds of CUDA work, and thus got only a single task to run.

BM


I would still expect BOINC Manager to display (0.20 CPUs + 0.50 NVIDIA GPUs), even with only one task running.

But I would expect the change to take effect only when new work is allocated, not just on a simple project update. He's had about 8 new tasks since he posted, so hopefully it's switched by now.

My GTX 470 is doing just fine, mostly sharing with SETI - though I'm starting to find that the BOINC limit of 4 'venues' is getting a bit restrictive for all the different things I want to do with Resource Share, on top of this setting. (All my other CUDA cards are 512MB 9800GTs, so I can't run multiple tasks on them.)

Thank you for implementing this via the raw 'count' figure rather than the inverse "Run 1, 2, 3... tasks" form, which might have been more intuitive for new users. I'm actually running with count=0.48 here and at SETI, which allows two tasks from either or both of those projects to run; later I'm going to try adding a third project with count=0.51, because I want to rule out running two tasks from that project at once.
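
To spell out the arithmetic behind those numbers (assuming BOINC keeps adding GPU tasks to a card as long as their count values sum to at most 1.00):

  0.48 + 0.48 = 0.96  ->  two tasks from the count=0.48 projects fit on one GPU
  0.51 + 0.48 = 0.99  ->  a task from the count=0.51 project can still share the GPU with one of the others
  0.51 + 0.51 = 1.02  ->  two tasks from the count=0.51 project will never run together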

Looking good so far, I'll let you know how I get on.
