I want to have at most "X" Einstein work units for my GPUs. I looked in the FAQ and read something about a daily quota, but did not find that parameter under the project preferences. Looking at the cc_config wiki, I did not see anything other than a max # of concurrent tasks, which does not apply.
I googled for app_config and Einstein but found only a few references, such as this one, and no wiki or reference page with details; the only mention I could find was in a forum post.
Some projects have a "max jobs" setting, which would work for me, but Einstein does not offer that parameter.
I did try setting the resource share to "1" but got too many downloads. I will try 0 and see if that works. Ideally I would like to run exactly 1 job per GPU, with no (Einstein) queue, until the main project comes back online.
This project is a fallback (on one of my rigs): if the main project (SETI) goes offline, I want to process Einstein. FWIW, I do have a rig dedicated entirely to Einstein.
[EDIT] Forgot to mention that I end up aborting the remaining Einstein tasks when the main project comes back online, as I know most of them will never complete by the deadline.
[EDIT-2] Setting the resource share to 0 at BAM! did not work. After synchronizing my rig with the BAM! manager, the 1% share went to 100% instead of 0%, and when I refreshed the page at boincstats the 0 was changed to "-1", i.e. "project default". I actually do have 0 set at the project. Anyway, it seems like I will be aborting about 75 tasks every Tuesday when SETI goes back online.
app_config.xml is documented in the BOINC user manual under 'Project-level configuration'.
https://boinc.berkeley.edu/wiki/Client_configuration#Project-level_configuration
That was one of the first things I read. It is an outline or guideline for how to make an app_config.xml.
It is missing all the "procedures", such as:
<cmdline>-nobs 1</cmdline>
So where is all that documented?
i.e., THE STUFF THAT GOES INTO THE APP_CONFIG THAT IS UNIQUE TO JUST ABOUT EVERY PROJECT.
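To be fair, the manual does show where such a command line sits; it's the flags themselves that are only documented by each application's author, usually somewhere in the project's forums. As best I can tell, the element goes in an <app_version> block like this (the app name and plan class below are made-up placeholders; the real names are in client_state.xml):

<app_config>
    <app_version>
        <app_name>setiathome_v8</app_name>   <!-- placeholder: use the name from client_state.xml -->
        <plan_class>cuda90</plan_class>      <!-- placeholder plan class -->
        <cmdline>-nobs 1</cmdline>           <!-- flag meaning is defined by the app, not by BOINC -->
    </app_version>
</app_config>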
Command lines, as you say, are project-specific: more than that, they're application-specific.
But what you requested doesn't involve a command line. "At most 'X' Einstein work units for my GPUs" involves the generic <max_concurrent> (in either of its forms), and that is documented in the manual. Most elements in app_config.xml are optional: you don't need a command line, so leave it out.
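For example (the app name below is just a placeholder; the real names are listed in client_state.xml):

<app_config>
    <project_max_concurrent>2</project_max_concurrent>   <!-- one form: cap across the whole project -->
    <app>
        <name>hsgamma_FGRPB1G</name>                     <!-- placeholder app name -->
        <max_concurrent>1</max_concurrent>               <!-- other form: cap per application -->
    </app>
</app_config>

Note that both forms limit how many tasks run at once, not how many are downloaded.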
Thank you for responding.
In projects\www.worldcommunitygrid.org I have the following app_config.xml:
<app_config>
<project_max_concurrent>14</project_max_concurrent>
</app_config>
This works fine, and I have 14 tasks running on this, my desktop development system.
However, I also have 5 other WCG apps with a total of 28+9+34+1+2 work units, lasting a total of 8 days: the "queue". This is fine for this desktop, but not for my "SETI rig" that has Einstein as a fallback project. There I only want exactly X tasks, where X is the number of GPUs. Some of these GPUs could run 2 concurrent tasks, which makes things complicated, but sticking with just 1, it seems I cannot set the queue to zero so as to get exactly one task and none waiting, except for a download now and then when an Einstein task finishes. Maybe this cannot be done, and maybe that contributed to that BOINC 7.15.0 "secret" version the SETI survivalists use.
[EDIT] Supposedly setting the resource share to 1 (or is it 0?) should force this 1-to-1 upload/download, but 0 became 100% when I tried it at BAM!. I am going to try 1 again this coming Tuesday when SETI goes offline and see what happens. When SETI comes back online, if I have another 100+ Einstein tasks like last Tuesday, I will just abort them.
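For the record, here is my reading of what an Einstein app_config.xml would look like to cap running tasks at the GPU count (the app name is a placeholder; the real one is in client_state.xml). As far as I can tell, though, <max_concurrent> caps what runs, not what gets downloaded, so it would not empty the queue:

<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>        <!-- placeholder: take the real name from client_state.xml -->
        <max_concurrent>2</max_concurrent>  <!-- X = number of GPUs -->
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>      <!-- 0.5 = two tasks per GPU; use 1 for one task per GPU -->
            <cpu_usage>0.5</cpu_usage>
        </gpu_versions>
    </app>
</app_config>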
Try going to Account, Preferences, Project and setting your resource share to zero; that should limit the number of workunits you get for the project, and you will not build up a cache of workunits for it. This will ONLY work if you have a resource share above zero for another GPU project, in your case SETI; otherwise it will give you a whole stack of workunits, thinking you have messed-up settings. I have never tried this with 2 GPUs in a system, so I don't know if it will give you 2 workunits, but it should, as long as your cc_config.xml file says <use_all_gpus>1</use_all_gpus>.
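For reference, a minimal cc_config.xml with that option would look like this (it lives in the BOINC data directory; tell the client to re-read the config files, or restart it, after editing):

<cc_config>
    <options>
        <use_all_gpus>1</use_all_gpus>   <!-- by default BOINC only uses the "best" GPU -->
    </options>
</cc_config>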
Tried setting 0 over at boincstats, but it got changed to -1. This caused 100% to show up after a sync.
I posted about the problem of not being able to set 0 for resources:
https://www.boincstats.com/forum/7/12283,1
I may have to disconnect from BAM! so as to configure the client to 0.
[EDIT] Actually, I do have 0 for resources on the project website, but it is ignored since I use an account manager. This will get complicated when I disconnect from BAM!. I also use BoincTasks, which (I am guessing) allows me to set resources once I disconnect.
OK, I disconnected and then re-connected to BAM!, and now have resources = 0 showing up in the BOINC client (on my PC) for Einstein and 100% for SETI, so I will see what happens when SETI goes offline next Tuesday.
Might work!!
This was a bug that Willie just fixed!!!! https://www.boincstats.com/forum/7/12283,1
Hopefully this solves the problem of allocating a single task at a time!
Bumping this back up. I seem to be having the same problem.
I have the SETI resource share set to '100' and the Einstein resource share set to '0'. The BOINC manager recognizes this setting but still downloads too many Einstein tasks; it's holding about 100 on each system.
This is fine when SETI is down, but whenever SETI WUs show up, it will not finish processing the Einstein WUs that I already have. It also still continues to request more Einstein work while SETI WUs are present.
What I want to happen is this: when SETI tasks are present, do NOT get any more Einstein work. If Einstein work is present from when SETI was down and SETI then comes back up, at least finish the Einstein work that I already have before continuing to process the new SETI work (and then stop requesting new work).
How do I do this?
Currently it looks like I have to manually monitor when work is flowing from SETI, set NNT (No New Tasks) on Einstein, suspend SETI, allow the Einstein work to finish, resume SETI, and then release NNT for Einstein when it looks like SETI goes down again. That's a tedious process. Any way to avoid it?
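In case scripting it helps, here is a rough, untested sketch of automating that dance with boinccmd (it assumes boinccmd is on the PATH, the client allows local GUI RPC, and the project URLs match what boinccmd --get_project_status reports on your machine):

import subprocess
import time

# Master URLs as typically registered on the client; verify against
# boinccmd --get_project_status before relying on them.
SETI = "http://setiathome.berkeley.edu/"
EINSTEIN = "http://einstein.phys.uwm.edu/"

def boinccmd(*args):
    # Run boinccmd and return its stdout as text.
    return subprocess.run(["boinccmd", *args],
                          capture_output=True, text=True).stdout

def has_tasks(project_url):
    # True if --get_tasks lists at least one task for the given project URL.
    return project_url in boinccmd("--get_tasks")

while True:
    if has_tasks(SETI):
        # SETI work is flowing: stop requesting new Einstein work.
        # Tasks already downloaded still run to completion.
        boinccmd("--project", EINSTEIN, "nomorework")
    else:
        # SETI is dry: allow Einstein to fetch work again.
        boinccmd("--project", EINSTEIN, "allowmorework")
    time.sleep(600)   # poll every 10 minutes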
This seems to be a BOINC problem and not a project problem.
BOINC should honor the 0 resource share for any project and not request work from it while work from a non-zero project is available.
Maybe post about this on the BOINC message boards?