Jonathan Jeckell wrote: Each time I do it BOINC spits out an error on the plan_class.
Make a copy of your state file (client_state.xml) somewhere safe so you don't disturb the real thing and can easily discard it when done. Open this copy in a plain text editor of your choice and search for '<result>' (without the quotes). The block between the <result> and </result> tags contains the details of one of your current tasks. You should find many of these blocks, one per current task. Within each block will be a line <plan_class>xxxxxxxx</plan_class> where the xxxxxxxx is exactly the text you need to use in app_config.xml. Be careful to look at a range of <result> blocks so that you can properly determine all the different plan_classes being used on your machine and the type of task each applies to.
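For reference, a trimmed-down sketch of what one such block might look like; the task name and plan_class shown here are examples taken from later in this thread, and most of the other elements inside a real <result> block are omitted:

```xml
<result>
    <name>LATeah0012L_892.0_0_0.0_811985_1</name>
    ...
    <plan_class>FGRPopencl1K-nvidia</plan_class>
    ...
</result>
```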
If you are absolutely certain that you are using the correct plan_class, exactly as listed in the state file (you did copy and paste precisely, didn't you :-) ) but you are still seeing the error message, then all I can suggest is that either there is a bug in the implementation of that particular extension to the app_config.xml mechanism or that there is something unusual going on with plan_classes here at Einstein. I did see (some time ago) that alternative mechanism when it was added to the documentation. Did you note and understand the particular quote from the documentation that I've listed below? I didn't check but I presume you are using a version later than 7.2.39?
Quote:
Each <app_version> element specifies parameters for a given app version; it overrides <app>. New in 7.2.39
I've been aware of this option for a while but have never needed to use it so I have no relevant experience as to whether or not it works as advertised.
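For what it's worth, the <app_version> form that the quote describes would look something like this. The app name and plan_class are examples drawn from further down this thread, and the resource values are placeholders, not a tested recommendation:

```xml
<app_config>
  <app_version>
    <!-- app name and plan_class exactly as found in client_state.xml -->
    <app_name>hsgamma_FGRPB1G</app_name>
    <plan_class>FGRPopencl1K-nvidia</plan_class>
    <!-- illustrative resource settings only -->
    <avg_ncpus>1</avg_ncpus>
    <ngpus>0.5</ngpus>
  </app_version>
</app_config>
```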
I appreciate the clarification. That explains the purpose of each of those control files much more clearly.
I replied when I wasn't at my best (long, hard workweek and quite tired); afterwards I got the feeling I was too harsh, and I'm happy you didn't take offense at me being vague. It was also an attempt at getting you to do a bit of research and thinking to learn how this works, instead of just serving it up on a silver platter.
Jonathan Jeckell wrote:
I'm still working through this though. Each time I do it BOINC spits out an error on the plan_class. I used the names you provided, tried using the ones here https://einsteinathome.org/apps.php?xml=1 and most recently tried to use the actual application names for the executable in the project folder on my hard drive.
The <app_name> or <name> is hsgamma_FGRPB1G
If you want to confirm it, open Boinc Manager and in "Options" -> "Event log options..." enable "cpu_sched". This will give you messages like this in Boinc's Event log (Tools -> Event log). (You could of course create a cc_config.xml to enable this, but what's the point when Boinc will do it for you!)
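If you do prefer the file route, a minimal cc_config.xml for this flag uses the standard Boinc log_flags syntax:

```xml
<cc_config>
  <log_flags>
    <!-- log a line each time the scheduler starts or preempts a task -->
    <cpu_sched>1</cpu_sched>
  </log_flags>
</cc_config>
```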
13/02/2017 00:03:11 | Einstein@Home | [cpu_sched] Starting task LATeah0012L_892.0_0_0.0_811985_1 using hsgamma_FGRPB1G version 118 (FGRPopencl1K-nvidia) in slot 0
Everything between "using" and "version" is the "app_name" or "name" used in app_config.xml, and what's shown in the parentheses is the plan_class, in this case "FGRPopencl1K-nvidia".
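Putting those two pieces together, an app_config.xml for that task type might look like this; the 0.5 GPU / 1 CPU figures are purely illustrative, not a tuning recommendation:

```xml
<app_config>
  <app>
    <!-- the "name" taken from the event log line above -->
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <!-- 0.5 means two tasks share one GPU; adjust to taste -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```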
Jonathan Jeckell wrote:
This https://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration isn't terribly explicit on what exactly should go in the plan_class and I'm going to simply try NVIDIA and ATI, or try it without plan_class altogether and figure out where else I can differentiate the two.
That's probably because it has to consider that the values will vary between projects, but I do agree that it could be a bit more clear on how to get hold of the appropriate info.
Jonathan Jeckell wrote:
I do appreciate all your patience and help so far though. Thank you for taking the time.
Happy to try and help, especially now when I'm a bit more well rested!
Gary Roberts wrote:
I've been aware of this option for a while but have never needed to use it so I have no relevant experience as to whether or not it works as advertised.
I've used this option to differentiate between an Intel iGPU and a Nvidia GPU in the past and I see no reason for it not to work now.
Just to follow up and to thank you properly, it's been up and running for most of a week. It turned out to be a minor typo all along (change blindness).
Thanks again,
I am trying to do the same thing as Jonathan but with two AMD gpus. I'd like to run 2WUs on one AMD gpu and 1WU on another AMD gpu. Is it possible? Any tips are welcome.
Open Boinc Manager and check the Tasks tab: in the Application column, if both of the GPUs run tasks with the same plan_class (the part within the parentheses) then you're out of luck, as Boinc doesn't support the kind of fine-grained control you're after.
But if different plan_classes are used then you should be able to get it working by following my example in Message 155274.
Thanks for the response. Unfortunately, they are running tasks with the same plan_class.
After some thinking I got a new idea: you could try adding the line <max_concurrent>3</max_concurrent>.
Something like this should get you 3 GPU tasks running, and since they are set to use 0.5 GPUs each, one GPU should run 2 and the other 1, but I don't think you can control which GPU runs 2:
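The example itself doesn't seem to have survived in this copy of the thread; a plausible reconstruction, using the standard <app> syntax and the app name from earlier in the thread (treat the exact values as assumptions inferred from the description):

```xml
<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <!-- cap the total number of running tasks at 3 -->
    <max_concurrent>3</max_concurrent>
    <gpu_versions>
      <!-- 0.5 GPUs per task: two tasks fit on one GPU -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.33</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Three tasks at 0.33 CPUs each add up to roughly the one reserved core mentioned below.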
It will also reserve 1 CPU core to support the GPUs.
As we're not trying to differentiate tasks by different plan_classes that part can be left out and the first example in the documentation can be used.
"...but I don't think you can control which GPU runs 2." That is exactly what I am trying to do.
I have an RX 480, an RX 460 and an older HD 7790 in a Linux system. The two weaker GPUs can only run 1 WU, or 2 WUs at most, while the RX 480 can run more, so the two slower GPUs are holding back the RX 480. I think the way to get the most out of my system is to run 3 WUs on the RX 480 and 2 WUs each on the two slower ones.
The few search results that Google found regarding running two clients seem to report problems and no solutions.
If you are an advanced user you could do what you want by running 2 instances of Boinc and fine-tuning each instance to each GPU.
But be aware: don't try this unless you really know what you are doing.
If you decide to do it, there are a few examples of how to do that in the S@H forums. I don't remember seeing anything related here.
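The usual way to pin each client instance to one GPU (my assumption about the setup, not something spelled out in this thread) is a per-instance cc_config.xml that excludes the other devices:

```xml
<!-- cc_config.xml for the instance that should NOT use GPU 0 -->
<!-- device_num values come from the "Coprocessors" lines logged at
     client startup; the number below is a placeholder -->
<cc_config>
  <options>
    <exclude_gpu>
      <url>https://einsteinathome.org/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

Each instance then gets its own data directory and its own app_config.xml, so the task multiplicity can differ per GPU.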
This is something to start with. I know it's a little old, but it was the one I started with; you just need to adapt it to run with AMD stuff. I've only tried it in a Windows environment.
Actually I ran with 3 instances for a long time, since I needed to bypass the 100 GPU WU limitation on SETI and to optimize mixed GPU families on the same host. Obviously it works with E@H too, but since I now only use the same GPU model I don't need to do that anymore, and E@H doesn't have the 100 WU limitation.
http://vyper.kafit.se/wp/index.php/2011/02/04/running-different-nvidia-architectures-most-optimal-at-setihome/