Wasting resources.

adrianxw
Joined: 21 Feb 05
Posts: 242
Credit: 322,654,862
RAC: 0
Topic 206357

I ran Belarc to check my system temperatures and happened to notice that not all of my CPUs were running at 100%. Puzzled, I started stopping BOINC programs to see what was causing it. It was Einstein. Other projects that use the GPU do not do that. Are you aware of this, and if so, what is your justification?

Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.

Christian Beer
Joined: 9 Feb 05
Posts: 595
Credit: 171,622,946
RAC: 285,433

I guess you are referring to the current FGRPB1G GPU application. This app uses OpenCL, which needs one full CPU core when run on Nvidia GPUs (due to limitations of the Nvidia driver). You can modify the behavior on AMD GPUs by adding an app_config.xml for your platform and adjusting it manually:

    <avg_ncpus>0.500000</avg_ncpus>
    <max_ncpus>0.500000</max_ncpus>

See the original thread, where more help from users who use this is available.
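For reference, those two tags go inside an <app_version> block of the app_config.xml file in the project directory. A minimal complete file would look something like this (a sketch only - I haven't tested it on AMD, and the plan_class value is a guess, so check the plan class your own tasks show in BOINC Manager):

    <app_config>
      <app_version>
        <app_name>hsgamma_FGRPB1G</app_name>
        <plan_class>FGRPopencl1K-ati</plan_class> <!-- a guess; use the plan class from your own task list -->
        <avg_ncpus>0.500000</avg_ncpus>
        <max_ncpus>0.500000</max_ncpus>
      </app_version>
    </app_config>

After saving the file, use "Options -> Read config files" in BOINC Manager (or restart the client) to pick up the change.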

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,870
Credit: 115,698,136,691
RAC: 34,708,019

Christian Beer wrote:

You can modify the behavior on AMD GPUs by adding an app_config.xml for your platform and adjusting it manually:

    <avg_ncpus>0.500000</avg_ncpus>
    <max_ncpus>0.500000</max_ncpus>

See the original thread, where more help from users who use this is available.

I'm not sure if you intended to reference a particular message (which is actually talking about the app for NVIDIA GPUs) or perhaps the complete thread, which should have some examples showing the use of app_config.xml files.  The OP is using AMD Pitcairn series GPUs.

Also, if this mechanism is to be used to control CPU and GPU usage, the documented options need to be

<cpu_usage>0.5</cpu_usage>
<gpu_usage>0.5</gpu_usage>

if the intention is to run 2 simultaneous GPU tasks whilst reserving just a single CPU core.

I use a lot of AMD Pitcairn series GPUs, from HD7850s to R7 370s, all with 2GB GPU RAM, which should be pretty similar to what the OP has.  The biggest difference between our respective aims would be that the OP supports CPU apps on other projects whilst I don't.  I leave lots of CPU cores idle these days as a means of reducing power and heat whilst still trying to maximise GPU output.  I suspect adrianxw would rather maximise CPU use whilst at least getting reasonable output from the GPUs.

The above options should improve GPU output by a measurable (but not all that large) amount.  They will not improve CPU output because there will still be a CPU core that is not crunching a CPU task.  It will be a bit more 'utilized' because it will be supporting 2 GPU tasks rather than 1.  I know this works well from direct experience.  I also have a host with 2 Pitcairn series GPUs (R7 370s) where the 4 concurrent GPU tasks are supported by just a single CPU core in a Pentium dual core host.  The GPU run times are pretty much the same, indicating there isn't a problem even with this configuration.

I would suggest a different option if the desire is to use all CPU cores if possible, even though it would probably impact on GPU run times to some extent.  This is shown in the complete app_config.xml file below.

<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.4</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

The idea here is still to run 2 GPU tasks but not to reserve any CPU cores (since 2 x 0.4 < 1).  I've never tried this with the current FGRPB1G app.  There have been improvements in the app since initial release so there's some chance it might work.  It would have been a disaster with the previous BRP6 app - certainly with the early versions before HB made his improvements.  I think it's probably worth a try.  If the GPU performance drops off a cliff, the easiest remedy would be to create a 'free' core by setting the preference to allow BOINC to use 87% of the CPU cores (7 out of 8 threads).  Another option would be to go back to a single GPU task by setting gpu_usage back to 1 while keeping cpu_usage less than 1 (0.4 would be fine).  Perhaps the GPU performance wouldn't be as badly affected if only a single task was running.
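If a file is easier than the web preference, that 'free' core can also be created locally: a minimal global_prefs_override.xml in the BOINC data directory should do the same job (87.5% works out to exactly 7 of 8 threads), followed by "Options -> Read local prefs file" in BOINC Manager:

<global_preferences>
  <max_ncpus_pct>87.5</max_ncpus_pct>
</global_preferences>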

The only sure way to get the best performance consistent with the OP's aims is to try the various options to see what works best.  It's not something that is predictable without experiment.

 

Cheers,
Gary.

Christian Beer
Joined: 9 Feb 05
Posts: 595
Credit: 171,622,946
RAC: 285,433

Hi Gary,

yes, your approach is better. I started to write about Nvidia but then realized Adrian is using AMD, which I don't have experience with, so I tried to give my best advice using my Nvidia config example.

@Adrian please follow Gary's advice.

adrianxw
Joined: 21 Feb 05
Posts: 242
Credit: 322,654,862
RAC: 0

I'll do that, see how it goes, and experiment a bit. There are other GPU projects that do not do this, and the majority of projects do not use the GPU at all. I may continue crunching Einstein, but not if the cost is being paid by my other projects.

Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
