hsgamma using 85-95% of integrated GPU; 0% of dedicated

G J B
Joined: 23 May 05
Posts: 1
Credit: 7823158
RAC: 0
Topic 225393

Laptop running an Intel Core i7-8750H (6 cores, 2.2 GHz unboosted) and an Nvidia RTX 2060 Mobile. Compute prefs set to 25% of CPUs, 30% of the time. Also running Folding@Home on its 'Light' power setting. I haven't added any new apps, changed any settings, or subscribed to any new distributed computing projects.

Since receiving hsgamma assignments, my CPU ramps right up to 100°C, and BOINC won't let me specify GPU preferences to rein it in.

How can I scale it back, and/or shift the work to the dedicated GPU?

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117699295793
RAC: 35102911

G J B wrote:
Compute prefs set to 25% of CPUs, 30% of the time.

Please be aware that '30% of the time' means run it full bore for 1 second, then idle for ~2.33 seconds.  That's likely to create thermal cycling (expansion and contraction effects) as the cores heat up and cool down.  I'm no expert, but that doesn't sound particularly good to me.  Apart from that, unless you have a really good cooling solution (100°C suggests you don't), high-intensity compute loads on laptops carry quite a bit of risk.
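A quick check of where the ~2.33 s comes from, taking the 1-second run slice as given: with run fraction f = 0.30,

   idle_time = run_time × (1 − f) / f = 1 s × 0.70 / 0.30 ≈ 2.33 s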

G J B wrote:
BOINC won't let me specify GPU preferences.

With BOINC, you either use a GPU or you don't.  You can't throttle a GPU the way you can throttle CPUs.

G J B wrote:
How can I .... shift the work to the dedicated GPU?

By telling BOINC not to use the internal GPU.

When BOINC starts, the event log shows it detecting the GPUs.  By default, BOINC uses what it considers to be the 'best' GPU, and for some reason it seems to be deciding that the Intel GPU is 'best'.  You can configure BOINC to change that.  Read through the Client Configuration section of the BOINC User Manual for options like <ignore_intel_dev>N</ignore_intel_dev> and others.  You need to create a cc_config.xml file; there are full instructions in the documentation.  You just need to read it carefully and ask specific questions if anything is not clear.  I don't need this level of control myself, so I don't have first-hand experience with these options for turning specific GPUs on or off.
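To illustrate, here's a minimal cc_config.xml sketch for ignoring the integrated GPU.  The device number 0 is an assumption; check the GPU detection lines at the top of your own event log, and treat the Client Configuration page of the BOINC User Manual as the authority on the syntax:

   <cc_config>
      <options>
         <!-- Don't use Intel GPU device number 0.
              (0 is an assumption; use the device number
              BOINC reports in the event log at startup.) -->
         <ignore_intel_dev>0</ignore_intel_dev>
      </options>
   </cc_config>

Put the file in the BOINC data directory, then restart the client or re-read the config files from the Manager.  If you'd rather exclude the Intel GPU for this project only, the <exclude_gpu> option in the same file does that per project.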

Cheers,
Gary.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2959312817
RAC: 706207

It's also worth looking at the server log for the computer. So far, I've seen:

2021-05-14 03:55:41.9128 [PID=22081]    [send] CUDA: req 0.00 sec, 0.00 instances; est delay 0.00
2021-05-14 03:55:41.9128 [PID=22081]    [send] Intel GPU: req 26032.25 sec, 0.00 instances; est delay 0.00

'CUDA' would be the RTX 2060 - it seems as if you're not even requesting work for that device.

2021-05-14 03:55:41.9556 [PID=22081]    [version] NVidia compute capability: 705
2021-05-14 03:55:41.9556 [PID=22081]    [version] CUDA compute capability required max: 699, supplied: 705

That looks as if the RTX 2060 is too recent to be supported on this project - which would be surprising. But maybe this one's OK:

2021-05-14 03:55:41.9556 [PID=22081]    [version] Checking plan class 'FGRPopenclTV-nvidia'
2021-05-14 03:55:41.9556 [PID=22081]    [version] parsed project prefs setting 'gpu_util_fgrp': 1.000000
2021-05-14 03:55:41.9556 [PID=22081]    [version] GPU RAM calculated: min: 1000 MB, use: 750 MB, WU#550489120 CPU: 429 MB
2021-05-14 03:55:41.9556 [PID=22081]    [version] NVidia compute capability: 705
2021-05-14 03:55:41.9556 [PID=22081]    [version] Peak flops supplied: 5.76e+11
2021-05-14 03:55:41.9556 [PID=22081]    [version] plan class ok
