You don't have to write cc_config.xml by hand. BOINC writes it automatically if you just modify one of the stock parameters for the file via the Manager: go to Options >> Event Log options in the Manager, toggle the sched_op_debug setting on, and save the settings.
That automatically populates a complete cc_config.xml file with all possible parameters.
Then you can add your GPU exclude statement in the <options> section of the file and re-read the settings via the Manager's Options >> Read config files menu option.
You do need to restart BOINC for the GPU exclude statement to take effect, though.
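For reference, the generated file wraps everything in a <cc_config> root element, and the exclude statement goes inside <options>. A minimal sketch (the project URL and device number here are just examples, for Einstein@Home and an iGPU detected as device 0):

```xml
<cc_config>
  <options>
    <!-- keep this project off GPU device 0 (the iGPU in this example) -->
    <exclude_gpu>
      <url>http://einstein.phys.uwm.edu/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```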
I have very little experience with programming, and that was decades ago. What does the body of an XML file look like? There was no cc_config.xml in my BOINC installation, so I have to write it from scratch. At the moment it is this:
start
<exclude_gpu>
<url>http://einstein.phys.uwm.edu/</url>
<device_num>0</device_num>
</exclude_gpu>
end
In the BOINC messages, device 0 is my iGPU and device 1 is my Radeon 6600.
Even with 'begin' instead of 'start' in the first line, I still get the error message: missing start tag in cc_config.xml.
Kind regards and happy crunching
Martin
If you disable the iGPU in the BIOS, you won't have to mess around with a cc_config file at all. Just disable it.
Hallo Ian & Steve! Thank you for your answer.
I want to use the iGPU for my daily work and leave the GPU undisturbed, crunching E@H all the time. I believe this is not possible when the iGPU is disabled in the BIOS.
Kind regards and happy crunching
Martin
It seems there is another parameter in the cc_config.xml file that basically says to use only the most powerful GPU for crunching. Set it to zero instead of one and BOINC should only use the discrete GPU.
I think it's called something like "use all GPUs".
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
that parameter needs to be set to 1, not 0.
Unless I am confused, the parameter needs to be zero, based on the documentation: https://boinc.berkeley.edu/wiki/Client_configuration
you're confused. and not understanding how BOINC determines what is the "most capable" GPU.
Quote:
<use_all_gpus>0|1</use_all_gpus>
If 1, use all GPUs (otherwise only the most capable ones are used). Requires a client restart.
His issue is partly that he lacks this parameter. BOINC thinks the iGPU is the most capable because it reports more VRAM than his discrete card (12 GB for the iGPU vs. 8 GB for the RX 6600), and VRAM amount has higher priority than speed (likely FLOPS) in BOINC's logic for determining the most capable GPU.
He needs it set to '1', and he will also need an exclude (or ignore) statement to exclude the iGPU.
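Putting that together, a cc_config.xml for this situation could look like the sketch below (assuming device 0 is the iGPU, as the BOINC messages indicated; note that the whole file must be wrapped in a <cc_config> root element, which is likely what the earlier "missing start tag" error was complaining about):

```xml
<cc_config>
  <options>
    <!-- 1 = use all GPUs, not just the one BOINC rates "most capable" -->
    <use_all_gpus>1</use_all_gpus>
    <!-- then keep Einstein@Home off the iGPU (device 0) -->
    <exclude_gpu>
      <url>http://einstein.phys.uwm.edu/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

Restart the BOINC client after saving so both settings take effect.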
Yet another technical detail on how BOINC actually works that has flown right over my head.
:(
As Richard Haselgrove answered in an old post, "How BOINC chooses the most capable GPU": BOINC tries to use the "best" GPU in a system. BOINC's assessment of "best" is done in the '_compare' functions in coproc_detect.cpp. If I'm reading it right, the priority order for NVidia GPUs is:
1) compute capability
2) CUDA version
3) Available memory
4) Speed
I know you're just quoting Richard's old post, but the link is broken since BOINC moved from Trac to GitHub.
Also, the referenced coproc_detect.cpp file doesn't exist and hasn't been used since circa 2012. Those functions are now in gpu_nvidia.cpp for NVIDIA and gpu_amd.cpp for AMD.
AMD specific priority:
1. double precision support
2. local RAM (VRAM size)
3. speed (peak flops)