From the "See our FAQ page" link, under the GPU utilization factor settings:
What are the GPU utilization factor settings in project preferences? How do I set/use them?
These settings allow users to run multiple GPU tasks concurrently on sufficiently powerful GPUs.
Please note that if you have any doubt about the suitability of your GPU, you should post a query in the help forums to seek advice before changing any of the settings below. For multiple tasks, you need larger amounts of GPU RAM. If you change these settings (with advice or not), it is entirely at your own risk!
Running multiple GPU tasks is likely to increase the loading, power consumption, and operating temperature of your system, and could cause system instability or even hardware damage, particularly if your cooling solution is not adequate for the increased load.
You should only change the factor for the particular type(s) of GPU task(s) that you wish to run on your computer(s). Not all types of tasks may be available at any particular point in time. The factor represents the 'fraction' of a GPU device that a single task will 'consume'. For example, if you wish to run two tasks concurrently per device, you would set the factor to 0.5. Default values are all 1.0 (run 1 task per device).
If you change any of these factors, it will only be applied by your BOINC client after it has downloaded a new task of that particular type. For the impatient, you could use a small increase in the work cache setting to trigger an immediate work fetch. The change will also apply to all previously downloaded work of that type on your computer.
Increasing the number of concurrent tasks running on a GPU can increase the task run efficiency and productivity, but which utilization factor you use will depend on the particular E@H app, the brand and model of GPU, system hardware and resources, what other apps you may have running, the computer's cooling efficiency, and so on.
You can peruse or search the forums for what has worked for other folks or see what works best for your system by trial (and hopefully not much error). Start with 2x tasks (0.5 utilization factor). I have run some apps at 4x (0.25 factor). Be sure to closely monitor GPU temperatures. You have your computers hidden on your account, so no one will be able to provide any specific guidance without details about your system and goals.
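(If you prefer a local override to the web preference, roughly the same per-app factor can also be set with an app_config.xml file in the project directory. The lines below are only a minimal sketch; the app name hsgamma_FGRPB1G is just an example, so check client_state.xml for the names actually in use on your host.)

<app_config>
   <app>
      <name>hsgamma_FGRPB1G</name>   <!-- example app name; use the names from your client_state.xml -->
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>  <!-- fraction of a GPU per task: 0.5 = two concurrent tasks -->
         <cpu_usage>1.0</cpu_usage>  <!-- CPU fraction budgeted per GPU task -->
      </gpu_versions>
   </app>
</app_config>

BOINC Manager should pick the file up via Options > Read config files, or after a client restart.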
Ideas are not fixed, nor should they be; we live in model-dependent reality.
Thank you for the answer. I think my question was not clear enough, sorry. What I'm trying to figure out is how the GPU utilization factor works. I understand that a factor of 0.5 means 2 tasks, 0.25 means 4 tasks, and so on. But what happens when you set a factor like 0.9 or 0.75? Will it be treated as 1 task or as 1.xx tasks? For example, arithmetically 0.8 would translate into 1.25 tasks. Do these fractional tasks make any sense?
A quick Google turned this up and I believe it's still the way things work:
https://boinc.berkeley.edu/trac/wiki/GpuSched
My take on this is that the GPU utilization factor is rounded to give a whole number of tasks, so 0.8 will be treated as either 1 or 0.5. Can't say I know which one, though. EDIT: cecht's post after mine seems to indicate that BOINC rounds up, so anything between 0.51 and 0.99 will be interpreted as 1.
The CPU, on the other hand, is allowed to be overcommitted and would be scheduled to run 1.25 CPU tasks on the one CPU.
If anyone has more up-to-date info, please feel free to share!
Good question. I just tried running GPU utilization factors of 0.75 and 0.55 and saw no difference from a factor of 1 in how many tasks ran concurrently. So it appears that the boinc-client only ever runs a whole number of concurrent tasks per GPU, regardless of the factor. In the past I have run 0.3, 0.33, and 0.333 factors in app_config, and all have the same effect of running 3 concurrent tasks.
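If it helps, one way to read those results (my own guess, not something taken from the BOINC source) is that the client simply runs floor(1/factor) tasks per GPU, which reproduces every case reported in this thread:

# Guessed rule from the results above: concurrent tasks per GPU = floor(1 / factor)
import math

for factor in (1.0, 0.9, 0.75, 0.55, 0.5, 0.333, 0.3, 0.25):
    print(f"factor {factor:>5} -> {math.floor(1.0 / factor)} concurrent task(s) per GPU")

That reading gives one task for any factor above 0.5, two at 0.5, three at 0.3 through 0.333, and four at 0.25.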
I did just now notice that when I set the factor to 0.55, the task that downloaded at the time was entered into my queue as "Ready to start (0.9 CPUs + 0.55 AMD/ATI GPUs)", so the download server recognized the odd utilization factor, even though the boinc-client did not change how it runs. I know that lowering the CPU factor can cause the downloader to provide more tasks per download, because it interprets that as an enhanced system capacity for work completion; whether lowering the GPU factor by odd fractions does the same thing, I don't know.
Ideas are not fixed, nor should they be; we live in model-dependent reality.
Thank you very much for your answers! The details about the CPU are also interesting.