Hi,
How many CPU cores should I reserve per GPU task? For example: 1 CPU for 1 GPU, or 0.5 CPU for 1 GPU?
Thanks for your explanations.
The short answer is “it depends”.
Einstein uses OpenCL for their GPU apps. It also varies with the app (which searches you’ve selected in your project preferences): some apps are more mature and do almost all their processing on the GPU, while others, like the Gravity Wave search on GPU, aren’t as developed. You’ve got your computers hidden, so I can only give general advice.
Nvidia’s implementation of OpenCL tends to require more CPU support so I generally reserve one CPU thread per GPU task. AMD’s implementation of OpenCL is less demanding so you could reserve half a thread per GPU task. I don’t have any AMD GPUs so others who have them will be better placed to advise you.
As MarkJ very correctly said:
It depends!
Your question is somewhat unspecific and covers a very wide range of possible answers, especially since you don't offer any details of your setup that would let us give a customized answer.
The way I (a basic beginner) see it (you probably know all this already) is:
It depends on which CPU, GPU and system you have.
It depends on the cooling capacity of each component and of the rig.
It depends on the load percentage of each component.
It depends on which app of the project is to run (they all have different ways of working).
So check and monitor those parts with suitable tools, and then do a little experimenting.
Sometimes 1 CPU for 0.5 GPU is better ... like already said: it depends.
For finer tuning I personally use app_config, where I define, e.g., 1 CPU for 0.33 GPU (see the sketch below).
Or 0.24 CPU for 0.49 GPU ...
If you'd like more details in German, send me a private message.
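Picking up the app_config mention above, here is a minimal sketch of what a "1 CPU for 0.33 GPU" setting could look like, i.e. three tasks sharing one GPU with a full CPU thread budgeted for each. The app name below is a made-up placeholder; substitute the name BOINC reports for the GPU app you actually run:
<app_config>
    <app>
        <name>some_gpu_app</name>        <!-- placeholder, not a real app name -->
        <gpu_versions>
            <gpu_usage>0.33</gpu_usage>  <!-- 0.33 of a GPU per task, so three tasks fit on one GPU -->
            <cpu_usage>1.0</cpu_usage>   <!-- budget one full CPU thread per GPU task -->
        </gpu_versions>
    </app>
</app_config>
The file goes in the project's folder inside the BOINC data directory, and the client picks it up after Options → Read config files in BOINC Manager (or a restart).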
Regarding my computer, I have an AMD 3950X with two 2080 Supers; perhaps that helps. Thanks.
Always use 1 CPU per GPU task. Only run 1 task per card.
Download and run a program like GPU-Z or MSI Afterburner, both of which work with all brands of GPUs; you can then use it to see how much of the GPU each work unit you are running is using. The short answer is, as MarkJ said, Nvidia generally takes more to keep the GPU fed and running at its max, but if your GPU is only running in the 60 to 70% range you may want to run more than one task at a time. You will need an app_config.xml file to do that, but once you get one running it's pretty easy to make changes so your system runs best for you.
Some people prefer to max out everything so they get through the maximum number of work units in the shortest time, while other people tend to do as Zalster said and let the GPU just run somewhere below its max and let it run for years on end. I'm more towards Zalster's way, but for instance at MilkyWay my 7970 can do 3 work units at a time while using only 0.5 or even 0.3 of a CPU core to do it. Your 2080s could easily do more than that there, but here, where the app is better written, running 3 work units at a time may overwhelm even your 2080s.
Here is a copy of my app_config.xml file from Amicable Numbers to give you an example of how it looks:
<app_config>
    <app>
        <name>ammicable_10_21</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>0.05</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
Obviously you would have to change the <name> line to match the app you are running, but the <gpu_usage> line tells BOINC to run 2 work units at a time on the GPU, and the <cpu_usage> line budgets 0.05 of a CPU core for each one.
mikey wrote:
You will need an app_config.xml file to do that...
That's not true here at Einstein: you could change the GPU utilization factor in the project preferences to run more than one task at a time on a GPU. But be aware that the setting only applies after new work has been downloaded. The app_config.xml approach gives more control, but it also has a downside: if you want to back out, you'll have to reset the project to revert the changes made by the app_config.xml file.
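For reference, a minimal sketch of the app_config.xml route at Einstein might look like the following. It assumes the Gamma-ray pulsar GPU search and that its app name is hsgamma_FGRPB1G; check the exact name in your event log or client_state.xml before using it:
<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>    <!-- assumed app name; verify it in your event log -->
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>  <!-- two tasks per GPU, equivalent to a utilization factor of 0.5 -->
            <cpu_usage>1.0</cpu_usage>  <!-- reserve a full CPU thread per GPU task -->
        </gpu_versions>
    </app>
</app_config>
If you later want to undo it, remember the caveat above: removing the file isn't enough on its own, you'd reset the project to get back to the project defaults.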
mikey wrote:
Some people prefer to max out everything so they get through the maximum number of work units in the shortest time, while other people tend to do as Zalster said and let the GPU just run somewhere below its max and let it run for years on end.
What do you mean by this statement?
I've been running tasks on my GPUs for "years on end" and utilizing them to do as much work as possible, and they've just kept on going.
As long as you keep them relatively cool they just keep on going.
Holmis wrote:
That's not true here at Einstein: you could change the GPU utilization factor in the project preferences to run more than one task at a time on a GPU...
You are right, I had forgotten about that!
Quote:
What do you mean by this statement?
I meant that by tweaking the memory settings etc. you can squeeze more RAC out of your GPU at the expense of its normal long lifetime. I too run mine at the standard settings, but I do tweak the fan speed settings as the cards get older and run hotter, by speeding up the fans a bit. Gamers often tweak their GPU settings to get the maximum performance out of them; doing that for crunching can be counterproductive, ending up with both invalid results and a shorter lifespan.
Nils Bruttin wrote:
Regarding my computer, I have an AMD 3950X with two 2080 Supers; perhaps that helps. Thanks.
You didn’t say how much memory. Given your CPU is 16 cores/32 threads, I would probably run 30 CPU/2 GPU tasks, assuming it has enough memory. You could try 28 CPU/4 GPU. You’ll need to see which produces more work. There have been issues with the OpenCL tasks not wanting to share a GPU (they produce invalid results).
I have 32 GB of RAM.
28 CPU/4 GPU means 4 CPUs for 4 × 0.5 GPU, right?
Thanks
No. I’d reserve 1 CPU for each GPU task, which is why I had 28/4.
Like I said before, it depends. Some tasks need more CPU support than others. Try it at 30/2 for a bit to see what the throughput is like, and after a couple of days try 28/4 and compare.