Many Nvidia GPUs do not seem to run more than one gamma-ray task at a time effectively.
This might be true of gravitational-wave tasks too.
===edit===
The Nvidia GPU manager for Windows will tell you how loaded your GPU is.
Sorry, I conflated the Ubuntu Nvidia X-Server app with the Windows one.
----edit----
So will GPU-Z.
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Tom M wrote: Many Nvidia GPUs do not seem to run more than one gamma-ray task at a time effectively. [...]
So will the Windows 10 Task Manager. Select the Performance tab and select your Nvidia GPU. There you see four different categories of load. Click the title of one of them (the down arrow) and select Cuda. This will show the compute load (which includes OpenCL as well) of that GPU.
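For reference, similar counters to the ones Task Manager and GPU-Z display are exposed programmatically through Nvidia's NVML library, so you can log them from a script. A minimal polling sketch, assuming the third-party pynvml bindings are installed (pip install nvidia-ml-py) and an Nvidia driver is present:

import time
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(count)]
    for _ in range(10):  # poll once a second for ten seconds
        for i, handle in enumerate(handles):
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            print(f"GPU {i}: compute {util.gpu}%  memory bus {util.memory}%")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()

The util.gpu value roughly corresponds to the compute figure shown under the Cuda category in Task Manager; if it sits well below 100% while a single task runs, the card probably has headroom for a second one.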
I hate it when I have a brain fart!
Yup.
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
How can I do staggered starts for GPU tasks? All tasks are synced right now. Thanks in advance.
If you have tasks in the queue already: suspend a task somewhere in the middle while it's running, and another task should then start. It's not easy to say exactly when the best moment to suspend a running task would be. Just try pausing them and you will find good intervals.
Or you could set "no new tasks", let your queue run out, open up computing preferences and set 0 for "store at least X days of work" and "store up to an additional X days of work". Then set "allow new tasks" again. Right after the first task has begun running, set "no new tasks". Then wait, and at some point set "allow new tasks" again. BOINC should download another task and start it.
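For what it's worth, the suspend/resume part of this can be scripted with boinccmd, the command-line tool that ships with the BOINC client. A rough sketch; the project URL and task name below are placeholders you would replace with real values taken from boinccmd --get_tasks:

import subprocess
import time

PROJECT_URL = "https://einsteinathome.org/"  # placeholder: use your project's exact URL
TASK_NAME = "LATeah_example_task"            # placeholder: copy a real name from --get_tasks
OFFSET_SECONDS = 120                         # how far apart you want the two starts

# Pause the chosen task so the other one runs alone for a while...
subprocess.run(["boinccmd", "--task", PROJECT_URL, TASK_NAME, "suspend"], check=True)
time.sleep(OFFSET_SECONDS)
# ...then let it continue from its checkpoint, now offset from its neighbour.
subprocess.run(["boinccmd", "--task", PROJECT_URL, TASK_NAME, "resume"], check=True)

Run it while both tasks are active to push one task's heavy phase out of step with the other's.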
That sounds nice.
Will that work forever without intervention?
I'd like to run 2 or N tasks at a time and have them communicate, or at least understand that only one of the tasks assigned to a particular GPU should do its GPU work at a time: each runs its GPU job alone, then flips a flag (a mutex) to say "now it is another task's turn to run on this GPU".
With a CUDA application, not OpenCL, you could use Nvidia-provided tools (the CUDA Multi-Process Service, MPS) to let many processes share a GPU at the highest throughput without changes to the code. And with the new generation of high-end cards you can virtualise a GPU (Multi-Instance GPU, MIG) and decide how big a proportion a given process gets access to.
I'm sure AMD has similar tools too.
--
Petri33.
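Petri33's flag idea can be mocked up at the process level, without touching the science apps themselves, by wrapping each GPU-heavy phase in an OS-level lock. A toy sketch, assuming the third-party filelock package (pip install filelock); gpu_compute_chunk is a hypothetical stand-in for a task's real GPU work:

import os
import tempfile
from filelock import FileLock

GPU_ID = 0
# One lock file per GPU, shared by every cooperating process on the host.
gpu_lock = FileLock(os.path.join(tempfile.gettempdir(), f"gpu{GPU_ID}.lock"))

def gpu_compute_chunk():
    """Stand-in for one slice of a task's GPU work."""
    pass

for _ in range(100):
    with gpu_lock:            # blocks until the other task releases the GPU
        gpu_compute_chunk()   # the GPU phase runs alone, as described above
    # CPU-side bookkeeping can go here while another process holds the lock

MPS reaches a similar end without explicit locks, by letting kernels from several processes share one GPU context, which is why it needs no changes to the code.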
Harri Liljeroos wrote: So will Windows 10 Task Manager. [...]
Thanks a lot! Didn't know there's more!
Nvidia driver 496.49 trashed my RTX 3080 Ti's CUDA performance. I rolled back to driver 496.13 and everything is performing at peak again.
I did the upgrade to 496.49 twice and had the same result both times. It's not CUDA-friendly for E@H.
There are only 10 kinds of people in the world: those who understand binary and those who don't!