For the new O3ASBu tasks, the recalc step takes such a short slice of the runtime that I'm not overcommitting nearly as much as I was with O3ASHF, and CPU load isn't really slowing them down since such a large portion of the task is GPU-only.
so for 2x tasks, running MPS at 70%
for 3x tasks, running MPS at 40%
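For reference, assuming those percentages are the CUDA MPS active-thread percentage applied per client process, the aggregate SM share requested across concurrent tasks works out to a slight overcommit in both cases (just arithmetic on the settings above, not anything measured):

```python
# Aggregate SM share requested under MPS, assuming the percentage limit
# applies to each concurrent task (client) independently.
def aggregate_share(tasks: int, pct_per_task: int) -> int:
    """Total SM share requested across all concurrent tasks, in percent."""
    return tasks * pct_per_task

print(aggregate_share(2, 70))  # 2x tasks at 70% -> 140 (slight overcommit)
print(aggregate_share(3, 40))  # 3x tasks at 40% -> 120 (slight overcommit)
```

A bit of overcommit keeps the GPU busy while any one task is stalled in a non-GPU phase.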
Using the 1.08/1.15 CPU app, my recalc step only takes about 1-2 minutes on my EPYC Rome systems, which lets CPU load stay around 75% from other projects. Adding only 1-2 minutes to a task that runs for ~2700 s is hardly any impact, less than 5% of the overall runtime for me, compared with O3ASHF tasks where the CPU recalc would take at least 50% of the runtime.
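Putting rough numbers on that overhead claim (a quick sketch, not output from the app itself):

```python
# Overhead of a 1-2 minute CPU recalc step against a ~2700 s total runtime.
task_runtime_s = 2700
for recalc_min in (1, 2):
    overhead = recalc_min * 60 / task_runtime_s
    print(f"{recalc_min} min recalc -> {overhead:.1%} of total runtime")
```

Even the 2-minute case stays under the 5% mark.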
I'm kind of surprised that it's taking 5 minutes on that Threadripper system, even if it's only a Zen1+ chip. Are you running a bunch of CPU work on it too?
Thanks for the MPS info.
Yep, those systems are running full-time CPU work as well. That gives me an idea for a test to see how bad the bottleneck actually is: let me see what happens when I stop CPU work, close BOINC to clear RAM, and then reopen with only GPU work. More out of curiosity than anything else.
EDIT/UPDATE: It took ~80 seconds when no other CPU work was active and RAM was cleared.
It took ~107 seconds when 50% of the cores were doing CPU work.
It took ~132 seconds when 75% of the cores were doing CPU work.
I understand the clock slows as more cores are loaded, but that's still a pretty severe bottleneck at high/full CPU usage. Now to find the "magic" CPU usage percentage that maximizes both core utilization and speed...
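For what it's worth, those three timings expressed as slowdown relative to the idle baseline (just arithmetic on the numbers above):

```python
# Slowdown of the ~80 s recalc step as CPU load from other work increases.
baseline_s = 80
measurements = {"no CPU work": 80, "50% cores busy": 107, "75% cores busy": 132}
for label, t in measurements.items():
    print(f"{label}: {t} s ({t / baseline_s:.2f}x baseline)")
```

So roughly a third slower at 50% load and about two-thirds slower at 75% load.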
75% is pretty good :)

but again, it's not a huge loss anyway with the new tasks. You're only losing a tiny fraction of overall speed when the CPU recalc portion extends by a couple of minutes and the overall runtime is so long anyway.
For sure. I was also thinking in the context of all of the other work on the CPU: a relatively insignificant improvement for this work, but it could be a significant improvement for strictly CPU work. Always something to explore!