Keith Myers wrote:
You need to stagger the task start-ups on the GPU so that they don't all converge at the 49% and 99% CPU-offload points when running multiple GPU tasks. Just suspend a task that starts anew on the GPU until the previous one has made progress halfway to one of its CPU compute intervals, then un-suspend the task.
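The staggering rule above can be sketched as a small decision function. This is a minimal sketch, assuming the CPU-offload checkpoints sit at 49% and 99% of task progress as described; the function names are mine for illustration and are not part of any BOINC API:

```python
# Hedged sketch of the staggering rule: hold a newly started GPU task
# suspended until the task already running has progressed at least
# halfway from its last CPU-offload point to the next one.
# The 0.49 / 0.99 checkpoints come from the advice above.

CPU_POINTS = [0.49, 0.99]  # progress fractions where work offloads to the CPU


def halfway_to_next_cpu_point(fraction_done: float) -> bool:
    """True once the running task is at least halfway between its
    previous CPU-offload point and the next one."""
    prev_point = 0.0
    for point in CPU_POINTS:
        if fraction_done < point:
            midpoint = prev_point + (point - prev_point) / 2
            return fraction_done >= midpoint
        prev_point = point
    return True  # past the last CPU point; safe to release the new task


def release_new_task(prev_fraction_done: float) -> bool:
    # Un-suspend the staggered task only once the first task has made
    # enough progress that the two won't hit a CPU interval together.
    return halfway_to_next_cpu_point(prev_fraction_done)
```

In practice the suspend/resume itself would be done by hand in the BOINC Manager, or scripted around a client-control tool; the function above only captures the "halfway" timing rule.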
KLiK wrote:
Thank you. The immediate result can be seen:
mikey wrote:
It sure looks like both tasks have started and are running in sync instead of opposite to each other.
Ian&Steve C. wrote:
You're focusing on the wrong part of the graph. The beginning shows the tasks not staggered, but the end shows the second task starting right when the first task ends, leading to near-constant GPU utilization. You can see a slight dip in temperature, then it comes right back up instead of falling off like in the previous two cycles.
mikey wrote:
Okay, thanks.
KLiK wrote:
It's like @mikey said. But also, there are 2 cards in the system, so both cards work; that is why there are 2 graphs!
(Though my M5000 & P4 were read with only a single temperature, so I changed from the M5000 to a P2000... waiting for Einstein@home tasks to start again to show the real results.)
The main thing is: the previous dips came after around 18~20 min on the GPU and lasted for about ~15 min of CPU time. If I run 2x per GPU, then the 2nd task could almost be over by the time the 1st one finishes its CPU calculation?

non-profit org. Play4Life in Zagreb, Croatia, EU
Keith Myers wrote:
Yes, same principle. But in my experience so far with MeerKAT tasks, they don't respond as well to 2X as the optimized app for GR#1 does. You won't see as much benefit, if any. They also use more VRAM; 2X may be cutting it close with 6 GB.
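For reference, running 2X per GPU in BOINC is normally configured with an app_config.xml in the project's directory. A sketch under assumptions: the app name below is a guess at the MeerKAT (BRP7) app; check client_state.xml for the real app name on your host before using it.

```xml
<!-- Hedged example: run 2 tasks per GPU by telling BOINC each task
     uses half a GPU. The app name is an assumption; verify it in
     client_state.xml. -->
<app_config>
  <app>
    <name>einsteinbinary_BRP7</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Save it in the Einstein@home project folder and use Options → Read config files in the BOINC Manager (or restart the client) for it to take effect.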
KLiK wrote:
Are you sure about that? This is MeerKAT, with only 22~25% load and not much memory used.
Has anybody run 2x WUs on stronger cards, like Teslas, which can crunch more data? What are the experiences?
Keith Myers wrote:
Your image doesn't show much of a load from a Tesla card, and your two Tesla K20Xm's have less than 6GB of VRAM, in Windows 10 with an Intel i7-5820K CPU. It may also matter how your PC and BOINC preferences are set.

Proud member of the Old Farts Association