For the past few weeks I've mainly been crunching BRP6 GPU tasks without any issues. Today, however, I seem to be racking up a whole load of BRP4G GPU tasks, which cause stuttering on my machine whenever I use the GPU for anything else, such as playing video (local files, YouTube, etc.).
Is there something different about the BRP4G tasks that makes them use too much GPU power, starving other applications?
Copyright © 2024 Einstein@Home. All rights reserved.
BRP4G GPU usage too high?
BRP4G tasks aren't always available, and when they are, they don't last long. I've seen others mention that they can cause effects like the ones you're reporting. I don't know for sure how they really perform these days because I've used my preferences to exclude them; I only crunch BRP6. I also run BOINC on machines that do nothing but crunch, so I never see slowdowns from anything.
Your computers are hidden, so I can't see any details about them. I would suggest going back to BRP6 by excluding BRP4G in your project preferences: click the link on your account page, then click the 'edit preferences' link on the page that opens and de-select BRP4G.
Cheers,
Gary.
FYI, I'm using an Nvidia card with the latest drivers at the time of writing. I'm unfamiliar with GPU tasks, but could this be a process priority issue?
BRP4G requires more data traffic between the CPU, RAM and GPU. Depending on your computer, all that I/O may hit a bottleneck somewhere, and that can cause the stuttering.
One thing you could try is setting the browser or video player not to use hardware acceleration for video (look in its settings). Again depending on the computer and GPU, video playback may or may not work better.
While the original post posits that the conflict with other uses happens at the GPU itself, my personal experience suggests that priority adjustments among the CPU support applications can help with this sort of problem. The right description of the problem is not that the application uses "too much" of either the GPU or the CPU, but rather that switching to the preferred competing use is not quick enough.
To be specific, in the past I have had unacceptable interactive usage impairment both of DVD playback (probably using GOM player, though I'm not sure) and of Adobe Photoshop work with pictures. These were sufficiently bad that for a time I made a practice of stopping BOINC GPU work when doing these things.
While BOINC does support suspending GPU work when specific applications are active, I don't currently use that; instead I have raised the priority of the specific suffering non-BOINC applications. A good way to investigate is to try this out using, for example, the excellent free application Process Explorer.
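To illustrate the underlying mechanism: on POSIX systems, priority is the process "nice" level, and a process can always lower its own priority without special rights. This is a minimal stdlib sketch of that idea (the Windows equivalent the posters use is Process Explorer's priority menu, applied to the suffering application rather than to BOINC):

```python
import os

# POSIX only: os.nice(0) returns the current niceness without changing it.
before = os.nice(0)

# Lower this process's priority by 5 nice levels -- the direction an
# unprivileged user can always move. Raising priority back up normally
# needs elevated rights, which is why on Windows one raises the
# suffering foreground application with Process Explorer instead of
# renicing the BOINC science apps even lower.
after = os.nice(5)
```

BOINC itself already runs its science applications at a low priority; as the post above argues, the stutter is less about the steady-state priorities than about how quickly the scheduler hands the GPU's CPU-side support work back to the foreground application.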
For a more durable system configuration adjustment I use Process Lasso, and have in the past adjusted all of CPU priority, CPU affinity, and I/O priority, in the pursuit both of higher total BOINC output and of peaceful coexistence between my BOINC work and my personal usage.
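Of the three knobs mentioned (CPU priority, CPU affinity, I/O priority), the affinity one can be reproduced on Linux with the standard library alone. This is a sketch under assumptions: the call is Linux-specific, and the "leave one CPU free for interactive use" policy is just an example, not what Process Lasso does by default:

```python
import os

# Linux only: the set of CPU indices this process may be scheduled on.
all_cpus = os.sched_getaffinity(0)

# Example policy: give up one CPU, leaving it free for interactive
# work. Applied to a BOINC client's PID instead of 0 (our own process),
# this mirrors a Process Lasso affinity rule.
if len(all_cpus) > 1:
    os.sched_setaffinity(0, set(sorted(all_cpus)[:-1]))

remaining = os.sched_getaffinity(0)
```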
As others have suggested, I personally prefer to allow only one specific flavor of GPU work on a system, and at the current time that flavor is the Parkes PMPS work (BRP6-Beta-cuda55). Work is readily available, seems likely to remain so for some time, and runs well on my hosts. I also imagine the work has good science potential.
I think for the original poster, the simplest route to satisfaction might be to disallow BRP4G work, possibly combined with a reduction in the fraction of available CPUs BOINC is allowed to use.
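For reference, the CPU fraction can also be set locally rather than through the web preferences: the BOINC client honours a `global_prefs_override.xml` file in its data directory. The fragment below is a sketch from memory of that override format (verify the element names against your client's documentation); it caps BOINC at 75% of CPUs, i.e. 3 of 4 cores on a quad-core host:

```xml
<!-- global_prefs_override.xml, in the BOINC data directory -->
<global_preferences>
   <max_ncpus_pct>75.0</max_ncpus_pct>       <!-- use at most 3 of 4 cores -->
   <cpu_usage_limit>100.0</cpu_usage_limit>  <!-- no CPU-time throttling -->
</global_preferences>
```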
Actually, I already allocate only 3 of my 4 cores to BOINC, so that's not the issue.
I did, however, take the advice to disable BRP4G, so videos are watchable again. Disabling hardware acceleration for video playback isn't a solution, it's a workaround.
You're probably right about process priority. I can only assume the applications were written by different developers, or that it's a CUDA issue, with one application using a different version of the CUDA API. Maybe someone responsible could compare the two applications.