A question about GPU usage. I have an Nvidia GTX 770 (1536 cores) with 4 GB and FC22.
If I don't have an [app1]/app_config.xml and the task uses my GPU, my BOINC Manager reports it running (1 CPU + 1 GPU), and at the same time I see app2, a different application, running (0.05 CPU + 0.25 GPU), matching its app_config.xml file. Google hasn't been forthcoming with answers to my questions: do the GPU apps timeslice, or does each app get 384 cores, or what?
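For reference, the kind of app_config.xml that produces those 0.05 CPU + 0.25 GPU numbers looks something like the sketch below (the app name is just a placeholder, not a real application name):

   <app_config>
      <app>
         <name>example_gpu_app</name>      <!-- placeholder: substitute the project's actual app name -->
         <gpu_versions>
            <gpu_usage>0.25</gpu_usage>    <!-- BOINC will schedule up to 4 such tasks per GPU -->
            <cpu_usage>0.05</cpu_usage>    <!-- budgets 0.05 of a CPU core per task -->
         </gpu_versions>
      </app>
   </app_config>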
Thanks
Using GPU - questions
In my experience
Two apps using the gpu will both be in the gpu memory at the same time but
the gpu cores will timeslice between the apps.
The timeslice between two apps of the same type (Arecibo vs. Arecibo) will usually be split equally, but between different apps it will not be. Parkes usually gets more of the timeslice than Arecibo, or at least it used to be that way.
I am sure Gary can elaborate
My impression, which I hope
My impression, which I hope will be corrected by someone knowing more about these things, is that only one application is truly active on the GPU at a given time. I think it runs until it stalls for want of some service provided through a CPU of the system (sometimes actually meaningful communication, but often more like I/O). Then another task can run on the GPU if it is not also stalled for CPU service. Modern GPUs have extra hardware aimed at making the task switch at such boundaries much, much faster than it used to be, and in particular much of the required context is stored on-chip in the useful cases.
If there are two like tasks, they will tend to get roughly equal time unless their CPU support tasks happen to get "stuck" to cores with substantially different current usage. Windows 7 and 10 seem far less inclined than XP to shuffle a task on to the next core, so I've seen lots of effects that I believe reflect differences in the latency of CPU service among the CPU tasks handling, say, three tasks of the same type. People commonly misconstrue the resulting inequality of elapsed time as proof of unequal work content between work units, which, while true for some applications at some projects, is often not true at all.
If the types are dissimilar, then the one that can run for a long time between stalls will get a far higher fraction of actual GPU operating time than one running "simultaneously" that needs help far more frequently. Folks here often accuse the one that gets the larger share of time of "hogging" the GPU. The truth is, every GPU task "wants it all", but just accepts what it gets.
But to get back to the originally posted question, I am pretty sure there is no such thing as BOINC allocating some specific subset of GPU resource to a particular task.
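Put another way, the gpu_usage value in an app_config.xml is only a scheduling count, not a hardware partition. A sketch, again with a placeholder app name:

   <app_config>
      <app>
         <name>example_gpu_app</name>    <!-- placeholder name -->
         <gpu_versions>
            <gpu_usage>0.5</gpu_usage>   <!-- tells BOINC to start 2 tasks per GPU; it does NOT hand each task half the 1536 cores -->
            <cpu_usage>1.0</cpu_usage>   <!-- counts one full CPU core per task in BOINC's scheduling arithmetic -->
         </gpu_versions>
      </app>
   </app_config>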
What started my question is
What started my question is that I was running FC19 and the BOINC that was installed about 2 years ago. Projects: SETI, Einstein, WCG, Rosetta on an AMD 1090T 6-core 3.8 GHz machine with a GTS 520 gpu. I was getting right at 30,000 credits per day. So, it ain't broke, let's fix it: upgrade to FC22 and a new BOINC, log in, "welcome back, here's your SETI, Einstein, etc." ... and credits of 5,000-7,000 per day - NOW it's broke. I don't remember ever telling the FC19 setup to do any GPU work, and I never had any app_config.xml files. Anyhow, I can find all sorts of posts about "create this file", "put this line in it", etc., but what I can't find is whether it's better to run 1@100%, or 10@10%, or 2@50%, or... I've played with cpu at 0.90 and cpu at 0.05 - doesn't seem to make any difference. I did get from you that GPU things "time slice", thanks.
Did the people who wrote the GPU code do any research as to the best configuration? Or did they just say "hey, the code works" - high five? You have to understand how my mind works - here's 10k of code, takes 2 minutes to run the job - Hmmmm, if I cut, trim, convert from xyz to C ... hmmm, now 800 lines and runs in 12 seconds ... acceptable ... but a bit sloppy - can be cleaned up ... if I remove the manual search, do 96 parallel searches, add a web frontend ... [a $500 bonus? Thanks, boss] - I digress.
Anyhow, 1x100=100, 2x50=100, 5x20=100, 100x1=100. Somehow I don't think things work quite that way. I've told BOINC to swap tasks every 6000 minutes - do the job and move on, don't piecemeal. But again, lots of "you cans" but not a lot of "you shoulds".
Thanks
PS - I moved my GTS 520 to a second box and upgraded to a GTX 770 4g -
liderbug wrote:What started
You misunderstand the 0.90 and 0.50 settings: they are NOT a "use exactly this much" setting, they are a "use at most this much" setting. So if your GPU task only needs 0.01 of a CPU, you can put 0.90, or 0.50, in there all you like, but it won't ever actually use that much; just that much is AVAILABLE for it to use if it needs it.
Also, I don't understand the 2x50=100 stuff: are you running 2 tasks from different projects, or 2 tasks of the same kind here? If the latter, it runs one task until your setting tells it to stop and run a different task, in your case every 6000 minutes (or is that seconds?). People usually try to tune that so a task will actually finish before swapping to the next one; that way it's more efficient, and tasks run straight through without stopping partway to start and finish another task.
If you are talking different projects, then there's a whole bunch of other stuff involved in the percentage settings for each project, and 50% is NOT 50% like most of us think of 50%!! It has to do with daily RAC, not percentage of GPU usage directly. So if project A gives out 3 times as many credits as project B, then project B will run 1/3 of the time that project A does. Screwy, imho, but it's the way the BOINC programmers made it work. That is NOT a project by project setting.
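If you want to set that task-switch interval outside the Manager's GUI, it lives in the global preferences; assuming the standard global_prefs_override.xml format, a minimal sketch would be:

   <global_preferences>
      <cpu_scheduling_period_minutes>60</cpu_scheduling_period_minutes>   <!-- "switch between tasks every 60 minutes"; 60 is just an example value -->
   </global_preferences>

Put it in the BOINC data directory and use the Manager's "Read local prefs file" option to pick it up.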
mikey wrote:liderbug
Actually, the tasks will use whatever resources the OS will give them once started. The settings of 0.5 CPU and 0.5 GPU only tell BOINC how many tasks to start. Those settings have no direct control over how fast or slow the tasks run; indirectly they can have a huge impact if one sets them "wrong".
Usually Nvidia cards can run up to 3, or maybe 4, GPU tasks at once and still be effective; AMD cards might be able to run 5. Nvidia cards usually do not need a free CPU core to support the GPU to be effective, while AMD cards and Intel GPUs require it. The only way to know what's best on a particular machine is to do some testing.
Greetings Liderbug, I
Greetings Liderbug,
I agree with Holmis about doing some testing.
I will share my results....
I have a Radeon/ATI 7750 that I tested running 1 vs. 2 GPU tasks concurrently.
I found that running two tasks concurrently took slightly longer than twice as long as running one task, so there was no throughput gain for me.
I compared elapsed wall-clock time, because I did not want to measure how much time the CPU used to support the GPU.
I think it should be easy for you to compare with your NVidia as well.
Have Fun!!
Jay
Liderbug - All the posts
Liderbug -
All the posts that are here are good info for you.
1) Run one task at a time and see how long each takes (do at least 10 tasks).
2) Then run two tasks at a time and see how long each takes. If each task still takes the same time as in 1), you still have "excess capacity". If each takes twice as long or longer, one task is your limit. In between, maybe two tasks would make full utilization of your card.
3) Depending on the results in 2), maybe try 3 tasks at a time (example settings for each step are sketched after this list).
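In app_config.xml terms, only the gpu_usage line inside the <gpu_versions> block changes between steps (the values below are the obvious ones for the test, not anything I've benchmarked):

   <gpu_usage>1.0</gpu_usage>    <!-- step 1: one task at a time -->
   <gpu_usage>0.5</gpu_usage>    <!-- step 2: two tasks at a time -->
   <gpu_usage>0.33</gpu_usage>   <!-- step 3: three tasks at a time -->

After each change, the Manager's "Read config files" option makes BOINC pick up the new value without a restart.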
If you are running Windows, there is a program called GPU-Z, which will show you GPU memory usage, and GPU processor usage, plus other stuff. You can watch the usage of your card (along with temp).
I believe results depend on your card and the rest of your computer (processor speed, memory speed, etc.). Only you can determine what is best for your computer.