While doing some checking last week I had two .5 x .25 units running. I later realized it was running 2 GPU units on the one GPU and it was at 95%. Since then it has been running 2 CPU cores and 4 GPU units at 98-99%. I have an AMD 7850. I am now wondering: is this common? And am I bottlenecked by the GPU and wasting a CPU? I was wondering if .5 x .5 would be better. Right now I have 3 BRP3 and one BRP5, most bouncing between 2-5% CPU usage.
# of gpu units per gpu
Do you mean that you were using settings (either through the website or through the app_config.xml file) that set the resources allocated for each GPU task to be 0.5 CPUs plus 0.25 GPUs? If so, you should have seen 4 concurrent GPU tasks and not 2.
I'm sorry, I don't understand. Did you have 2 GPU tasks running concurrently (your first statement) or did you have 4? The second part of your statement makes sense. Your quad core host should be running 2 CPU tasks (2 free cores) and 4 GPU tasks if your settings for cpu_usage is 0.5 and gpu_usage is 0.25.
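For anyone wanting to try these settings, here is a sketch of the corresponding app_config.xml, placed in the Einstein@Home project directory. The app <name> value shown is an assumption - it must match the application name BOINC reports in your client_state.xml:

```xml
<!-- app_config.xml sketch: reserve 0.5 CPUs + 0.25 GPUs per GPU task. -->
<!-- With one GPU this allows 4 concurrent GPU tasks, which together   -->
<!-- reserve 4 x 0.5 = 2 CPU cores, leaving 2 cores free on a quad     -->
<!-- core for CPU tasks.                                               -->
<app_config>
  <app>
    <name>einsteinbinary_BRP6</name>
    <gpu_versions>
      <gpu_usage>0.25</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```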
I have a lot of HD7850s and these are the settings I use on most of them. When the RAC plateaus, it gets to between 90K and 100K. The thing I like about them is that they are quite reasonable on power.
Perhaps, no and no (3 questions, 3 answers) :-). I don't know how common this setting is but I imagine those with 7850s who have experimented, may very well have ended up with something like this if they are trying to maximise RAC. These settings seem to be optimal.
I have never mixed GPU science runs on the one GPU so I don't really know if there is any interference between the two types. Others may comment on that. If you go to 0.5 CPUs plus 0.5 GPUs as settings, your host will run 3 CPU tasks and 2 GPU tasks with 1 free core. You will see a drop in RAC. If you changed the settings to try to run 3 CPU tasks and 4 GPU tasks (you could do this) I believe you may find that the extra CPU task causes quite an increase in the time to crunch a GPU task, so much so that it wouldn't be worth it. However, the only way to know for sure is to try the experiment.
I actually have a few Haswell Pentium dual core systems where (through app_config.xml) I set cpu_usage and gpu_usage both to be 0.25. This gives 1 CPU task, 4 GPU tasks and 1 free core. The Haswell cores are good enough for this to work with very little increase at all in the GPU crunch time.
Cheers,
Gary.
normally all I ever see is .
Normally all I ever see is .5 x .25 running 4 units. But the computer one day was running one core .5 x .25 and the GPU was at 95%. I did not have time to check it. So does one core running .5 x .25 keep the GPU maxed out?
I really don't have a clue
I really don't have a clue what you are talking about. I'm guessing that
... all I ever see is . 5 x .25 ...
means that you were looking at the tasks tab of BOINC Manager (Advanced view) and beside each GPU task with an incrementing progress % you saw [pre] Running (0.5 CPUs + 0.25 ATI GPUs) [/pre]
Is this correct? Please realise that the bit in brackets is just an estimate of what BOINC thinks each GPU task will need in order to crunch efficiently. It is NOT necessarily what is actually being used. If there was a time when those values were showing but only 1 GPU task was running, that task might have had access to quite different resources - the whole GPU, for example - but it would also have been starved of CPU resources if the reason for the single GPU task was that 4 high priority CPU tasks were also running.
You also said
... running 4 units ...
Units of what? GPU tasks? CPU tasks? Or perhaps both?

Then you said
... the computer one day was running one core .5x .25 ...
Are you talking about a CPU task running on a CPU core, or are you saying that for some reason there was only one GPU task running?

Please go back and read what I posted previously. Please take the time to state more clearly what you are seeing and the information you are seeking. If you don't understand what was written, please point out what it was that you don't understand. Can you also please answer the following:-
* The precise number of tasks (both CPU and GPU) when things are 'normal'?
* Ditto for that time (the computer one day ...) when you were obviously concerned about something.
* Did you at any point have CPU tasks running in high priority mode?
* What exactly do you mean by "... running one core .5x .25 ..." If you mean there was only one GPU task running, can you advise why that might have been so? Were there no other GPU tasks available to start or resume? Were GPU tasks suspended? Were there CPU tasks running in high priority mode that were preventing more GPU tasks from running? There are lots of possibilities and unless you actually tell us more information, you're only going to get wild guesses.
Cheers,
Gary.
when running 2 cpu cores
When running 2 CPU cores with .5 CPU x .25 GPU, the GPU is a bottleneck. The scheduler is running a single core .5 x .25 and the AMD 7850 runs over 90% - roughly, it can bounce from 80 to 96 - running 2 BRP6. So for my system it appears I need to allocate only one CPU core and use the extra core for another project. Does anyone want to share an xml file for running one CPU core? Thanks for everyone's input.
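For reference, an app_config.xml along the lines being asked about might look like the sketch below. The app <name> is an assumption - it must match the name BOINC reports for your tasks:

```xml
<!-- app_config.xml sketch: 2 concurrent GPU tasks (gpu_usage 0.5),  -->
<!-- each reserving 0.5 CPUs, so the pair reserves one CPU core in   -->
<!-- total and leaves the remaining cores free for another project.  -->
<app_config>
  <app>
    <name>einsteinbinary_BRP6</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, the "Read config files" option in BOINC Manager (Advanced view) should pick up the change without restarting the client.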
Robby - I can't be sure,
Robby -
I can't be sure, but I think you may be looking at the wrong thing to figure out how to best use the resources you have.
First, let me confess that I'm not entirely certain that I understand what you are asking. "Reserving" .5 of a CPU core isn't the same as "instructing the work unit to use" that much. I know, if you tell the GPU to use .25 GPUs, it will run four GPU work units. If you tell a GPU work unit it "can" use .5 CPU cores, that doesn't mean it will.
I am going to assume that what you are after is the fastest "throughput" of work for your individual machine.
It's unfortunate, but to find that combination you have to look at the times directly. The times are not directly related to the utilization figures reported by GPU-Z, Precision X, etc., so those tools are not much help, really.
The best way I have found to determine the best combination is:
A) Do nothing but GPU work, starting at 2 work units per GPU.
B) Increase the number of GPU work units by one and calculate whether or not you have decreased the per-work-unit average time.
C) If not, back the number of simultaneous work units down to where it was fastest per work unit done.
THEN, add one CPU "core" to the mix and see if there was any increase in your GPU times. If so, don't run any CPU work. If adding one CPU core didn't increase your GPU times, then add another CPU core. Do that until the GPU times rise, then back down the number of CPU cores crunching.
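The arithmetic behind this procedure can be sketched in a few lines. What matters is work units completed per hour, not how many run at once; the task times below are made-up illustrations, not measurements:

```python
# Sketch of the tuning comparison described above. For each
# configuration, throughput = concurrent tasks / average elapsed
# time per task. The times here are hypothetical examples.

def throughput_per_hour(concurrent_tasks, avg_task_minutes):
    """Work units finished per hour for one configuration."""
    return concurrent_tasks * 60.0 / avg_task_minutes

# Example: 2-at-a-time finishing in 40 min each beats 4-at-a-time
# finishing in 90 min each, even though "4 at once" sounds faster.
two_up = throughput_per_hour(2, 40)   # 3.0 tasks/hour
four_up = throughput_per_hour(4, 90)  # ~2.67 tasks/hour
print(two_up, four_up)
```

This is why a configuration that slows each GPU task by more than it gains in concurrency is a net loss.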
With GPU to CPU "credits" being 10-20:1, anything you do to slow a GPU's progress is a bad, bad thing.
So it isn't about the number of units you are running at once. The maximum number you can simultaneously run will, without exception, slow your machine down. The inescapable result is that it is doing less work per hour.
With your hardware, I would be surprised if you would not be best served by running only two GPU work units at one time. That was true on my 7770, which I recently pulled from service, and my R9 270X cards. Will a 7850 do three faster? Maybe. Will it do four faster? I seriously, seriously doubt it.
What it IS doing is making the CPU busier in any given hour AND it is putting more stuff in memory and making the PCIe lanes busier. That gives you fewer resources for CPU work.
My little i5s seem to be able to do no more than 2, and often only one, CPU work unit without interfering at least slightly with the GPUs, but that is going to depend on "what kind of" CPU work and "what kind of" GPU work, as well. It is dangerous to generalize too much, so I won't.
Let me hasten to say that if you want to do a number of CPU work units because you want to participate in a CPU-only program, then that may be your priority and not-so-much "total credits" in which case you really don't care as much how long the GPU work takes.
Please read carefully what
Please read carefully what tbret has written. There's a lot of good information there. Please ask if you don't understand something.
On the assumption that this means you have set the GPU utilization factor to 0.25, this should allow you to have 4 concurrent GPU tasks together with 2 CPU tasks. An inspection of your tasks list shows that you have both BRP4G and BRP6 tasks on board but no CPU tasks. I presume your CPU tasks are coming from another project.
If you look at this thread, you will find a discussion about a slowdown when 4 BRP4G tasks are running simultaneously. Perhaps this same problem is affecting your machine? You might be wise to choose one or the other of BRP4G or BRP6.
I know for certain that when running 4 BRP6 tasks simultaneously (no BRP4G) on quite old quad core machines (Q8400, ~2009) with HD7850 GPUs, it is quite possible to run 2 CPU tasks at the same time without a significant slowdown in GPU performance. I have a dozen machines like this doing just fine. Your 3.16GHz quad core Xeon should be better than my Q8400s, so I think you will certainly be able to run 4xBRP6 GPU tasks. As long as you don't try to run more than 2 CPU tasks, it works very well. If you attempt more than 2 CPU tasks, or if CPU tasks get into high priority mode, that will quickly destroy the efficiency of this configuration.
You should not be concerned by any GPU % usage information produced by third party software. Trust the elapsed crunching times as the best measure of efficient usage of your resources.
Cheers,
Gary.