X percent CPU + 1 X GPU ratio changing

Xandro BA
Joined: 23 Jul 13
Posts: 49
Credit: 4,522,731
RAC: 0
Topic 197416

Currently, by default I get 0.2 CPU + 1 NVIDIA GPU, and on the Intel 0.5 CPU + 1 intel_gpu. Can that CPU ratio be changed with an XML file? Is there any reason why I should or should not change those ratios?

archae86
Joined: 6 Dec 05
Posts: 3,156
Credit: 7,179,384,931
RAC: 771,882

X percent CPU + 1 X GPU ratio changing

What is it that you hope to accomplish with a change? Knowing that, you could be given better advice.

Xandro BA
Joined: 23 Jul 13
Posts: 49
Credit: 4,522,731
RAC: 0

Well, either to lower the CPU

Well, either to lower the CPU share on the Intel task so there's a bit more left over for CPU tasks, or to give a bit more to the NVIDIA task, since that one seems to use more CPU time per task than the Intel task does.

Holmis
Joined: 4 Jan 05
Posts: 1,118
Credit: 1,055,935,564
RAC: 0

These settings do not

These settings do not change how much of the CPU the task needs or gets; they are there to give BOINC information on how many tasks to start and run at any given time.
It's the OS, the hardware driver, and the priority settings of the application that decide how much CPU the task will actually use at any given time.

The only reason to change these settings is to "reserve" a CPU core for a GPU task by telling BOINC to run one less CPU task.

Lastly, yes, the settings can be changed via an app_config.xml file. Go to http://boinc.berkeley.edu/wiki/Client_configuration and scroll to the bottom of the page for instructions. They are not project-specific or very clear, but that's the documentation that's available.
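
As a minimal sketch of such a file (the app name below is a placeholder; copy the exact <name> from the matching <app> entry in client_state.xml, and treat the numbers as examples only):

<app_config>
    <app>
        <name>einsteinbinary_BRP5</name>  <!-- placeholder: use the real app name from client_state.xml -->
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>    <!-- each task is budgeted as one full GPU -->
            <cpu_usage>0.2</cpu_usage>    <!-- budgeted CPU share; 1.0 would reserve a whole core -->
        </gpu_versions>
    </app>
</app_config>

Save it as app_config.xml in that project's folder under the BOINC data directory (here that would be projects/einstein.phys.uwm.edu) and, if I remember right, use Advanced -> Read config files in BOINC Manager, or restart BOINC, for it to be picked up.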

Xandro BA
Joined: 23 Jul 13
Posts: 49
Credit: 4,522,731
RAC: 0

Sure, my idea was that it

Sure, my idea was that it would help speed up a running GPU task if I allowed more CPU use by changing the CPU factor. I understand now that it won't help much. Thanks for the answer(s).

Jord
Joined: 26 Jan 05
Posts: 2,952
Credit: 5,878,802
RAC: 6,460

RE: These settings do not

Quote:
These settings do not change how much of the CPU the task needs or gets; they are there to give BOINC information on how many tasks to start and run at any given time.


The CPU value in X.XX CPU + X.XX GPU is there to show the maximum sustained load (*) that the CPU will carry to supply the GPU with data. It's not there to show how many tasks you can run simultaneously on the GPU; if it were, the theoretical maximum for Einstein would be 2 tasks (0.5 CPU each) and for optimized SETI 25 tasks (0.04 CPU each). (Which I suppose is what you meant by "info on how many tasks to start".)

On most projects the CPU does little more than fetch a bit of the data for the task that's running, translate that bit into kernels the GPU understands, and transfer the data to GPU memory. When the GPU reports it's done, the CPU transfers the results back from GPU memory, translates them into the original data format, and writes them to disk before translating the next bit of data and transferring that to GPU memory.

Now, that's greatly simplified. In reality this happens very quickly, possibly many times a second.
The amount of work the CPU does is what the X.XX CPU value describes, and it is set by the project's science application.

It's also possible that a lot of the calculations are still done on the CPU, as is the case here at Einstein, mostly because they're not well suited to run on the GPU. That's why the CPU value is higher in that case.

Remember that these values were already available at the very beginning of GPU computing, when users were running just one task per GPU. It's also mostly the amount of memory on the video card that dictates how many tasks you can load onto it simultaneously. The GPU simply switches all of its shader/stream processors between parts of the various tasks; it doesn't divide its shader/stream processors by the number of tasks you put on it and run them that way. As far as I know, it's not even possible yet to tell the GPU to use only processors 1 to 40 for this, 41 to 90 for that, and 91 to 640 for something else.

(*) By maximum sustained load, I do not mean that when you check in e.g. Windows Task Manager, the process constantly takes exactly 0.04 CPUs' worth of cycles. You'll see peaks of up to 100%, and you'll see (long) periods of seemingly no activity at all. But in the long run, the load on the CPU is approximately the value that was asked for. And then most developers put in a higher value than is actually needed.

Holmis
Joined: 4 Jan 05
Posts: 1,118
Credit: 1,055,935,564
RAC: 0

RE: The CPU value in X.XX

Quote:
The CPU value in X.XX CPU + X.XX GPU is there to show the maximum sustained load (*) that the CPU will carry to supply the GPU with data. It's not there to show how many tasks you can run simultaneously on the GPU; if it were, the theoretical maximum for Einstein would be 2 tasks (0.5 CPU each) and for optimized SETI 25 tasks (0.04 CPU each). (Which I suppose is what you meant by "info on how many tasks to start".)


First off, thank you for a much more detailed explanation than I could ever have given; my understanding is more or less the same as what you just described.

What I tried to say was that BOINC has to take both numbers into consideration when deciding how many tasks to start, comparing them with the actual number of available resources (the number of GPUs and CPU cores available).

The main point was to explain that the numbers one sees in BOINC do not influence the actual load an app puts on either the CPU or the GPU at any given time; as I understand it, they are there to give BOINC information on how best to use the available resources.
So for a value of 0.5 CPU + 1 GPU it's not certain that the task will use 50% of a CPU, not even as a maximum sustained load, and if one uses an app_config.xml to change the CPU value, the app will still load the CPU just as much as before the change, all other things being equal.
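
A worked example of that bookkeeping, as I understand it, with illustrative numbers: on a 4-core host, one GPU task budgeted at 0.5 CPU + 1 GPU leaves 4 - 0.5 = 3.5 cores, so BOINC starts 3 CPU tasks (a 4th would push the total to 4.5 cores). Changing the value to 1.0 CPU still allows 3 CPU tasks (3 + 1 = 4), but with two such GPU tasks running it would drop to 2 CPU tasks (2 + 2 = 4). In every case the actual load the apps put on the cores stays the same; only the count of started tasks changes.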

I never intended it to sound like one can use the numbers to decide how many tasks to run simultaneously; for that, one needs to experiment and use monitoring programs to check the load on both the GPU and the CPU, and also to monitor memory usage.

But please do correct me if I'm wrong!

Jord
Joined: 26 Jan 05
Posts: 2,952
Credit: 5,878,802
RAC: 6,460

RE: What I tried to say was

Quote:
What I tried to say was that BOINC has to take both numbers into consideration when deciding how many tasks to start, comparing them with the actual number of available resources (the number of GPUs and CPU cores available).


{Thinking} Or not. For if the user tells BOINC to use nCPUs - 1, the GPU load will land on the one core/CPU that was freed this way.

So if you tell BOINC to use 3 of the 4 cores of a quad-core CPU, and we followed your "telling BOINC how many tasks to start based on the actual number of available resources", it would always use 3 of the 4 cores and leave the 4th core free for the OS, not for the GPU. The GPU would then be fed by any of the 3 cores we tell BOINC to use.
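
As an aside, that "use 3 of 4 cores" knob is the computing preference "On multiprocessors, use at most X% of the processors". In a local global_prefs_override.xml that would, if I have the tag right, look something like:

<global_preferences>
    <max_ncpus_pct>75.0</max_ncpus_pct>  <!-- 75% of a quad core = 3 cores -->
</global_preferences>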

P.S. I'm not telling you you're wrong, just wondering whether it may work differently.

Quote:
So for a value of 0.5 CPU + 1 GPU it's not certain that the task will use 50% of a CPU, not even as a maximum sustained load, and if one uses an app_config.xml to change the CPU value, the app will still load the CPU just as much as before the change, all other things being equal.


Yes, that's correct. It's also what I meant with my loose note that "most developers put in a higher value than is actually needed". It may also be the maximum peak load. In any case, it's not a number that will be visible anywhere, unless you track the complete load of all resources for the duration of the task.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2,143
Credit: 2,924,667,481
RAC: 875,246

RE: RE: What I tried to

Quote:
Quote:
What I tried to say was that BOINC has to take both numbers into consideration when deciding how many tasks to start, comparing them with the actual number of available resources (the number of GPUs and CPU cores available).

{Thinking} Or not. For if the user tells BOINC to use nCPUs - 1, the GPU load will land on the one core/CPU that was freed this way.

So if you tell BOINC to use 3 of the 4 cores of a quad-core CPU, and we followed your "telling BOINC how many tasks to start based on the actual number of available resources", it would always use 3 of the 4 cores and leave the 4th core free for the OS, not for the GPU. The GPU would then be fed by any of the 3 cores we tell BOINC to use.


I don't think that's right. I don't think that BOINC would, should, or even could control the hardware in that low-level way. When we say "use three cores", the most we can actually do is "launch three copies of a program which does most of its work in a single worker thread". From that point on, it's the operating system which decides which bit of real estate the code actually executes on. It's nowhere near as crude as "one program, one core" - we would call that 'processor affinity', and BOINC doesn't do it out of the box.

Jord
Joined: 26 Jan 05
Posts: 2,952
Credit: 5,878,802
RAC: 6,460

You wrote:I don't think that

You wrote:
I don't think that BOINC would, should, or even could control the hardware in that low-level way.

But I didn't say that.

Me wrote:
So if you tell BOINC to use 3 of the 4 cores of a quad-core CPU, and we followed your "telling BOINC how many tasks to start based on the actual number of available resources", it would always use 3 of the 4 cores and leave the 4th core free for the OS, not for the GPU. The GPU would then be fed by any of the 3 cores we tell BOINC to use.


I think the low-level, or CPU-affinity, way would be that we tell BOINC which CPU core to leave free so it can be used by the GPU, the OS, or any other program. We don't do that. I have always wondered why, when we tell BOINC to use one less core for CPU work and we have a GPU that we use for calculations, this free CPU core automatically gets used for the GPU.

One could reasonably expect that when we tell BOINC to use one less CPU core, this free core (whichever of the 4 on a quad core, probably rotating among them) is then not used by any part of BOINC at all. Of course, then we could discuss whether science applications running under BOINC can be described as parts of BOINC or not. ;-)

Holmis
Joined: 4 Jan 05
Posts: 1,118
Credit: 1,055,935,564
RAC: 0

RE: I have always wondered

Quote:
I have always wondered why, when we tell BOINC to use one less core for CPU work and we have a GPU that we use for calculations, this free CPU core automatically gets used for the GPU.


Clever OS!?

Quote:
So if you tell BOINC to use 3 of the 4 cores of a quad-core CPU, and we followed your "telling BOINC how many tasks to start based on the actual number of available resources", it would always use 3 of the 4 cores and leave the 4th core free for the OS, not for the GPU. The GPU would then be fed by any of the 3 cores we tell BOINC to use.


That's my experience. You would end up with 2 CPU tasks and 1 GPU task running if the GPU task is "set" to 1 CPU + 1 GPU. What the actual load distribution on the cores ends up as is another story.
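
In numbers, as I read the budget: BOINC is allowed 3 of the 4 cores, the GPU task is budgeted at 1.0 CPU, so 3 - 1 = 2 cores remain for CPU tasks; hence 2 CPU tasks + 1 GPU task.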
