A virtual GPU ??

BobMALCS
BobMALCS
Joined: 13 Aug 10
Posts: 20
Credit: 54539336
RAC: 0
Topic 197344

Currently I have the following running on my only GPU.

E@H 0.2 CPU + 0.49 GPU
S@H 0.033 CPU + 1 GPU

Can anybody explain why I appear to have 1.49 GPUs? I'm sure I've only got 1 GPU installed. Or have I misunderstood what those numbers mean?

Fortunately it doesn't appear to be causing a problem.

BobM

Mike Hewson
Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6537
Credit: 286450285
RAC: 95231

A virtual GPU ??

That's the language of allocation of resources (CPU and GPU) for the given workunits. It's a bit like having 1.5 full-time employees; we call it EFTs DownUnda, for Effective Full Time: a workload measure which can of course be broken up in many ways across the time of many actual people. In the BOINC context, the BOINC client on a given machine has a handle on what the various tasks (allegedly) require, and that helps its scheduling decisions.
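As a toy illustration only (the fractions come from the first post; the real client logic is more involved), the client simply tallies the fractional requirements the projects declare:

```python
# Hypothetical sketch of a BOINC-style resource tally. The project names
# and fractions are taken from the opening post; nothing here is BOINC's
# actual internal code.

running = [
    {"project": "E@H", "cpu": 0.2,   "gpu": 0.49},
    {"project": "S@H", "cpu": 0.033, "gpu": 1.0},
]

total_cpu = sum(t["cpu"] for t in running)
total_gpu = sum(t["gpu"] for t in running)

print(f"declared CPU share: {total_cpu:.3f}")
print(f"declared GPU share: {total_gpu:.2f}")  # 1.49, as in the original post
```

The totals are declared workload fractions, like EFTs, not a count of physical devices.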

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

BobMALCS
BobMALCS
Joined: 13 Aug 10
Posts: 20
Credit: 54539336
RAC: 0

It still seems rather odd to

It still seems rather odd to me. I have S@H set up to run 2 tasks simultaneously on the GPU for a particular application. Similarly for E@H. Nowhere do I see any statement or implication that the client scheduler can mix and match across projects. If I state or imply that a task requires a whole GPU then I would expect that to be honoured even if the GPU is capable of running multiple tasks.

BobM

mikey
mikey
Joined: 22 Jan 05
Posts: 11944
Credit: 1832506866
RAC: 217104

RE: It still seems rather

Quote:

It still seems rather odd to me. I have S@H set up to run 2 tasks simultaneously on the GPU for a particular application. Similarly for E@H. Nowhere do I see any statement or implication that the client scheduler can mix and match across projects. If I state or imply that a task requires a whole GPU then I would expect that to be honoured even if the GPU is capable of running multiple tasks.

BobM

Boinc has evolved a LONG way from its beginnings and does NOT always do what us non-programmers think it should. Remember, Boinc is designed around using spare resources, not being the primary reason for running a machine - and yes, LOTS of us run it as the primary reason anyway. What we think Boinc is doing when we set this or that setting is often different from what it is actually doing.

An example would be if you set it to only use 50% of your cpu: one would think it would just use 50% of the cpu 100% of the time, but nooo, it uses 100% of the cpu 50% of the time.
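A minimal sketch of that duty-cycle idea (the one-second period is an assumption for illustration, not BOINC's actual timing):

```python
# Toy illustration of duty-cycle CPU throttling: a 50% limit is met by
# running flat out for half of each period, not by running at half speed.
# The one-second period is made up for this sketch.

def duty_cycle(usage_limit: float, period_s: float = 1.0):
    """Return (run, suspend) seconds per period for a usage limit in [0, 1]."""
    run = period_s * usage_limit
    return run, period_s - run

run, suspend = duty_cycle(0.5)
print(f"run {run:.1f}s at 100% CPU, then suspend for {suspend:.1f}s")
```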

Richard Haselgrove
Richard Haselgrove
Joined: 10 Dec 05
Posts: 2140
Credit: 2769901703
RAC: 933617

RE: It still seems rather

Quote:

It still seems rather odd to me. I have S@H set up to run 2 tasks simultaneously on the GPU for a particular application. Similarly for E@H. Nowhere do I see any statement or implication that the client scheduler can mix and match across projects. If I state or imply that a task requires a whole GPU then I would expect that to be honoured even if the GPU is capable of running multiple tasks.

BobM


What you have to remember is that where you (and BOINC) see one GPU, internally you have a large collection of individual compute units or 'shaders' - up to a couple of thousand of them (I don't know exactly how many in your case, because your computer details are hidden - but probably several hundred at least, if you can run GPUGrid).

I may be guilty of over-simplification here, but I think projects like Einstein and SETI break down their GPU tasks into hundreds of thousands of micro-tasks called 'kernels', and ask the GPU to run as many of them as it can - each kernel is put onto a queue, and the GPU itself picks the next one off the head of the queue and gives it to the first available shader.

So, no matter what utilisation factor you ask BOINC to operate at, if there is only one 'task' (in BOINC's terminology) using the GPU, its queue of micro-task kernels will all be allocated off the queue until the GPU is fully occupied. But with a lower utilisation request, two (or more) tasks could be allocating kernels to the shader processing queue: the GPU would fill up more quickly, and the kernels from each of the separate tasks would be allocated more slowly, because there'd be more competition for the global pool of shaders.

Looking at it like that, there's nothing to stop both a SETI task and an Einstein task placing their kernels on the shader queue, and allowing a single GPU to share its shaders between the two tasks. In theory, I think kernels are allocated to shaders 'first come, first served' - but when I try it myself, Einstein seems to win the race to the head of the queue about two-thirds of the time. In other words, the two different types of task both make progress, but not necessarily at equivalent speeds.
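That first-come-first-served sharing can be sketched with a deliberately over-simplified model (the task names come from the thread; the enqueue pattern and everything else are invented for illustration):

```python
# Two tasks interleave their kernels onto one shared FIFO queue, and the
# "GPU" drains it strictly in arrival order, so both tasks make progress -
# but at rates set by how fast each task manages to enqueue.
from collections import deque

queue = deque()
for i in range(3):                    # each task enqueues alternately here
    queue.append(("Einstein", i))
    queue.append(("SETI", i))

order = [task for task, _ in queue]   # the order shaders would serve them
print(order)
```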

The only curious thing is that you say you've configured SETI to run two tasks at once, but BOINC's scheduler (judging by the extract you posted in your first message) is telling you that it's only going to run SETI when it can have exclusive use of the GPU. You didn't say when or how you set that utilisation factor at SETI: if you used an app_config.xml file, it can take some time before the BOINC Manager display catches up and shows the new utilisation factor, although it takes effect immediately. BOINC might be scheduling 0.5 GPUs internally, but the display might be stuck on a previous 1.0 value.
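For reference, a 0.5 utilisation factor could be requested with an app_config.xml along these lines (the application name below is a placeholder; the project's actual app names must be used):

```xml
<app_config>
  <app>
    <name>setiathome_v7</name>       <!-- placeholder; use the project's real app name -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>     <!-- two tasks share one GPU -->
      <cpu_usage>0.2</cpu_usage>     <!-- CPU fraction reserved per task -->
    </gpu_versions>
  </app>
</app_config>
```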

mikey
mikey
Joined: 22 Jan 05
Posts: 11944
Credit: 1832506866
RAC: 217104

RE: In theory, I think

Quote:
In theory, I think kernels are allocated to shaders 'first come, first served' - but when I try it myself, Einstein seems to win the race to the head of the queue about two-thirds of the time. In other words, the two different types of task both make progress, but not necessarily at equivalent speeds.

Could the fact that your stats are significantly different between the two projects explain it?
Einstein@Home 26,858,111 19,907 10 Dec 2005
SETI@home 43,786,299 10,720 4 Jul 1999

I know Boinc uses some formulas (complicated, to me) to schedule cpu tasks; does that extend to the gpu tasks as well? Could that be why your Einstein tasks are moving up to the head of the queue quicker?

Richard Haselgrove
Richard Haselgrove
Joined: 10 Dec 05
Posts: 2140
Credit: 2769901703
RAC: 933617

RE: RE: In theory, I

Quote:
Quote:
In theory, I think kernels are allocated to shaders 'first come, first served' - but when I try it myself, Einstein seems to win the race to the head of the queue about two-thirds of the time. In other words, the two different types of task both make progress, but not necessarily at equivalent speeds.

Could the fact that your stats are significantly different between the two projects explain it?
Einstein@Home 26,858,111 19,907 10 Dec 2005
SETI@home 43,786,299 10,720 4 Jul 1999

I know Boinc uses some formulas (complicated, to me) to schedule cpu tasks; does that extend to the gpu tasks as well? Could that be why your Einstein tasks are moving up to the head of the queue quicker?


LOL. That's a neat theory, but I'm afraid it doesn't hold water.

It was, indeed, the intention of the BOINC developers (around or a little after the time when they introduced CreditNew) to schedule the client according to overall credit - though they planned to use RAC, not total credit. But, as ClientSchedOctTen makes clear, their 'Proposal: credit-driven scheduling' had insurmountable problems, and they abandoned it in favour of the REC (estimated credit) scheme we actually use today.

Either way, BOINC only ever schedules jobs at the level of the macroscopic tasks that we're used to seeing listed in BOINC Manager. I don't think even project application developers can control the rate of kernel allocation from the micro-task queues I was talking about, and BOINC certainly can't (and doesn't attempt to) manage things right down at that level.

No, overall credit doesn't account for the differences I saw. But like you, I'd be interested to find out what the real explanation is.

mikey
mikey
Joined: 22 Jan 05
Posts: 11944
Credit: 1832506866
RAC: 217104

RE: RE: RE: In theory,

Quote:
Quote:
Quote:
In theory, I think kernels are allocated to shaders 'first come, first served' - but when I try it myself, Einstein seems to win the race to the head of the queue about two-thirds of the time. In other words, the two different types of task both make progress, but not necessarily at equivalent speeds.

Could the fact that your stats are significantly different between the two projects explain it?
Einstein@Home 26,858,111 19,907 10 Dec 2005
SETI@home 43,786,299 10,720 4 Jul 1999

I know Boinc uses some formulas (complicated, to me) to schedule cpu tasks; does that extend to the gpu tasks as well? Could that be why your Einstein tasks are moving up to the head of the queue quicker?


LOL. That's a neat theory, but I'm afraid it doesn't hold water.

It was, indeed, the intention of the BOINC developers (around or a little after the time when they introduced CreditNew) to schedule the client according to overall credit - though they planned to use RAC, not total credit. But, as ClientSchedOctTen makes clear, their 'Proposal: credit-driven scheduling' had insurmountable problems, and they abandoned it in favour of the REC (estimated credit) scheme we actually use today.

Either way, BOINC only ever schedules jobs at the level of the macroscopic tasks that we're used to seeing listed in BOINC Manager. I don't think even project application developers can control the rate of kernel allocation from the micro-task queues I was talking about, and BOINC certainly can't (and doesn't attempt to) manage things right down at that level.

No, overall credit doesn't account for the differences I saw. But like you, I'd be interested to find out what the real explanation is.

Thanks for the explanation!!
I know it will never happen, but I always liked the simple ways: FIFO, taking deadlines into account. That is why I have 15 machines here at home now - I just can't get them to work on multiple projects like I want, within my time frame. That, and I like achieving lots of goals, and more machines does that.

BobMALCS
BobMALCS
Joined: 13 Aug 10
Posts: 20
Credit: 54539336
RAC: 0

Thanks for the comments. An

Thanks for the comments. An interesting discussion.

As for running different projects simultaneously on a single GPU, it makes sense to do so when the scheduling requires it. I just had the wrong idea in mind.

However I should never see tasks requiring 1.49 GPUs running on 1 GPU. A bug there somewhere.

Another point is that the scheduler does not, as far as I can determine, account for the CPU time of GPU tasks together with the CPU time for CPU-only tasks. There always seems to be a sprinkling of questions as to why the GPU runs slow; the answer being that there is not enough CPU available to feed the GPU.

If I specify to BOINC that it should use, for example, 2 cores out of 4, then that is what I expect to happen. I do not expect to have to guess that I actually need another 1 or 2 spare cores. If the same scheduling algorithm initially used for just CPU tasks is still in use, then perhaps it needs to be upgraded to handle two combined resources (CPU+GPU).
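The arithmetic behind that complaint can be sketched with made-up numbers (a 4-core machine, a "2 of 4 cores" limit, and two GPU tasks each wanting a fraction of a core):

```python
# Hypothetical core-budget arithmetic; all numbers are illustrative.
allowed_cores = 2                  # "use 2 cores out of 4"
cpu_tasks = 2                      # one CPU-only task per allowed core
gpu_task_cpu = [0.2, 0.2]          # CPU support cost of two GPU tasks

demand = cpu_tasks + sum(gpu_task_cpu)
print(f"CPU demand: {demand:.1f} cores against a budget of {allowed_cores}")
# When demand exceeds the budget, the GPU tasks are starved of CPU support -
# the usual answer to "why does my GPU run slow?".
```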

However, it is likely that the BOINC developers consider that it works well enough most of the time to justify ignoring the occasional problem. Understandable, given their probable resource limitations. However, I still find it irksome.

BobM

mikey
mikey
Joined: 22 Jan 05
Posts: 11944
Credit: 1832506866
RAC: 217104

RE: Thanks for the

Quote:

Thanks for the comments. An interesting discussion.

As for running different projects simultaneously on a single GPU, it makes sense to do so when the scheduling requires it. I just had the wrong idea in mind.

However I should never see tasks requiring 1.49 GPUs running on 1 GPU. A bug there somewhere.

Another point is that the scheduler does not, as far as I can determine, account for the CPU time of GPU tasks together with the CPU time for CPU-only tasks. There always seems to be a sprinkling of questions as to why the GPU runs slow; the answer being that there is not enough CPU available to feed the GPU.

If I specify to BOINC that it should use, for example, 2 cores out of 4, then that is what I expect to happen. I do not expect to have to guess that I actually need another 1 or 2 spare cores. If the same scheduling algorithm initially used for just CPU tasks is still in use, then perhaps it needs to be upgraded to handle two combined resources (CPU+GPU).

However, it is likely that the BOINC developers consider that it works well enough most of the time to justify ignoring the occasional problem. Understandable, given their probable resource limitations. However, I still find it irksome.
BobM

Gpu crunching in general is only a few years old, so the coordination between cpu and gpu crunching is still evolving. Also, each project writes its own software to do the actual crunching. I liken Boinc to Windows: it provides a standard piece of software to crunch under, but each project does its own thing within the constraints of Boinc. This means that each project decides how efficient crunching will be and how many resources it will use. Some projects talk to each other, but not all of them, meaning some are pretty good about optimizing while others are less so.

In general, leaving one cpu core free for each Nvidia gpu is helpful for keeping the gpu full of data and crunching optimally, but at Asteroids for instance my Nvidia gpu only uses 0.01 of a cpu core for each unit crunched. AMD gpu's vary from project to project too, and generally use less, but again at a project like DistrRTgen my AMD gpu's were using 0.785 of a cpu core for each unit. Generally speaking, gpu's can do 10 times the amount of work in the same amount of time as a cpu core can, so leaving a cpu core free, at least initially, is a good idea.
