Latest data file for FGRPB1G GPU tasks

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4265
Credit: 244921706
RAC: 16859

Well, what is actually checked is the CUDA version, which ultimately depends on the driver. But it shouldn't matter - how could any piece of (system) software differ between two cards of the same type in the same system? You will always need a single piece of software (e.g. a driver) that supports both cards, won't you?

[nitpick edit: There was a time when the drivers for Mac graphics cards had to reside on the cards themselves and could therefore differ, but that model predates Mac OS X 10.2, or at least BOINC on OS X]

BM

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4265
Credit: 244921706
RAC: 16859

Richard Haselgrove wrote:

Good guess - you're right, plus there are extra factors considered lower down the priority order.

// return 1/-1/0 if device 1 is more/less/same capable than device 2.
// factors (decreasing priority):
// - compute capability
// - software version
// - memory
// - speed

What really puzzles me: assume you have a mainboard with an onboard NVidia GPU which is rather recent but has very little (and probably slow) memory. Plugged into it you have a slightly older card with dedicated, hence larger and faster, memory. Is there any way to tell BOINC to report and use the card instead of the onboard GPU without completely disabling the latter? Is there any configuration in which BOINC would prefer the onboard GPU and then might not get any GPU work because of its small memory?

BM

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 942
Credit: 25166626
RAC: 0

Well, you can configure the client to either ignore a certain GPU (here the built-in one) or use all GPUs, not just the one deemed best.
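
For example, a cc_config.xml along these lines does either one (a sketch - device numbers depend on your system, and older clients used <ignore_cuda_dev> instead of <ignore_nvidia_dev>):

<cc_config>
   <options>
      <!-- use every usable GPU instead of only the "best" one -->
      <use_all_gpus>1</use_all_gpus>
      <!-- or: hide NVidia device 0 (e.g. a built-in chip) from BOINC -->
      <ignore_nvidia_dev>0</ignore_nvidia_dev>
   </options>
</cc_config>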

Oliver

 

Einstein@Home Project

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4265
Credit: 244921706
RAC: 16859

Bernd Machenschalk wrote:
Is there any way to tell BOINC to report and use the card instead of the onboard GPU?

Yes, apparently there is, even per project (see <exclude_gpu> in https://boinc.berkeley.edu/wiki/Client_configuration).
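
A sketch of what that might look like (the project URL and device number here are just examples):

<cc_config>
   <options>
      <!-- don't use NVidia device 0 (the onboard GPU) for this project -->
      <exclude_gpu>
         <url>https://einsteinathome.org/</url>
         <device_num>0</device_num>
         <type>NVIDIA</type>
      </exclude_gpu>
   </options>
</cc_config>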

BM

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4265
Credit: 244921706
RAC: 16859

@Oliver: sorry, didn't see your post before posting myself.

BM

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752648717
RAC: 1486471

Bernd Machenschalk wrote:
What really puzzles me: assume you have a mainboard with an onboard NVidia GPU which is rather recent but has very little (and probably slow) memory. Plugged into it you have a slightly older card with dedicated, hence larger and faster, memory. Is there any way to tell BOINC to report and use the card instead of the onboard GPU without completely disabling the latter? Is there any configuration in which BOINC would prefer the onboard GPU and then might not get any GPU work because of its small memory?

I don't think that's very common. Even if your motherboard has an NVidia video chipset, it's unlikely to be CUDA compute-capable.

More commonly, the compute-capable embedded video will be an Intel iGPU, and if a more powerful NVidia or AMD discrete video card is added later, they can be chosen selectively with user preferences.

In the future, your scenario will come into play more often with the advent of the Ryzen G-series CPUs, which are compute-capable with the same drivers as the bigger AMD GPUs.

I more commonly see people buying a multi-slot motherboard and slowly populating it with a mix'n'(un)match range of GPUs purchased off eBay, as and when funds permit. Such a user could easily end up with a higher shader count on a lower compute-capability card - which one should be given precedence?

The code I referenced was probably developed in the very early days of GPU computing, although it's migrated between source files since then. That makes it possibly ten years old, and David certainly indicated that things were done then that he would do differently now, when the GPU market is more established and better known (he made that clear, for example, during his opening talk at the 2014 BOINC Workshop in Budapest).

It would be a good idea to review all of this from ground level upwards, but it would be a huge job.

Juha
Joined: 27 Nov 14
Posts: 49
Credit: 4962184
RAC: 20

Well, I went and dug out the commits that added the compare code. They're very close to ten years old.

https://github.com/BOINC/boinc/commit/90f863f08ca86ad20af2e66e6ab06f26123afd58
https://github.com/BOINC/boinc/commit/5adb25381d972f159f3fb8a32f29de8fda1351fe

The first one adds the code and comments, and the second one has the commit message.

About the software/driver version: does anyone remember what the very early CUDA-capable drivers were like? Driver packages are made of multiple components, each with its own version number. Could the driver packages have had an older version of some component for older GPUs and a newer version for newer GPUs? I did find one post from 2010 that didn't support that theory, but the code pre-dates that post.

While David is certainly capable of writing odd code, it seems to me even odder than usual that he would add a useless test for the driver version.

As for Keith's puzzle: BOINC only reports up to 4 GB VRAM for CUDA because Nvidia. And in the current incarnation of the code, the test is on available VRAM. If there is a difference of a few kilobytes, the values round up to 4 GB in the Event Log, yet to the code the cards are different. If you really want to know why the 1060 was picked as the best card, you need to re-create the setup and copy-paste coproc_info.xml here.
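
To illustrate the rounding point, a toy example (not BOINC code): two available-VRAM figures a few kilobytes apart print as the same rounded megabyte value, yet compare as unequal.

#include <cstdio>

int main() {
    double avail1 = 4096.0 * 1024 * 1024 - 8 * 1024;  // 8 KB short of 4 GB
    double avail2 = 4096.0 * 1024 * 1024;             // exactly 4 GB
    // What an Event Log style line would print, rounded to whole MB:
    printf("%.0fMB vs %.0fMB\n",
           avail1 / (1024 * 1024), avail2 / (1024 * 1024));  // 4096MB vs 4096MB
    // What a raw comparison sees:
    printf("same to the code? %s\n", avail1 == avail2 ? "yes" : "no");  // no
}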

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752648717
RAC: 1486471

Juha wrote:
BOINC only reports up to 4 GB VRAM for CUDA because Nvidia...

... only allows us to use 32-bit code and return values when querying the API.
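
That is (a toy illustration, not the actual API call): a 32-bit byte count tops out just below 4 GiB, so larger cards can't report their full memory.

#include <cstdint>
#include <cstdio>

int main() {
    uint64_t real_bytes = 8ULL << 30;  // an 8 GiB card
    // A 32-bit return value saturates at UINT32_MAX (4 GiB - 1 byte):
    uint32_t reported = real_bytes > UINT32_MAX
                      ? UINT32_MAX : (uint32_t)real_bytes;
    printf("real: %llu MB, 32-bit report: %u MB\n",
           (unsigned long long)(real_bytes >> 20),
           (unsigned)(reported >> 20));  // real: 8192 MB, report: 4095 MB
}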

Keith Myers
Joined: 11 Feb 11
Posts: 4699
Credit: 17542442276
RAC: 6373301

Too much work to replicate that configuration. It will be interesting to see how the two existing hosts are identified when the 1070s get replaced with 2080s in a few days. I assume that, as on my other Turing system, the CC level will take precedence.

 

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109381902831
RAC: 35961308

Keith, you mentioned in an earlier message that you thought your current config might have been identified as all 1080s rather than 1080Tis. I presume that a 1080Ti should be deemed 'better' than a 1080 :-).

If you really see 3 x 1080s, maybe you could provide Juha with coproc_info.xml, so that he can perhaps work out why.

Cheers,
Gary.
