Latest data file for FGRPB1G GPU tasks

Keith Myers
Joined: 11 Feb 11
Posts: 370
Credit: 307,378,474
RAC: 233,038


Well, I'm pretty sure it was identified as 4 × 1080s in the past, though I'm not positive. But now it is identified as 48 × 1080 Tis, since I am running the spoofed client for Seti. I would have to run my cache out and then go back to a stock client to test, and I'm not that willing. I'll have to wait for someone else to identify the issue.

One of my 3-card hosts identifies as 3 × 1070. The actual card list is 2 × 1070 and a 1080.

The second 3-card host identifies as 3 × 1070. The actual card list is 2 × 1070 and a 1070 Ti.

The third 3-card host identifies as 3 × 1070 Ti. The actual card list is indeed 3 × 1070 Ti.

[Edit]  The coproc_info.xml shows the 1070s and the 1080 to have identical specs: same RAM, same single- and double-precision floating-point performance. The only difference in the file is the multiprocessor count, which is 20 for the 1080 versus 15 for the 1070s.

It's probably the luck of the draw how the system gets identified, with such similar values in coproc_info.xml for the 1070s and 1080s.
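For what it's worth, the BOINC client picks a "most capable" GPU and then reports every card as that model. A rough sketch of that kind of comparison is below; the field names and tie-breaking order here are illustrative, not BOINC's actual code (which also weighs compute capability and driver/CUDA versions in gpu_nvidia.cpp). The point is that two Pascal cards tie on nearly every field, leaving only the multiprocessor count to break the tie:

```python
# Hypothetical sketch of a "most capable GPU" comparison, loosely modelled
# on the BOINC client. Field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class Gpu:
    model: str
    compute_capability: tuple  # (major, minor)
    ram_mb: int
    multiprocessor_count: int
    clock_mhz: int

    def peak_estimate(self):
        # A crude throughput estimate: SM count times clock. Cores-per-SM
        # is omitted since it is the same across these Pascal parts.
        return self.multiprocessor_count * self.clock_mhz

def most_capable(gpus):
    # Compare compute capability first, then RAM, then the estimate.
    return max(gpus, key=lambda g: (g.compute_capability, g.ram_mb, g.peak_estimate()))

host = [
    Gpu("GTX 1070", (6, 1), 8192, 15, 1683),
    Gpu("GTX 1070", (6, 1), 8192, 15, 1683),
    Gpu("GTX 1080", (6, 1), 8192, 20, 1733),
]

print(most_capable(host).model)  # prints "GTX 1080"
```

With identical capability and RAM, everything hinges on the last field, which may explain why near-identical cards sometimes get reported either way.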

BoincStats

mmonnin
Joined: 29 May 16
Posts: 248
Credit: 538,465,415
RAC: 450,957


The 'better/faster' card isn't right. Maybe in terms of compute functionality, but not in terms of 1080 > 1070. Why would the BOINC devs add that logic in? What happens with a Vega 64 vs. a Polaris 580? A 580 is a bigger number...

My guess is that it's whichever card is in the top slot, or listed first by the system to BOINC. I have a 1070 and a 1070 Ti in a system, and this is how they show up:

Coprocessors: [2] NVIDIA GeForce GTX 1070 (4095MB) driver: 396.51

I had the 1070 first, so it's in the 1st PCI-E slot and the 1070 Ti is in the 2nd.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 1,900
Credit: 127,668,984
RAC: 48,618


You'll really have to provide some hard evidence for that 'top slot' assertion. From what I've seen of the BOINC code, it has no knowledge at all of hardware details like that.

Jim1348
Joined: 19 Jan 06
Posts: 312
Credit: 175,407,529
RAC: 2,811


I vaguely recall a discussion from some years ago on Folding about that issue, since the Folding client sometimes identifies the wrong cards when multiple cards are installed (e.g., mixing up a GTX 970 with a 980). It was something of a black art, but I think the conclusion was that it depended on the OS (Windows vs. Linux, for example) and also the motherboard BIOS. You could never be sure which order was correct, but in my experience BOINC is more consistent in its method, whatever it is.

Keith Myers
Joined: 11 Feb 11
Posts: 370
Credit: 307,378,474
RAC: 233,038


I too think it has to do with the OS and how it identifies cards. For example, in Linux, nvidia-settings identifies this host's cards as:

GPU-0 RTX 2080

GPU-1 GTX 1070

GPU-2 GTX 1080

in order from top slot to bottom slot.

However nvidia-smi identifies the cards as:

GPU-0 GTX 1070

GPU-1 RTX 2080

GPU-2 GTX 1080

in order from top slot to bottom slot.
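One likely explanation for the two tools disagreeing: nvidia-smi enumerates devices by PCI bus ID, while the CUDA runtime defaults to "fastest first" ordering (this can be overridden with the CUDA_DEVICE_ORDER=PCI_BUS_ID environment variable). A toy illustration of how the same three cards come out in different orders under the two policies; the bus numbers and GFLOPS figures below are made up for the example:

```python
# Toy illustration: one set of cards, two enumeration policies.
# nvidia-smi lists by PCI bus ID; CUDA's default is "fastest first".
# Bus numbers and GFLOPS values are illustrative, not real measurements.
cards = [
    {"model": "GTX 1070", "pci_bus": 1, "gflops": 6500},
    {"model": "RTX 2080", "pci_bus": 2, "gflops": 10000},
    {"model": "GTX 1080", "pci_bus": 3, "gflops": 8900},
]

pci_order = [c["model"] for c in sorted(cards, key=lambda c: c["pci_bus"])]
fastest_first = [c["model"] for c in sorted(cards, key=lambda c: -c["gflops"])]

print(pci_order)      # ['GTX 1070', 'RTX 2080', 'GTX 1080']
print(fastest_first)  # ['RTX 2080', 'GTX 1080', 'GTX 1070']
```

Which ordering a given tool or client sees would then depend on which API it queries, not on anything BOINC itself decides.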

 

But adding the RTX 2080 yesterday to the hosts that previously had all Pascal cards (and identified them as such) instantly changed their identification to (3) RTX 2080 hosts.

So the BOINC code rule Richard posted seemed to have applied.


Aurum
Joined: 12 Jul 17
Posts: 35
Credit: 1,686,445,714
RAC: 3,746,405


I just rejiggered my fleet so I only have two cards of the same model on each motherboard. The extra space keeps them cooler. They both run at PCIe 3.0 x16. I can't tell what order they're listed in.

Two Degrees of Albert Einstein, One Degree of Sam Goudsmit.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 1,900
Credit: 127,668,984
RAC: 48,618


Keith Myers wrote:
So the BOINC code rule Richard posted seemed to have applied.

Nice to know that something in BOINC code works ;-)

Keith Myers
Joined: 11 Feb 11
Posts: 370
Credit: 307,378,474
RAC: 233,038


You would think that after debugging #PR2918 with you, I would have remembered what happens when Turing gets introduced to the GPUGrid and Einstein projects. Short memory. I dumped 3 GPUGrid tasks instantly before I put in my gpu_exclude statements.

Then I found myself not running any CPU tasks; they were all postponed and waiting to run again. Oh yeah, now I remember: you can't use max_concurrent and gpu_exclude together any more.

I eliminated the max_concurrent statements and used the local preferences for core control to get CPU work running again.
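For anyone following along, a GPU exclusion goes in the client's cc_config.xml and looks something like this (the project URL and device number here are examples; the documented <exclude_gpu> options also allow restricting the exclusion to a specific <app>):

```xml
<cc_config>
  <options>
    <exclude_gpu>
      <url>https://www.gpugrid.net/</url>
      <device_num>0</device_num>
      <type>NVIDIA</type>
    </exclude_gpu>
  </options>
</cc_config>
```

It takes effect after a client restart or a "Read config files" from the manager.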

What did I just eat for breakfast today . . . . .??


Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 3,835
Credit: 180,965,618
RAC: 38,288


Just FYI: since the current FGRP tasks run on all cards, we have a working solution to prevent the scheduler from sending old, problematic tasks to new cards, and our manpower is pretty limited, I'm postponing further work on that (FGRP) issue and attending to more urgent things. Among them is getting some GPU code for the Gravitational Wave search to work on E@H.

BM

Keith Myers
Joined: 11 Feb 11
Posts: 370
Credit: 307,378,474
RAC: 233,038


Yes, I have picked up the new TV app when I've requested more work here, so I don't need the gpu_exclude for Einstein anymore. Sadly, I still do for GPUGrid.

