How does E@H look at multi-GPU systems?

GWGeorge007
Joined: 8 Jan 18
Posts: 3,014
Credit: 4,926,047,771
RAC: 81,597
Topic 223824

Does E@H look at the multiple GPUs in a given system and select the lowest common denominator (the slowest or oldest card) as the reported coprocessor for everything in the system?

In my 3950X system I had 2x RTX 2070 Super GPUs, and then one went south and died.  I then pulled the RTX 2060 out of my other system and installed it in my 3950X alongside the remaining RTX 2070 Super.  Since then, my 3950X has been reporting:

Coprocessors: [2] NVIDIA GeForce RTX 2060 (4095MB) driver: 456.55

Looking at BOINC Manager under Tasks, I have E@H running two GW tasks per GPU (0.5 GPU utilization factor), each task using one CPU core.  With that said, according to BOINC Manager, the RTX 2060 is taking about 21 min/task and the RTX 2070 Super about 20 min/task on the GW work.
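(For reference: that 0.5 factor can also be set locally with an app_config.xml in the project's folder under the BOINC data directory.  A minimal sketch follows; the app name here is only a placeholder and has to match the real GW app's short name.)

<app_config>
  <app>
    <name>einstein_O2MD1</name>   <!-- placeholder; use the actual GW app name -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- half a GPU per task = two tasks per GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- one CPU core per task -->
    </gpu_versions>
  </app>
</app_config>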

Am I reading this information correctly?

BTW, I am running Microsoft Windows 10 Professional x64 Edition, (10.00.18363.00) using BOINC client version: 7.16.11.

George

Proud member of the Old Farts Association

Keith Myers
Joined: 11 Feb 11
Posts: 4,920
Credit: 18,474,458,419
RAC: 5,914,077

This has nothing to do with E@H.  The card detection is a BOINC function, and one that BOINC commonly gets wrong.

BOINC identifies the cards by picking the one with the highest theoretical performance, ranked in this order (see the sketch below the list):

1. Highest compute capability (CC value)

2. GFLOPS rating

3. Amount of memory

4. Driver level
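Here is a minimal sketch in Python (not BOINC's actual C++ code) of the ordering described above; the GFLOPS, memory, and driver numbers are only illustrative:

# Rank GPUs the way the list above describes: compute capability first,
# then peak GFLOPS, then amount of memory, then driver version.
def gpu_sort_key(gpu):
    return (gpu["cc"], gpu["gflops"], gpu["mem_mb"], gpu["driver"])

gpus = [
    {"name": "RTX 2080",    "cc": (7, 5), "gflops": 10068, "mem_mb": 8192,  "driver": 450.80},
    {"name": "GTX 1080 Ti", "cc": (6, 1), "gflops": 11340, "mem_mb": 11264, "driver": 450.80},
    {"name": "RTX 2070",    "cc": (7, 5), "gflops": 7465,  "mem_mb": 8192,  "driver": 450.80},
]

best = max(gpus, key=gpu_sort_key)
print(best["name"])  # -> RTX 2080: CC 7.5 beats 6.1, then GFLOPS breaks the tie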

[Edit]

My Threadripper host has four GPUs:

RTX 2080

GTX 1080 Ti

RTX 2070

RTX 2070

Yet BOINC identifies the host as having four RTX 2070s.

The CC rating of the RTX cards is the same.

The memory on the cards is the same.

The video driver is the same.

The RTX 2080 has a higher GFLOPS rating than the RTX 2070s, so BOINC should identify the host as having four RTX 2080s, not 2070s.

GWGeorge007
Joined: 8 Jan 18
Posts: 3,014
Credit: 4,926,047,771
RAC: 81,597

Keith Myers wrote:

This has nothing to do with E@H.  The card detection is a BOINC function, and one that BOINC commonly gets wrong.

BOINC identifies the cards by picking the one with the highest theoretical performance, ranked in this order:

1. Highest compute capability (CC value)

2. GFLOPS rating

3. Amount of memory

4. Driver level

The RTX 2080 has a higher GFLOPS rating than the RTX 2070s, so BOINC should identify the host as having four RTX 2080s, not 2070s.

Thanks Keith, once again, for shedding light on a dimly lit layperson like myself.  But...

I'm not sure where to find the CC value.  Is it the measured floating point speed or the measured integer speed, or a combination of both?  I've looked around, even at the Stderr Output from completed GW tasks, and I don't see anything related to a CC value.  Either I've overlooked it or I'm not looking in the right place(s).

I've seen the GFLOPS rating when looking for something else, but I can't remember where I saw it.

The memory and driver level are a given; even I can figure those out.

Could you enlighten me once more?  Please?

TIA

George

Proud member of the Old Farts Association

Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1,702,989,778
RAC: 0

George wrote:
I'm not sure where to find the CC value.

https://developer.nvidia.com/cuda-gpus#compute

CUDA-Enabled GeForce and TITAN Products
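If you'd rather query the cards directly on your own machine, a sketch like this also works (assuming Python with the pynvml bindings installed, e.g. "pip install nvidia-ml-py"):

import pynvml

# Ask the NVIDIA driver (via NVML) for each card's compute capability.
pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()
    major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)
    print(f"GPU {i}: {name}  compute capability {major}.{minor}")
pynvml.nvmlShutdown()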

Keith Myers
Joined: 11 Feb 11
Posts: 4,920
Credit: 18,474,458,419
RAC: 5,914,077

It is printed out in the Event Log at the very beginning of the file every time you start BOINC.

It is part of the output where the installed GPUs are detected by BOINC.

21-Oct-2020 16:14:12 [---] Data directory: /home/keith/Desktop/BOINC
21-Oct-2020 16:14:13 [---] CUDA: NVIDIA GPU 0: GeForce RTX 2080 (driver version 450.80, CUDA version 11.0, compute capability 7.5, 7982MB, 7751MB available, 10598 GFLOPS peak)
21-Oct-2020 16:14:13 [---] CUDA: NVIDIA GPU 1: GeForce RTX 2080 (driver version 450.80, CUDA version 11.0, compute capability 7.5, 7979MB, 7508MB available, 10598 GFLOPS peak)
21-Oct-2020 16:14:13 [---] CUDA: NVIDIA GPU 2: GeForce RTX 2080 (driver version 450.80, CUDA version 11.0, compute capability 7.5, 7982MB, 7751MB available, 10598 GFLOPS peak)
21-Oct-2020 16:14:13 [---] OpenCL: NVIDIA GPU 0: GeForce RTX 2080 (driver version 450.80.02, device version OpenCL 1.2 CUDA, 7982MB, 7751MB available, 10598 GFLOPS peak)
21-Oct-2020 16:14:13 [---] OpenCL: NVIDIA GPU 1: GeForce RTX 2080 (driver version 450.80.02, device version OpenCL 1.2 CUDA, 7979MB, 7508MB available, 10598 GFLOPS peak)
21-Oct-2020 16:14:13 [---] OpenCL: NVIDIA GPU 2: GeForce RTX 2080 (driver version 450.80.02, device version OpenCL 1.2 CUDA, 7982MB, 7751MB available, 10598 GFLOPS peak)
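If you want to pull those fields out programmatically, a small sketch like this works on a saved copy of the log (BOINC writes the same lines to stdoutdae.txt in the data directory; the regex just matches the CUDA lines shown above):

import re

# Extract GPU index, name, compute capability, and peak GFLOPS from
# the CUDA detection lines in a saved BOINC Event Log.
line_re = re.compile(
    r"CUDA: NVIDIA GPU (\d+): (.+?) \(driver version .*?"
    r"compute capability ([\d.]+), .*?(\d+) GFLOPS peak\)"
)

with open("stdoutdae.txt") as log:  # adjust the path to your setup
    for line in log:
        match = line_re.search(line)
        if match:
            idx, name, cc, gflops = match.groups()
            print(f"GPU {idx}: {name}  CC {cc}  {gflops} GFLOPS peak")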

 

GWGeorge007
Joined: 8 Jan 18
Posts: 3,014
Credit: 4,926,047,771
RAC: 81,597

Keith Myers wrote:

It is printed out in the Event Log at the very beginning of the file every time you start BOINC.

Thank you AGAIN!  I didn't even think to look in the Event Log.

 

Richie wrote:

https://developer.nvidia.com/cuda-gpus#compute

CUDA-Enabled GeForce and TITAN Products

Thank you Richie.  I was unaware of the developer side of Nvidia.  Since I'm not a developer, I just never thought to look there.

George

Proud member of the Old Farts Association
