How can 1650 be so productive?

gravitonian
Joined: 24 Feb 13
Posts: 2
Credit: 80,097,599
RAC: 0
Topic 220377

I was looking for hosts with cards close to my 1660 and found this computer: https://einsteinathome.org/ru/host/12793829. Its GTX 1650 completes a Gamma-ray WU in 500 seconds, while my 1660 Super does these WUs in 900 seconds. And the computer receives about 1.5 times more credit than it could with such performance (86400/500 * 2 ≈ 1,200,000). How is this possible?


And another question: what is normal performance for a 1660 Super on GRP and Gravitational Wave WUs?

Sorry for my English.

San-Fernando-Valley
Joined: 16 Mar 16
Posts: 90
Credit: 2,280,294,424
RAC: 1,520,145

Don't worry, I have several other fast/faster GPUs in the same rig as the GTX 1650.

For some reason, the statistics show only the slowest one and just assume that the rest are all also GTX 1650s.

Even "old" finished WUs (before I added the GTX1650) are now shown "wrongly".

I don't know if this "problem" is a known issue or not ...

But it is a "nice way" to irritate my fellow crunchers and me!

San-Fernando-Valley
Joined: 16 Mar 16
Posts: 90
Credit: 2,280,294,424
RAC: 1,520,145

On different PCs:

GTX 1650 on Gamma-ray pulsar binary search #1 on GPUs:

Run times between 1600 and 1800 seconds.


gravitonian
Joined: 24 Feb 13
Posts: 2
Credit: 80,097,599
RAC: 0

Thank you for the information.

archae86
Joined: 6 Dec 05
Posts: 2,823
Credit: 3,301,517,699
RAC: 2,550,175

As you might infer from the comments user San-Fernando-Valley gave you, any kind of card-level performance estimation using data from multi-card machines is error-prone.

I think for Nvidia cards the rule is that if a host has more than one Nvidia card, the card model name used is the one with the highest compute capability number.

That sounds like it means highest performance, but it does not.  It is more like a revision level of the CUDA architectural hardware elements.

To be specific, if a host operates a previous generation high-end card, and a newer low-end card, you'll see the listing seem to say that it has two of the low-end new card.

How it chooses when both cards are at the same capability level I don't know.  Maybe by slot position on the motherboard?

Keith Myers
Joined: 11 Feb 11
Posts: 832
Credit: 753,128,988
RAC: 1,348,248

The GPU detect module pulls the card info from the appropriate vendor API. The order in which BOINC shows the cards is based on the GPU-detect line in the Event Log printout at BOINC startup and how it ranks the cards' capabilities:

1. CUDA level

2. Compute capability

3. Memory amount

4. GFLOPS rating

This is how BOINC determines which card is the highest-performance card. As you surmised, a newer-generation Turing card with CC 7.5 beats out a Pascal card with CC 6.1, even though a Pascal card such as a GTX 1080 Ti is in reality more powerful than a GTX 1650, judging by their respective GFLOPS ratings.


San-Fernando-Valley
Joined: 16 Mar 16
Posts: 90
Credit: 2,280,294,424
RAC: 1,520,145

Thanks to you both for the interesting explanation.

I checked the CUDA versions and, indeed, the GTX 1650 has the newest/highest version number of all my GPUs.

Good to know.

catavalon21
Joined: 5 Nov 11
Posts: 1
Credit: 8,472,355
RAC: 10,167

Good to know, thanks.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 1,935
Credit: 270,807,615
RAC: 232,034

archae86 wrote:
Maybe by slot position on the motherboard?

No, it's done in software.

https://github.com/BOINC/boinc/blob/master/client/gpu_nvidia.cpp#L136
