Comprehensive GPU performance list?

mikey
Joined: 22 Jan 05
Posts: 11888
Credit: 1828048866
RAC: 206948

HWpecker wrote:

Thank you for your reply, that is a great piece of information.

 

Now, do the same 1660 Ti results (or GPU results in general) from that graph apply to Einstein, or is SETI NVIDIA-leaning? Or is it better for my power-hungry 780 Ti to do some other project where NVIDIA beats AMD energy-wise, and leave AMD cards to do Einstein? I would really like to know how many WU/24h a single 1660 Ti crunches on Einstein@Home specifically, before maybe purchasing one.

 

I haven't been crunching for a long time, and almost all my past little crunches were E@H, my favorite project. I came back a few days ago to spend some kWh on E@H, and with that the low-TDP 1660 Ti drew my attention as a potential replacement for the 780 Ti. When I first started BOINCing I did a little crunching (testing) with SETI to give the aliens a small computing ear, but personally I find E@H a more interesting project.

 Greetings :D

The only way to know for sure is to run each project and keep track of where each card does best, but in general AMD cards do better here at Einstein. Einstein also reserves a CPU core for each GPU task to use, so be sure you compare apples to apples when you crunch other projects.
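If you ever want to adjust that reservation yourself, an app_config.xml in the Einstein project directory is the usual BOINC mechanism. Here's a minimal sketch, assuming a standard Linux install path and the gamma-ray pulsar GPU app - check your own client_state.xml for the exact app name before using it:

# Path and app name are assumptions - verify both against your client_state.xml.
cat > /var/lib/boinc-client/projects/einstein.phys.uwm.edu/app_config.xml <<'EOF'
<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>  <!-- one task per GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- reserve a full CPU core per GPU task -->
    </gpu_versions>
  </app>
</app_config>
EOF

Then have the client re-read its config files (Options -> Read config files in BOINC Manager) rather than restarting.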

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109382692824
RAC: 35964532

HWpecker wrote:
... do the same 1660 Ti results (or GPU results in general) from that graph apply to Einstein ...


I would be confident they won't, but I would really recommend that you wait until someone with a 1660 Ti reports their findings.  Hopefully, someone will respond - or one of the regulars might notice one as a 'quorum partner' :-).

HWpecker wrote:
... is it better for my power-hungry 780 Ti to do some other project where NVIDIA beats AMD energy-wise, and leave AMD cards to do Einstein?


Sounds like a good option to me :-).

HWpecker wrote:
... almost all my past little crunches were E@H, my favorite project. I came back a few days ago to spend some kWh on E@H, and with that the low-TDP 1660 Ti drew my attention as a potential replacement for the 780 Ti. When I first started BOINCing I did a little crunching (testing) with SETI to give the aliens a small computing ear, but personally I find E@H a more interesting project.


If maximum output for both purchase price and ongoing running costs is the main consideration driving your purchase decision, you should spend some time going through this thread.  There's a lot of reading, but you'll get the full story, warts and all.

The author is a long-time NVIDIA user who has made the transition to AMD.  He's a very careful and thorough person who was particularly keen on energy efficiency and good output at minimal/acceptable fan noise, since the computers are in a 'living room' type environment.  He has documented his energy-efficiency comparisons, and he seems (on balance) quite pleased with the eventual outcome.  He's very clear-thinking about what he wants and not at all a 'fan boy' of either brand.

His initial purchase was an RX 570, and he quickly followed up with a second one.  If they are still available at the 'post-crypto-mining-boom' low prices of around US$140, one of those would make an excellent replacement for a 780 Ti - roughly 3x the output for less power.  He has documented his power-consumption figures very thoroughly.  There's a lot to read and digest in his story, but it's worth it :-).

EDIT:  One other point I should mention.  All comments about GPU crunching at Einstein relate to the current long-running (and stable) gamma-ray pulsar search app (FGRPB1G).  There is a new search on the horizon - a GPU search for continuous gravitational-wave (GW) emissions from massive bodies like spinning neutron stars.  Until now, GW searches have been CPU-only.  A new GPU app is being tested, and initial indications are that its performance will (initially) be rather poor compared to that of the FGRPB1G app.  Performance should hopefully improve over time, but it's quite unclear how long that might take and how different brands/models of GPUs might be affected.

The 'holy grail' for this project will be the first detection of continuous GW emissions, so the die-hard supporters amongst us who don't crunch for credit will likely switch to the GW GPU search irrespective of credit reductions :-).   No doubt there will be a whole new round of discussions about the performance of the new GW app and which brands/models do best, so you might like to think about that aspect as well :-).

Cheers,
Gary.

Keith Myers
Joined: 11 Feb 11
Posts: 4699
Credit: 17542616669
RAC: 6377289

Thanks for your comment, Gary.  I didn't think it necessary to make any disclaimer, as I thought referencing a post at SETI implied I was talking ONLY about SETI.

Anyone who has been doing BOINC for any length of time should know that performance differences, and even data-precision requirements, for GPUs are project-specific.

AMD cards, with their stronger floating-point throughput (double precision in particular), have always been better than NVIDIA here.

 

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109382692824
RAC: 35964532

Keith Myers wrote:
Thanks for your comment, Gary.  I didn't think it necessary to make any disclaimer, as I thought referencing a post at SETI implied I was talking ONLY about SETI.


Sure, and all the 'regulars' will understand that.  I get to approve profiles for new volunteers (or people returning after a lengthy absence), so I'm rather conscious of a significant group of people who might want to get a GPU once they see how much more productive that can be.  They are unlikely to fully understand just how project-specific the performance can be.  They tend to be in a hurry, and the beautifully and professionally laid out information at SETI is so compelling ... so why the hell bother with any further research of their own? :-).

Keith Myers wrote:
Anyone who has been doing BOINC for any length of time should know that performance differences, and even data-precision requirements, for GPUs are project-specific.


I entirely agree for long-term volunteers who keep up to date with what's happening at the various projects they support.  Apart from new/returning people who don't have that knowledge, the bulk of volunteers probably belong to the 'set and forget' group, who also won't necessarily understand the performance differences.  A short disclaimer is a polite way of reminding them that they should still do their own research for any planned purchase.

In case you think otherwise, my post wasn't directed negatively at you at all, and I'm very sorry if it gave that impression.  I wasn't intending to 'lecture' you, although you may have thought I was.  It was really directed at people planning to jump on the GPU bandwagon without really understanding all the nuances.  After all, a thread entitled "Comprehensive GPU Performance List" is bound to attract lots of inexperienced people wanting an instant fix :-).  I gave the examples for their benefit - not yours - since I know that you are fully competent to make your own judgements about these things.

Cheers,
Gary.

abcde12345
Joined: 14 Apr 14
Posts: 10
Credit: 10676522
RAC: 0

Again, thank you for your replies.

 

The GTX 780 Ti to AMD consideration: the MSI RX 570 Gaming X 4GB seems to be a very quiet card. Does anyone have experience with that card?

 

Also, I've switched from Windows to Linux (openSUSE Leap 15.1) and am down to 2080 s/WU, but I cannot overclock the NVIDIA GPU. Does anyone know what package I can use for that? (I'm not a command-line expert.)

 

One more thing: I'm still curious about GTX 1660 Ti performance in WU/day or average s/WU on E@H specifically. Any GTX 1660 Ti owner reading this and willing to post that info - thanks in advance.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109382692824
RAC: 35964532

HWpecker wrote:
The MSI RX 570 Gaming X 4GB seems to be a very quiet card.


I have 3 of that exact model, running since Nov 2017.  I also have others from Asus, Sapphire and Gigabyte, and I haven't really noticed much difference in fan noise between any of them.  That's not really surprising, since they all run in a room with forced ventilation driven by large (and noisy) industrial fans :-).  I've had no hardware issues with any of them.

HWpecker wrote:
... I cannot overclock the NVIDIA GPU. Does anyone know what package I can use for that?


Do a Google search for 'nvidia coolbits' and you should find lots of hits.  I think there is even an ArchWiki entry describing how to set a number of things to do with frequency, voltage and fan control.  I don't use NVIDIA, so I don't have specific information, but some of the hits are bound to give you the details you need.
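If you just want to check whether Coolbits is already enabled on your machine, something like this should do it (assuming your X configuration lives in the usual places):

grep -i coolbits /etc/X11/xorg.conf /etc/X11/xorg.conf.d/*.conf 2>/dev/null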

HWpecker wrote:
... GTX 1660 Ti performance in WU/day or average s/WU on E@H specifically.


If you get no responses here, you could always try watching Shaggie's thread at SETI.  Given the recent mention of the 1660 Ti, you might find people announcing that they have one, or are getting one.  Maybe you'll get lucky and find such a post where the owner runs it here as well.

I did a quick look and found one of these GPUs in this user's computer list.  Unfortunately it's SETI-only.  As I was browsing, I think I saw a comment that the 1660 Ti performs similarly to a 1070 (I think, but I could be wrong).  If you browse enough posts, you may well be able to find such information.  If you look at how a 1070 performs at Einstein, you might get some basic idea of what a 1660 Ti might do.  Of course, it would be much better to find one working here and to discuss it with the owner.
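By the way, converting between the two metrics you asked about is simple arithmetic, assuming the card runs one task at a time:

# tasks per day = seconds in a day / seconds per task
# (2080 s/WU is just the figure quoted earlier; substitute your own)
echo '86400 / 2080' | bc -l    # ~41.5 tasks/day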

Cheers,
Gary.

cecht
Joined: 7 Mar 18
Posts: 1420
Credit: 2444512334
RAC: 1499644

Gary Roberts wrote:
I have 3 of that exact model, running since Nov 2017.  I also have others from Asus, Sapphire and Gigabyte, and I haven't really noticed much difference in fan noise between any of them.  That's not really surprising, since they all run in a room with forced ventilation driven by large (and noisy) industrial fans :-).

What are the GPU temperatures on those cards? My two RX 570s run between 64 °C and 69 °C, depending on the room temperature. I'm just wondering what a sustainable long-term temperature range is for those GPUs.
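For anyone wanting to check their own cards on Linux, the amdgpu driver exposes temperatures through sysfs; a quick sketch (the card and hwmon indices vary by system, hence the glob):

# edge temperature of the first AMD card, reported in millidegrees C
cat /sys/class/drm/card0/device/hwmon/hwmon*/temp1_input
# or, with the lm-sensors package installed:
sensors | grep -A 4 amdgpu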

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Keith Myers
Joined: 11 Feb 11
Posts: 4699
Credit: 17542616669
RAC: 6377289

Quote:
Also, I've switched from Windows to Linux (openSUSE Leap 15.1) and am down to 2080 s/WU, but I cannot overclock the NVIDIA GPU. Does anyone know what package I can use for that? (I'm not a command-line expert.)

What Gary said - the Coolbits tweak is your friend.  Setting the Coolbits option in xorg.conf enables fan-speed control and GPU core and memory clock adjustment for NVIDIA cards.  You can then use the NVIDIA Settings application to control each card, or run a script to set all your parameters for all cards before you start crunching.


# Writes an xorg.conf enabling Coolbits for all GPUs; restart X afterwards.
sudo nvidia-xconfig --thermal-configuration-check --cool-bits=28 --enable-all-gpus


You should be able to reclaim the memory-clock penalty NVIDIA imposes on consumer cards when the driver detects a compute load.  By adding an offset to the penalised power state, you can get back to the clocks the card would run at for a video load, per the published card spec.
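A minimal sketch of such a script, assuming Coolbits is already enabled, a single card at gpu:0, and that performance level 3 is the highest on your card (query yours first with 'nvidia-settings -q GPUPerfModes'); the offset and fan values are illustrative only:

#!/bin/bash
# Illustrative values - check your card's limits before applying anything.
nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=800"  # claw back the compute-mode memory clock penalty
nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=50"        # modest core clock bump
nvidia-settings -a "[gpu:0]/GPUFanControlState=1"                # take manual control of the fan
nvidia-settings -a "[fan:0]/GPUTargetFanSpeed=70"                # hold the fan at 70%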

 

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109382692824
RAC: 35964532

Thanks for the extra information about Coolbits - it should be helpful to the user.

Your last paragraph is a bit of an eye-opener.  I didn't realise that NVIDIA did sneaky things like that!  I guess they're trying to ensure fewer warranty claims from people running 24/7 heavy compute operations - perhaps of the mining type :-).

Cheers,
Gary.

abcde12345
Joined: 14 Apr 14
Posts: 10
Credit: 10676522
RAC: 0

Thank you,

 

I'll read up a bit before I risk burning the card.
