Any suggestions on GPU choice?

Boca Raton Community HS
Joined: 4 Nov 15
Posts: 244
Credit: 10650965586
RAC: 17058247

B.I.G wrote:

Now this completely contradicts the experiences from other threads, where people said that those professional cards have less computing power in this project than the consumer cards. And on paper the consumer cards have way more computing power than the Pro cards. I am talking about workstation cards here, not accelerator cards. How do you come to this conclusion? Do you know of any hosts that have the Pro versus the gaming card? I have one host with a W7600, btw, so if somebody has an RX 7600 that would be great.

Um, yes. We have a "few" workstation cards if you would like to look. It depends on the actual work being done and how the code was written to use the GPU. Most of the work on BOINC will not use anything that makes a workstation card "better" than a consumer card (though it would be great if it did). Examples: tensor cores, ray-tracing cores, and VRAM quantity. There are pros and cons for workstation GPUs that I would be more than happy to expand on, but most on here are well aware of those already. 

Ben Scott
Joined: 30 Mar 20
Posts: 53
Credit: 1635982260
RAC: 4864169

B.I.G wrote:

Keith Myers wrote:

Better drivers, more consistent, easy to install, etc. ... for Nvidia. AMD ... good luck!

That is a myth that has been proven wrong hundreds of times by now. There were times when this was true for each manufacturer, at different points. But years later it's time to let go and accept the present. There are differences, but to say one is generally better than the other in this regard: at the moment, no way.

 

On Linux it is no myth: the AMD driver landscape is a minefield of missing drivers, terrible docs, and installation nightmares. I would never recommend AMD for GPU compute on Linux; it just isn't worth the headache.

mikey
Joined: 22 Jan 05
Posts: 12711
Credit: 1839114599
RAC: 3624

Ben Scott wrote:

B.I.G wrote:

Keith Myers wrote:

Better drivers, more consistent, easy to install, etc. ... for Nvidia. AMD ... good luck!

That is a myth that has been proven wrong hundreds of times by now. There were times when this was true for each manufacturer, at different points. But years later it's time to let go and accept the present. There are differences, but to say one is generally better than the other in this regard: at the moment, no way.

 

On Linux it is no myth: the AMD driver landscape is a minefield of missing drivers, terrible docs, and installation nightmares. I would never recommend AMD for GPU compute on Linux; it just isn't worth the headache.

I wouldn't disagree with you for over 90% of the Linux distros out there, maybe even more than that, but there is ONE distro I know of, Zorin OS, that does add in the AMD drivers by default. Yes, you can use an Nvidia GPU, but you have to click to add the drivers in during the installation process. The problem for me is that Zorin is not as easy as the more usual distros, so it is probably not for newbies even though they say it is 'Windows-like'. It is also Ubuntu-based. From their website: "Zorin OS comes with NVIDIA, AMD, and Intel graphics drivers as well as game optimizations, so you can get great performance easily." 

https://distrowatch.com/table.php?distribution=zorin

Beko Pharm
Joined: 4 Dec 23
Posts: 2
Credit: 876182
RAC: 0

This is highly confusing for someone like me coming from Linux gaming. Trying to break this down - have patience with me.

Usually it's the NVIDIA drivers that are not included in Linux distributions and are a pain in the neck on each update. This is mostly thanks to NVIDIA wheedling around GPL restrictions, so the "open" shim connecting the kernel to their proprietary driver breaks all the time. It is true, though, that most distributions offer an option to add a third-party repository for this driver with one additional click, so most users do not have to deal with this directly anymore, and the kernel module gets recompiled automatically on kernel updates anyway. And CUDA really does just work once installed.

AMD users, however, usually just roll with the open-source Mesa driver nowadays. This just works™ without having to install anything beyond what the distribution already provides, even at system install. OpenCL support is provided by ROCm, because Mesa's OpenCL is apparently not fully feature-complete on Polaris and Vega. Navi 1x and Navi 2x seem to be fully supported by now with OpenCL 2.0 and 2.1. Bonus: we even get firmware updates via fwupd (v1.9.6+) nowadays.

There is also the AMDGPU-PRO driver, which is proprietary, not shipped by most distributions, and usually far behind, but it comes with fully supported OpenCL for Polaris GPUs. Anything more modern uses ROCm anyway, even when AMDGPU-PRO is installed - so why bother with this particular driver nowadays?

Source: https://en.wikipedia.org/wiki/ROCm

And in a crunching age long, long ago there was also the fglrx driver by ATI - which was always a major pain in the neck to get going - and its open-source equivalents "radeon", "r128" and "mach64". I'd really consider this off topic in 2023, though.

 

Did I get the gist of this right?

 

BTW: This is from my Fedora 38 workstation without jumping through any hoops, like downloading drivers outside of the packages provided directly by the distribution:

[---] OpenCL: AMD/ATI GPU 0: AMD Radeon RX 6700 XT (driver version 3558.0 (HSA1.1,LC), device version OpenCL 2.0, 12272MB, 12272MB available, 14618 GFLOPS peak)
[---] OpenCL: Intel GPU 0: Intel(R) UHD Graphics 630 (driver version 23.35.27191.9, device version OpenCL 3.0 NEO, 25561MB, 25561MB available, 221 GFLOPS peak)

Well, to be fair, there was one show stopper here. SELinux had to be persuaded to be permissive with boinc started via systemd (it worked fine when run as a user, though), because it blocked access to the ROCm device /dev/kfd - but that's hardly the driver's fault and is really very Fedora-specific.
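For anyone who wants to double-check what BOINC sees, a minimal sketch along these lines (assuming the pyopencl package is installed; the queries themselves are standard OpenCL device-info attributes) prints roughly the same details as the startup lines above:

# Minimal sketch: enumerate OpenCL platforms/devices much like BOINC's startup log.
# Assumes the pyopencl package is installed; all queries are standard clGetDeviceInfo attributes.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(f"{platform.name}: {dev.name} "
              f"(driver {dev.driver_version}, {dev.version}, "
              f"{dev.global_mem_size // (1024 * 1024)} MB)")

If ROCm (or the Intel NEO runtime) is installed correctly, the devices show up here exactly as they do in the BOINC log.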

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3971
Credit: 47341492642
RAC: 65957986

Linux AMD drivers are only well sorted for gaming. Mesa OpenCL is broken most of the time and isn't well supported for a lot of compute loads, or has terrible performance, including with most of the OpenCL applications at BOINC projects. Here at Einstein you can't even process BRP7 on Mesa, as the project explicitly excludes Mesa drivers and won't send you work. 
 

I’ve never understood the obsession with drivers being “included” in Linux. They’re not included in Windows (AMD or Nvidia or otherwise), and installing drivers (IMO) is just a fact of life when adding many kinds of add-on devices. Having to install a driver makes no difference to me, and probably not to most people. 
 

The difference is that the Nvidia Linux install process goes smoothly most of the time, whereas installing the AMD Linux driver is fraught with a complicated minefield of kernel compatibility issues, incomplete features, and sometimes overall buggy behavior. I had to jump through hoops to get proper OpenCL 2.0 features working on Polaris cards, something the hardware should support, but the AMDGPU drivers from the AMD website were broken and wouldn’t execute certain OpenCL 2.0 code despite “claiming” to be 2.0. It only worked once I found the right recipe to install ROCm. A lot of novice users have trouble with this; it’s not as straightforward as click, click, click, done. And recent ROCm doesn’t even officially support the consumer cards; they are focusing their support on CDNA accelerators, and the consumer cards that end up working are a happy accident, not intentional support.  
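As a rough illustration of what "claiming 2.0" versus actually exposing 2.0 features means, here is a hypothetical probe (again assuming pyopencl; the SVM query is an OpenCL 2.0 device attribute that may be missing or simply error out on stacks that don't really support it):

# Hypothetical probe: does the device only report OpenCL 2.0, or can it answer a
# 2.0-specific query (shared virtual memory capabilities)? Assumes pyopencl;
# the attribute may be missing or raise an error on older drivers/headers.
import pyopencl as cl

for dev in [d for p in cl.get_platforms() for d in p.get_devices()]:
    print(dev.name, "reports:", dev.version)
    try:
        print("  SVM capability flags:", int(dev.svm_capabilities))
    except (AttributeError, cl.LogicError):
        print("  SVM query failed: 2.0 features not actually usable on this stack")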
 

If you’re a Windows user, you have fewer issues, as AMD puts way more work into their Windows drivers (still buggy and less polished than Nvidia’s) because they’d be burned at the stake if Windows users had to deal with the issues that Linux users do. 


B.I.G
Joined: 26 Oct 07
Posts: 117
Credit: 1180715977
RAC: 949571

OK, but then this is specific to AMD on Linux, and that's important to say, because probably most people who contribute to the project use Windows. I also made the error of assuming the system I work with, which is Windows, so sorry for not asking before writing my reply. On Windows - with the gaming cards - for quite a while now it's always the same with AMD: the initial release drivers are buggy, and a year later the drivers are excellent and the cards gain performance. For some reason AMD prefers to develop them while they sell the product, while NVIDIA has good drivers from the start but doesn't improve to the level AMD reaches in the end. That's what I meant by different: some people prefer one or the other.

And I do think that even here in the forum it's important to mention whether something applies only to a BOINC project or to general card performance, because after all many people use their daily computer to participate. Like me: I'm invested in the project, I love to read the forums, and I even consider 24/7 crunching as a criterion when buying systems. I'm willing to spend more on hardware that performs better on BOINC, yet the system is made primarily to satisfy my work needs.

Boca Raton Community HS wrote:

There are pros and cons for workstation GPUs that I would be more than happy to expand on, but most on here are well aware of those already. 

General ones, or Einstein@Home-specific? Because the general ones I know; the project-specific ones I don't, and I would be very interested to hear them. But I also have the impression this goes into a level of detail that's hard to keep up with. When I look at how much difference there can be between estimated and real performance depending on the task and the generation of card used... there is a temptation to call it all just voodoo ;)

Beko Pharm
Joined: 4 Dec 23
Posts: 2
Credit: 876182
RAC: 0

B.I.G wrote:
And I do think that even here in the forum it's important to mention whether something applies only to a BOINC project or to general card performance, because after all many people use their daily computer to participate.

Yes, same here. That's mostly my angle. It's mostly my battlestation for work _and games_ (plus some crunching from the home server, an old HP ProLiant Gen8). And only in the wintertime. I have to heat during that time anyway, so I may as well do something productive with the wasted energy xD

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3971
Credit: 47341492642
RAC: 65957986

AMD also kind of exaggerates their compute specs, which rarely translate to real-world performance. With RDNA3 they have "dual-issue" capability, which effectively doubles their rated compute specs, but that doubling is almost never achievable in practice.

One of the SIMD32 units is capable of both INT and FP compute, in addition to Matrix, while the other can only process FP and Matrix instructions. Each of the SIMD32 vector units (pair) can execute one wave64 FMA or two wave32 instruction groups in a single clock cycle. However, this is the absolute peak throughput, possible only on paper. In Wave32 mode, the two 32-wide FMA instructions have access to only one operand register (vGPR), instead of two and an intermediate shared value.
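To put rough numbers on that dual-issue doubling, here is a back-of-the-envelope sketch (the shader count and clock are illustrative assumptions for a Navi 33 class card, not measured values):

# Back-of-the-envelope FP32 peak for an RDNA3 part, showing how dual-issue
# doubles the spec-sheet number. All figures are illustrative assumptions.
shaders = 2048          # stream processors (Navi 33 class)
boost_clock_ghz = 2.4   # assumed boost clock
flops_per_fma = 2       # one fused multiply-add counts as two FLOPs

single_issue_tflops = shaders * flops_per_fma * boost_clock_ghz / 1000
dual_issue_tflops = 2 * single_issue_tflops   # the marketing peak

print(f"single-issue peak: {single_issue_tflops:.1f} TFLOPS")   # ~9.8
print(f"dual-issue peak:   {dual_issue_tflops:.1f} TFLOPS")     # ~19.7

Real workloads usually land much closer to the single-issue figure.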


Boca Raton Community HS
Joined: 4 Nov 15
Posts: 244
Credit: 10650965586
RAC: 17058247

B.I.G wrote:

General ones, or Einstein@Home-specific? Because the general ones I know; the project-specific ones I don't, and I would be very interested to hear them. But I also have the impression this goes into a level of detail that's hard to keep up with. When I look at how much difference there can be between estimated and real performance depending on the task and the generation of card used... there is a temptation to call it all just voodoo ;)

As a simple example, take O3ASHF1 before the newest work units. The extra VRAM allowed me to run 4x easily. However, this doesn't matter with the newest work units. Other than that, the general advantages (consistent power consumption, blower-style cooling, size, longevity) all still apply to E@H work. As far as the calculations done on E@H go, since they are all "basic" calculations, there is really no specific advantage at the moment. 

Well, there is one example that should be brought up in reference to how this all came up. The RTX 6000 Ada will most likely (on paper) outperform the RTX 4090 at factory settings; the 4090 IS slightly crippled in comparison (16,384 active cores in the 4090 versus 18,432 in the RTX 6000 Ada). No RTX 6000 Ada GPUs have shown up here that I have seen... maybe we will be able to change that next calendar year.

 

Ian&Steve C. wrote:

AMD also kind of exaggerates their compute specs, which rarely translate to real-world performance. With RDNA3 they have "dual-issue" capability, which effectively doubles their rated compute specs, but that doubling is almost never achievable in practice.

One of the SIMD32 units is capable of both INT and FP compute, in addition to Matrix, while the other can only process FP and Matrix instructions. Each of the SIMD32 vector units (pair) can execute one wave64 FMA or two wave32 instruction groups in a single clock cycle. However, this is the absolute peak throughput, possible only on paper. In Wave32 mode, the two 32-wide FMA instructions have access to only one operand register (vGPR), instead of two and an intermediate shared value.

 

This single paragraph will give me more to learn than anything in a while. Ha!

B.I.G
Joined: 26 Oct 07
Posts: 117
Credit: 1180715977
RAC: 949571

Ian&Steve C. wrote:

AMD also kind of exaggerates their compute specs, which rarely translate to real-world performance. With RDNA3 they have "dual-issue" capability, which effectively doubles their rated compute specs, but that doubling is almost never achievable in practice.

Thanks, that could explain this. Comparing my W5500M in the notebook, which is RDNA 1, to the W7600 in the desktop, which is RDNA 3, I did notice that the W7600 should deliver at least 50% more RAC according to the specs than it actually does. The W5500M is rated at a bit less than 5 teraflops, the W7600 at 20. I can't compare them 100%, because in the desktop the card is running at its TDP, while if the laptop had better cooling I could get even better results, but you get the picture.
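Using the rough figures quoted above (purely illustrative arithmetic, not measured data), the dual-issue rating alone could explain most of that gap:

# Illustrative comparison using the TFLOPS figures from the post above.
# The RDNA3 number is a dual-issue rating; halving it approximates what a
# workload that can't dual-issue actually has available.
w5500m_rated_tflops = 5.0    # RDNA1, "a bit less than 5"
w7600_rated_tflops = 20.0    # RDNA3, dual-issue rating
w7600_single_issue = w7600_rated_tflops / 2

print("speedup suggested by the spec sheet:", w7600_rated_tflops / w5500m_rated_tflops)   # 4.0
print("speedup without dual-issue:", w7600_single_issue / w5500m_rated_tflops)            # 2.0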

 

I wondered if memory bandwidth plays a role there, because the W7600 really isn't a card with great specifications, but when I bought the system the W7700 wasn't available, and I didn't want to spend €2,500 on a W7800 versus the €600 I had to spend for the W7600. The W7700 at €1,000 would have been the sweet spot in every respect. Sucks when you have to buy right at that moment.

 

Boca Raton Community HS wrote:

(consistent power consumption, blower-style cooling, size, longevity)

Well, I have mixed feelings there. Yes, a blower-style cooler is great for getting the heat out of the case, but every blower-style fan, be it AMD or NVIDIA, has failed me in the past and I had to get a replacement - which is an absolute pain in the rear. And with the single-slot workstation cards, the cooling is completely inadequate: the fan has to spin at 3,000-4,000 rpm, which is very loud, and I don't see how it can last long at that speed. Time will tell if materials have improved, but for the noise alone I'd wish for a top-down dual- or triple-fan cooler and then just install another 1-2 case fans to create an airflow that pulls the heat out.

On the other hand, when I used Apple I had to get gaming cards as upgrades, and the last one, the RX 580, still works flawlessly after running almost the entire time 24/7 for 4 years, so I don't know what else to wish for.

 

In general, while the fan sometimes failed, no card itself ever failed me, except the ones in the MacBooks; those machines gave me nightmares, but that is a different story.
