Is it worth waiting for the 4080 Super card in early January, or should I just get a 4080?
Nobody can answer that. If you can wait, then wait, and you'll see in January whether it's worth the money or not.
B.I.G wrote:
But why not go for the AMD card? According to the hosts I posted earlier in this thread, the 7900 XTX is both faster and cheaper than the 4080.
I went back and had a look at the figures you provided, and it would seem the 4080 is faster: it is in the 200-second range while the 7900 XTX is in the range of 300 to 400 seconds, although the average turnaround time is a lot lower on that card. The 4080 also has a lower power draw and is faster with the other projects I will be contributing to:
Amicable Numbers
Trial Factoring (SRBase): in the range of hundreds of seconds, as opposed to the low 90s; in the current range being worked on (74-75_436-446M) my 3080 can do these in 2 minutes 25 seconds.
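As a sanity check on those runtimes, converting seconds per task into throughput makes the gap concrete. The per-task times come from the post above; the 350 s figure is just my pick of a midpoint within the quoted 300-400 s range for the 7900 XTX:

```python
def tasks_per_hour(seconds_per_task: float) -> float:
    """Convert a per-task runtime into hourly throughput."""
    return 3600 / seconds_per_task

# 4080 at ~200 s/task vs 7900 XTX at ~300-400 s/task (figures from the post)
print(round(tasks_per_hour(200), 1))  # 18.0 tasks/hour
print(round(tasks_per_hour(350), 1))  # 10.3 tasks/hour at the assumed midpoint
```

So on these numbers the 4080 would complete roughly 1.5x to 2x as many tasks per hour, consistent with the conclusion drawn above.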
Yeah, the 4080 is better for sure, and it opens you up to projects that are CUDA-only and that the AMD can't contribute to, like GPUGRID.
In general, Nvidia performs better for BOINC. I really can't think of a project where AMD is objectively better. It's no wonder that the leaderboards for GPU projects are filled with Nvidia cards. The only exception to that seems to be valterc's 8x AMD MI100 datacenter cards (~$1500 each on the used market, plus a cooling solution needed). They are strong cards for sure, but they perform only about the same as, or a little better than, the 6x 3080 Tis on my system for Einstein BRP7 work.
Even Milkyway (before they pulled the GPU app), with its FP64 app, was dominated by low-cost older-gen Nvidia cards like Titan Vs or P100s instead of AMD solutions, despite AMD supposedly being better at FP64. It just never panned out that way in the results.
How times change... Is this because AMD went for a different architecture and lost the advantages it had with Polaris, or did NVIDIA add features that benefit BOINC?
Better drivers, more consistent, easy to install etc . . . for Nvidia. AMD . . . . good luck!
Both AMD and Nvidia have dumbed down their consumer cards' FP16/FP32/FP64 compute capabilities to force the general consumer looking for compute capability to their professional card lines. MORE $$$$ to their bottom lines.
AMD no longer has the floating-point advantage it once held in the Polaris generation era.
Nvidia is also generally more power efficient these days, which is probably a big consideration for those giving free compute resources to projects. More compute per watt is a big factor for me personally, much more important than the upfront cost of the hardware. AMD pretty much only beats Nvidia on gaming performance per dollar, or FPS/dollar, and that rarely translates to BOINC. But if you're someone after more density, Nvidia seems to always have the fastest single card.
I've been very impressed with the Titan V for BRP7 work. It runs about 30% slower than a 3080 Ti but uses less than half the power, only about 120 W for my cards. That makes its performance per watt about 2x that of the 3080 Ti, which is pretty incredible for an old card, and efficiency on par with the latest 40-series in this particular workload.
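That ~2x performance-per-watt figure checks out with simple arithmetic. The 30% slowdown and 120 W draw come from the post above; the ~350 W figure for the 3080 Ti in this workload is my assumption, not from the post:

```python
# Rough perf/watt comparison: Titan V vs 3080 Ti on BRP7 (sketch).
titan_v_relative_speed = 0.70   # ~30% slower than the 3080 Ti (from the post)
titan_v_power_w = 120           # from the post
rtx_3080ti_power_w = 350        # assumption, not from the post

titan_v_perf_per_watt = titan_v_relative_speed / titan_v_power_w
rtx_3080ti_perf_per_watt = 1.0 / rtx_3080ti_power_w

ratio = titan_v_perf_per_watt / rtx_3080ti_perf_per_watt
print(f"Titan V perf/watt is about {ratio:.1f}x the 3080 Ti's")  # ~2.0x
```

Any actual power draw between roughly 300 W and 400 W for the 3080 Ti would still put the ratio close to 2x.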
Keith Myers wrote:
Better drivers, more consistent, easy to install etc . . . for Nvidia. AMD . . . . good luck!
That is a myth that has been proven wrong hundreds of times by now. There were times when this was true for each manufacturer, at different times. But years later, it's time to let go and accept the present. There are differences, but to say one is generally better than the other in this regard: at the moment, no way.
Quote:
Both AMD and Nvidia have dumbed down their consumer cards FP16/FP32/FP64 compute capabilities to force the general consumer looking for compute capability to their professional card lines. MORE $$$$ to their bottom lines.
Now this completely contradicts the experience from other threads, where people said that those professional cards have less computing power in this project than the consumer cards. And on paper, the consumer cards have way more computing power than the Pro cards. I am talking about workstation cards here, not accelerator cards. How do you come to this conclusion; do you know of any hosts that have the Pro vs. the gaming card? I have one host with a W7600, by the way, so if somebody has an RX 7600 that would be great.
Ian&Steve C. wrote:
AMD pretty much only beats nvidia on gaming performance per dollar or FPS/dollar, and that rarely translates to boinc. but if you're someone after more density, nvidia seems to always have the fastest single card.
Well, here things get complicated. Yes, gaming doesn't translate to BOINC or productivity. There are quite a few tasks in which AMD cards are more efficient than NVIDIA cards, although in general, yes, the power efficiency of NVIDIA cards is great. And while power consumption often doesn't have such a big impact when working or gaming a few hours a day, when running a system 24/7 at full load... yes, power costs a lot of money.
As you can see, I have AMD cards only because of my specific application needs for work. My W7600 is using about 110 watts, and the system has a RAC of about 500,000, which I think is quite reasonable. Then again, what power consumption do the latest NVIDIA cards have at this RAC?
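For comparisons like the one asked for here, RAC per watt is the natural metric. A minimal sketch using the W7600 numbers from the post above; any NVIDIA figures would have to be filled in from your own host, the calculation itself is the point:

```python
def rac_per_watt(rac: float, power_w: float) -> float:
    """Credit efficiency: recent average credit per watt of GPU power."""
    return rac / power_w

# W7600 figures from the post above: ~500,000 RAC at ~110 W.
print(round(rac_per_watt(500_000, 110)))  # 4545 RAC per watt
```

Plug in another card's RAC and measured power draw to get a like-for-like efficiency number.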
Ian&Steve C. wrote:
I've been very impressed with the Titan V for BRP7 work. It runs about 30% slower than a 3080 Ti but uses less than half the power, only about 120 W for my cards. That makes its performance per watt about 2x that of the 3080 Ti, which is pretty incredible for an old card, and efficiency on par with the latest 40-series in this particular workload.
It is so interesting to see how some crazy old hardware can be extremely efficient for a certain task. I had the same experience with a Core 2 Duo CPU that beat a more modern, much higher-clocked Xeon in the Gamma-ray pulsar CPU calculations. And now that Xeon, although 13 years old, is as fast as the Ryzen 7800X3D on a single core with the Gamma-ray pulsar search #5 app, while with the BRP Arecibo data the AMD completely outclasses the old Intel.
I don't know what threads you have been reading; certainly not the ones where I try to help lost and confused people get their current-generation AMD cards to work with BOINC compute. I'm talking about dozens of forum threads here asking for assistance getting AMD cards and drivers installed correctly for compute.
I'm not talking about the video drivers used for display and games; I'm talking about compute. The latest consumer cards still do not work for compute loads because AMD has not released any ROCm compute drivers for them. Have those cards and you are out of luck. Only the old and power-hungry cards from the Polaris generation were easy to set up for compute. The prosumer cards are a different story; they are meant for compute in the first place.
No such issues with Nvidia drivers getting them to run compute loads.
Have you looked at the GPU leaderboards at projects? Almost no AMD cards there, only Nvidia. And plenty of prosumer Nvidia cards, so they must be good enough for the user compared to consumer cards. I'm talking about GPU counts greater than 1 or 2.
Look at the hosts that Ian's reply referenced with Nvidia Titan Vs. Only the old Radeon VII prosumer cards were basically equivalent. Try to get your hands on any of those. Good luck. Nobody is letting them go because they are too good for BOINC compute. And AMD had good drivers for them. Sadly, that's not the case with the latest AMD drivers.
Keith Myers wrote:
Better drivers, more consistent, easy to install etc . . . for Nvidia. AMD . . . . good luck!
B.I.G wrote:
That is a myth that was proven wrong hundreds of times now. There were times when this was true for both manufacturers each at a different time. But years later it's time to let go and accept the now. There are differences but to say one is generally better than the other in this regard: at the moment no way.
Be careful now, you said: "That is a myth that was proven wrong hundreds of times now." I'd like to see an example of WHEN this was proven the case for E@H (since we're chatting in the E@H forum).
I think Keith has a very valid point, and I'd take specific notice of it before jumping to a conclusion.
As for your comment: "Now this is completely contradicting the experiences from other threads where people said that those professional cards have less computing power in this project than the consumer cards."
Could you give specific quotes from the threads people are referring to in THIS PROJECT?
Remember, Keith, Ian, and I (among others) are in the GPU Users Group, and we all agree that NVIDIA suits our needs best.
I'll repeat what Keith said: "Better drivers, more consistent, easy to install etc . . . for Nvidia. AMD . . . . good luck!" With a clarification: AMD drivers can be much more 'finicky' than NVIDIA drivers, NVIDIA drivers are definitely easier to install than AMD drivers, and the NVIDIA ones are much more consistent than AMD's. NVIDIA GPUs are also better performers than AMD GPUs, which is why we chose NVIDIA. If AMDs are more "efficient" in terms of energy consumption, well, that doesn't weigh as much as their weaker performance compared to NVIDIA as far as we're concerned.
_________________________________________________________________________
This GPU caught my eye: https://www.tomshardware.com/pc-components/gpus/asus-lists-rtx-4070-gpu-with-a-blower-design-making-it-possible-to-build-a-budget-multi-gpu-machine-for-ai-and-deep-learning
A 4070 blower (official?) with standard 8-pin PCIe power connector? Count me in (well, I wish...).
Yeah, I saw that as well. With no price yet, it'll be a while. I doubt it'll be before X-mas, though.
Proud member of the Old Farts Association