With the 6900XT at default settings, I tried running 1-3 WUs at a time. The run times below were averages over some 10 WUs each, without any CPU load, so not a large result base.
p3d-cluster wrote: With the ...
In addition to the questions from the users above me, can you report on the relative speed and power consumption for gravitational wave tasks?
We removed the cards again ...
We removed the cards again from the system.
With the 20.45 ROCm-based driver, BOINC would recognise both cards, but computing would fail on device 1. clinfo would also segfault on the second card, regardless of which card that was. Both cards worked fine when installed alone.
Milkyway@home also terminates with errors; it has never liked Mesa/Clover or ROCm so far, and the new driver didn't change that.
We tried to use the OpenCL part of driver 20.40 plus the kernel AMDGPU driver, but clinfo didn't find Big Navi at all.
The EPYC systems are not optimised for power consumption or noise, so no testing was done in that direction.
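In case anyone wants to reproduce the enumeration failure with BOINC out of the picture, here is a minimal sketch of the kind of check clinfo performs, assuming pyopencl is installed (pip install pyopencl). If this already crashes while querying the second card, the fault lies in the OpenCL runtime rather than in BOINC:

```python
# Minimal OpenCL device enumeration, similar to what clinfo does.
# Queries every platform and every GPU device in turn.
import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform:", platform.name, "-", platform.version)
    for device in platform.get_devices(device_type=cl.device_type.GPU):
        print("  Device:", device.name)
        print("    Compute units:", device.max_compute_units)
        print("    Global memory:", device.global_mem_size // 2**20, "MiB")
```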
On top of everything else ...
On top of everything else (the shortage of next-gen GPU cards), there is a massive price run-up on GPUs being caused by record-high Bitcoin prices. :(
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor)
Tom M wrote: there is a ...
Perhaps the impact is more from other coins than from Bitcoin itself. I think the competition from efficient ASIC implementations has driven GPUs out of new builds for serious Bitcoin mining. But perhaps there are not yet efficient ASIC implementations for some of the many smaller and newer coins, not all of which hash the same way.
Whatever the details, yes, I can easily see the GPU price and availability situation out there. I imagine it is a combination of shut-in gamers with spare change and mining activity.
I should probably dust off the stack of 570 cards in my garage (which equipped my three machines before the 5700s) and peddle them on eBay. I'd not get a princely sum, but presumably the buyers would be happy to get them, and they would be out of my garage and into service.
As Gary reminds us, the 570 cards work well on Einstein GRP, but they are not very competitive anymore in the gaming market. But even they seem likely to sell easily for over $100 used on eBay at the moment. Possibly quite a bit over.
Meanwhile, every morning I look for the specific 6800 XT card that has caught my eye, and only find it offered by resellers at an appreciable multiple of list price.
no one (sane) is mining ...
No one (sane) is mining Bitcoin with GPUs, but the price of BTC does influence and drive the prices of all the other altcoins that are being mined with GPUs, most notably Ethereum.
Ian&Steve C. wrote: no one ...
And GridCoin to some degree, as it can partly be earned through BOINC.
p3d-cluster's reports are ...
p3d-cluster's reports are disappointing. How far behind the VII is the 6900 XT?
I was going to say that other benchmarks show an advantage for the 6000 series over the VII, but I see that the memory and double-precision advantage for the VII also shows up in some of Phoronix's generic benchmarks:
https://www.phoronix.com/scan.php?page=article&item=amd-rx6800-opencl&num=3
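For the double-precision part specifically, the spec sheets already hint at the size of the gap: Vega 20 (the VII) runs FP64 at 1/4 of its FP32 rate, while Navi 21 is limited to 1/16, and the VII also has roughly twice the memory bandwidth (about 1 TB/s HBM2 versus 512 GB/s GDDR6). A back-of-the-envelope sketch from spec-sheet numbers, not Einstein measurements:

```python
# Theoretical peak throughput from published specs (upper bounds only;
# real Einstein tasks are also sensitive to memory bandwidth).
def fp32_tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000  # 2 FLOPs/shader/clock (FMA)

vii_fp64 = fp32_tflops(3840, 1.75) / 4    # Radeon VII, 1:4 rate  -> ~3.4
xt_fp64 = fp32_tflops(5120, 2.25) / 16    # RX 6900 XT, 1:16 rate -> ~1.4

print(f"VII: {vii_fp64:.1f} TFLOPS FP64, 6900 XT: {xt_fp64:.1f} TFLOPS FP64")
```

So on paper the VII is more than 2x ahead in pure FP64; how much of that matters depends on how much of the app actually runs in double precision.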
(Hmm, I've been after a 6800 XT for two months, no stock available. No stock for the VII either, though, and at approximately the same price. My used 5700 XT has thermal problems and is unstable, so I'm wary of trying eBay again for a used VII. Overpriced there, anyway.)
Paul wrote: p3d-cluster's ...
I HATE the way eBay accommodates the price gougers!!!
I wish at least one of the testing sites for the new GPUs would pick a BOINC project or two and run a dozen units through it to give us an example of how the cards could perform. I realize their systems are often tweaked more than most of us would bother with, but the idea is just to run a few units, not to crunch for the next 3 weeks, i.e. running 10 units on Collatz, Einstein, Milkyway or Moo and then showing the times, temps, etc. Just pick the same kind of units for each project so we can compare the 6800 to the VII and the 5700 and the 6900, 3070, 3080 and 3090. As this progresses over time, a database of times would be built up, giving us an idea of which GPU would do better at which project. No, it won't help those that switch projects, but at least it's data, and that's never a bad thing.
Yeah, that is a good ...
Yeah, that is a good idea.
But there is no reason you couldn't run the BOINC apps as a regular benchmark. I don't think there is any legal reason that wouldn't be allowed. And you don't have to average over different WUs if you use the same WU on every GPU; that's how other benchmarks work. Also, I agree that would be way more helpful than the generic "OpenCL" benchmark in Geekbench, which is the only one I've found that lets you fairly compare performance over time and across product classes. These shoot-out reviews are nice, but nobody has time to test 30 GPUs every time a new card comes on the market. The fact that the VII isn't included in the current set of comparison GPUs shows the disconnect between the BOINC and enthusiast communities.
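Mechanically, a sketch of the "same WU everywhere" idea might be no more than the following, assuming you copy a finished task's input files out of a BOINC slot directory first (the app name and flags below are placeholders, not the real Einstein command line):

```python
# Time one fixed workunit on the local GPU, standalone, outside of BOINC.
# APP and ARGS are hypothetical; lift the real command line from the
# slot directory or stderr output of a completed task.
import subprocess
import time

APP = "./hsgamma_app"                          # placeholder binary name
ARGS = ["--inputfile", "example_wu.dat",       # placeholder input file
        "--device", "0"]                       # placeholder GPU selector

start = time.monotonic()
subprocess.run([APP] + ARGS, check=True)       # wait for the task to finish
print(f"Wall time: {time.monotonic() - start:.1f} s")
```

Run the same WU on each card and the wall times are directly comparable, which is all a benchmark database would need.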
On the other hand, the Phoronix test suite does have many OpenCL-based benchmarks, and you can see the variations based on architecture that folks like ARCHAE86 suggest could be very relevant. And it is quite fortunate that Phoronix even does these OpenCL-based test suite comparisons at all; no one else does that.
But still, no database of results is complete enough for what we want. We don't know how far behind the VII the 6900XT is on *our* workload. Is it a lot or a little? I was thinking someone on this thread might just know or remember the VII numbers and we could solve this mystery quickly.
Looking more closely at the Phoronix results I linked previously, however, it looks like the VII actually *is* much faster in most of the cl* group of benchmarks. So maybe it is not a close contest. In every other benchmark group, however, even the 6800XT is about 2x as fast as the VII, leaving a big disagreement that is clearly workload-dependent.
The VII is looking better, to me, but a measurement of the performance difference would make me feel a lot better about my next purchase.
Was able to order a 6900xt ...
I was able to order a 6900 XT directly from AMD. I would rather have had the 6800 XT (better price/performance), but I think I can't complain.
4 GRP tasks (1 CPU + 0.25 GPU each) complete in under 800 seconds. I haven't tested with more tasks yet.
This is with 100% CPU load in BOINC (Rosetta).
I also lowered the clock speed (to 2200 MHz) because otherwise it would constantly hit the memory thermal limit. The card is using about 70 watts less now compared to stock.
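For anyone wanting to replicate the 4-at-a-time setup: running multiple GRP tasks per GPU is done with an app_config.xml in the Einstein project directory. A minimal sketch, assuming hsgamma_FGRPB1G is the app name in use (check client_state.xml for the exact name on your host):

```xml
<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>0.25</gpu_usage>  <!-- four tasks share one GPU -->
      <cpu_usage>1.0</cpu_usage>   <!-- reserve a full core per task -->
    </gpu_versions>
  </app>
</app_config>
```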