Looks like an old AMD R9 390 still produces as much as an RX 480/580 or GTX 1080. The initial cost to get one is much lower, but it will draw more electricity.
Richie wrote: Looks like an…
I don't know about that. The cheapest AMD R9 390s I can find on eBay are $150. I just ordered an RX 580 from eBay at $89.50 including shipping.
On the other hand, I can pick up an R9 Fury for $125 locally, which is a really good price. I may have to grab that for my primary machine; it would game better than the RX 570, and I have an HD 6950 I want to retire, so the RX 570 can go into that machine.
kb9skw wrote: Richie…
You are right. One thing I didn't take into account was the difference in new-card prices between the USA and Europe. Here in Europe, new cards are already more expensive to begin with, but it's probably best for me to speak from an even more local perspective. Here in Finland, used RX 580s are more expensive than R9 390s (currently about a 60-70 euro difference). But prices are coming down for both, of course, as AMD and Nvidia keep announcing new models.
Also, when I wrote "initial cost to get one is much lower", that comparison works better against the GTX 1080, which is still clearly more expensive than those AMD cards here (something like 250 euros more).
Sadly, I can't try my new eBay RX 580 that arrived in the mail today, or that R9 Fury I caved and bought from someone locally. I put a 700 watt PSU in my desktop so I could power the R9 Fury and promptly fried the motherboard somehow.
Waiting on an RMA from the manufacturer, but I suspect I won't have a desktop PC for quite a while.
For those running Vega cards... I sure hope you're undervolting the GPU core and HBM2 to get the most out of your card. It lowers power consumption and heat and lets the card run like it should. Why so-called expert reviewers don't emphasize this more is beyond me, but I have my theories.
Anyway, on my Sapphire Nitro+ Vega 64, I run 1000 mV on both the core and HBM2. Average core clocks are over 1540 MHz for me, and I have the HBM2 set to 1000 MHz. I can pull 1.1 to 1.5M PPD depending on the data set when running 3 concurrent WUs.
bluestang wrote: For those…
Yes, AMD cards can benefit a lot from undervolting the GPU, and not just Vegas. My RX 570 (Windows 7 64-bit) can operate on MilkyWay down to 0.900 volts and runs quite cool. And on Folding, it can run at 1.050 volts before errors occur (on both, I can even increase the GPU clock from 1244 to 1348 MHz).
But here on Einstein, I have to keep the voltage at 1.150 V and the clock at 1244 MHz (both the defaults for this card), or else invalids result (not errors). So give it a try, but keep an eye on the results.
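The posters above are on Windows, where this is done through Wattman or similar tools. For anyone crunching on Linux, the same kind of adjustment can be made through the amdgpu driver's sysfs interface. This is only a rough sketch: it assumes a kernel with overdrive enabled (`amdgpu.ppfeaturemask`), root access, and that `card0` is your AMD GPU; the state index, clock, and voltage below are example numbers, not recommendations — read your own card's table first and change things in small steps.

```python
# Hypothetical sketch: undervolting an AMD GPU on Linux via amdgpu sysfs.
# Assumes overdrive is enabled (amdgpu.ppfeaturemask OD bit), root access,
# and that card0 is the AMD card. Values are examples only.
from pathlib import Path

table = Path("/sys/class/drm/card0/device/pp_od_clk_voltage")

# Print the current sclk/mclk state tables and the allowed ranges.
print(table.read_text())

# Set the top sclk state (index 7 on Polaris) to 1348 MHz at 1050 mV,
# then commit the modified table. Each write is a separate operation.
table.write_text("s 7 1348 1050")
table.write_text("c")
```

As with the Windows tools, drop the voltage a step at a time and watch for invalids, not just outright errors, since (as noted above) Einstein tasks can go invalid at voltages that other projects tolerate.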
I got a couple of RX cards temporarily to see how they would run, so here are just some quick observations.
Running 0104Y tasks 2x on every card, with all tasks currently pretty close to freq 900. These are the average "remaining" times BOINC displays for queued tasks after running plenty of similar ones; I can see these times hold up well against the actual completion times around those frequencies.
R9 390 + Linux Mint ... freq 884 ----- 09:58
R9 390 + Windows 7 ... freq 892 ----- 10:58
RX 580 + Windows 10 ... freq 892 ----- 13:05
RX 570 + Windows 10 ... freq 908 ----- 14:50
* All cards are running stock clocks, no overclocking. Hosts are almost identical (motherboards, CPU speeds, etc.).
** The R9 390s are from different brands, and I believe the one in the Linux host has slightly faster specs than the other model. But I have a feeling that particular card would still be a little slower if it were running Windows like the rest. I'm going to test that later when there's a peaceful moment to boot it into Windows.
A quick add-on to the R9 vs RX comparison below. I replaced the RX 570 with an RX 580, so there are now two cards of each type. All are running 1042L_188 tasks 1x and nothing else.
BOINC Event Log at startup:
R9 390 (5325 GFLOPS peak)
RX 580 (6359 GFLOPS peak)
Wikipedia also lists the single-precision processing power of those cards as 5120 and 5792-6175 GFLOPS respectively.
Then here's the reality with this current Einstein app and data set:
                 run time   CPU time
R9 390  Linux      500         76
R9 390  Win7       510        170
RX 580  Win10      595        190
RX 580  Win10      605        190
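To put numbers on the gap those figures show, here is a quick sketch comparing the nominal peak-GFLOPS ratio against the measured run times. I'm averaging the two RX 580 rows (595 and 605) and comparing against the Windows R9 390 to keep the OS family the same; all numbers come from the posts above.

```python
# Nominal vs measured speed ratio, RX 580 vs R9 390, using the figures above.
r9_390_gflops = 5325
rx_580_gflops = 6359
r9_390_run_s = 510    # Windows 7 host
rx_580_run_s = 600    # average of the two Windows 10 hosts (595 and 605)

# On paper the 580 should be ~19% faster...
nominal = rx_580_gflops / r9_390_gflops
# ...but by run time it is actually ~15% slower (speed = 1 / run time).
actual = r9_390_run_s / rx_580_run_s

print(f"nominal RX 580 / R9 390 speed ratio: {nominal:.2f}")  # 1.19
print(f"actual  RX 580 / R9 390 speed ratio: {actual:.2f}")   # 0.85
```

A more mundane suspect than anti-gravity: the 390's 512-bit memory bus gives it roughly 384 GB/s of bandwidth against the 580's ~256 GB/s, and Einstein GPU tasks are often reported to be bandwidth-sensitive rather than purely compute-bound.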
I believe that's got something to do with anti-gravity and the solar winds. The older cards for some reason are more receptive to those phenomena and are willing to internally take the benefits of the northern lights. So if those waves here at planetary ground level gently wash the cards thoroughly, a tunneling effect happens which instantly causes a boosted bit stream. When the data lines are momentarily introduced to heavily warped anti-gravitational bending, it opens up sort of additional time slots for the card to do useful things. I suspect the card might be doing some calculations in double precision even if the app was not programmed to do that. But as I understand it, that happens by accident, because the processing just becomes so lubricated, and it comes for free anyway, since there's that additional time happening in those moments. The resulting data from processing in those "slots" might right afterwards look like it came from nowhere, because the bending has already disappeared by then. It's fascinating that this doesn't seem to disturb the process as a whole. Even the electricity that was used while the chips were operating in those temporary dimensional cracks is free, I believe, because it can't be traced back.
It's unbelievable how much the total run time can drop when waves from space sometimes interact at an exceptionally favorable angle in the heart of the R9 390. There must have been some serious lubrication going on while running this task:
https://einsteinathome.org/task/825563335
How does the R9 280X compare?
I see them as great value these days. Power hungry but great.
Chooka wrote: How does the R9…
They're power hungry enough to largely eliminate the nominal savings from crunching on them. They've got ~40-50% of the nominal performance of a 580, but at 65 W higher power consumption (250 W vs 185 W); at average US power rates (12¢/kWh), a 280 would cost about $80/year more to run. Looking on eBay, I see 280s selling for as low as $40 and 580s as low as $120. The 580 would pay for itself in lower energy bills in about a year while doing twice as much work.
At this point they're only potentially worth it for gamers on really tight budgets.
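The payback arithmetic above can be sanity-checked with the post's own figures. At exactly 12¢/kWh and 24/7 crunching, the extra running cost comes out nearer $68/year than $80 (the higher figure would need a slightly higher rate), which still puts the payback on the $80 price gap at roughly a year:

```python
# Rough payback arithmetic for the R9 280 vs RX 580 comparison,
# using the figures from the post above.
extra_watts = 250 - 185          # 280 vs 580 board power
hours_per_year = 24 * 365
price_per_kwh = 0.12             # average US rate, $/kWh

extra_cost_per_year = extra_watts * hours_per_year / 1000 * price_per_kwh
payback_years = (120 - 40) / extra_cost_per_year   # $120 vs $40 card prices

print(f"extra running cost: ${extra_cost_per_year:.0f}/year")        # $68/year
print(f"580 price premium paid back in {payback_years:.1f} years")   # 1.2 years
```

And that's before counting the 580's roughly 2x output, which makes it the better buy for crunching at almost any power price.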
Well there you go!
Thank you Danneely.