As you surmised they're all reference designs. The rumor mill at release time was that the output volume of the cards was so low that no non-reference designs were to be made and I can't find any leaks of one since then.
After watching NowInStock for a couple of days, reviewing the recent history it shows of brief windows of pre-order or order availability, and considering the limited supply implied by the TweakTown story, I swallowed my dislike of paying extra and facilitating scalpers, and bought one of the allegedly new cards available on eBay for around $850.
It seems likely that this specific card model has a huge differential advantage in running Einstein Gamma-Ray Pulsar work under the current application relative to the performance it gives in game play. In cold economic logic, that makes the card worth more to me than to a game player (the target market).
I should be able to report my experience next week, and have started reducing the work queue on my main machine by way of preparation.
My biggest current concern is fan noise, and my hope is that the card will respond to power restriction and fan curve specification by Afterburner to give a satisfactory operating point. While I'm not a no-walls case person, the case this card will go into is unusually well ventilated (fans on five of the six surfaces!) and reviews do say the thermal solution is very capable--just that the default fan control is very aggressive in response to game loads. So I'm optimistic.
I just checked AMD SHOPPING and they were available for $699.
Shopping Cart: In Stock - Limit 1 Per Customer - $699.00
rjs5 wrote: I just checked AMD SHOPPING and they were available for $699.
I was unaware of that option. So apparently I paid too much.
I was too gloomy on availability. My NowInStock alerts started going off in the middle of the night. As I type, Amazon is accepting pre-orders for the Gigabyte-branded version with estimated delivery March 25-26, at the $699.99 list price with free delivery for Prime members. NowInStock thinks they also had windows offering the PowerColor for pre-order during the night.
I hadn't heard of NowInStock. Looks like it only tracks Newegg here in Aus.
There's virtually no supply of these cards in Australia, and the few that are here are selling for USD $880... no thanks.
The Eagle has landed.
Until an hour ago my main system hosted an RTX 2080 and a GTX 1070, both running 1X, with power reduced by a -40% limit imposed by MSI Afterburner. Under these conditions the box burned 295 watts, and gave average elapsed times of 8:55 from the 2080 and 13:40 from the 1070, for an indicated productivity (before subtractions for invalid rate, out-of-service time, and losses from sharing the machine with my web browsing, tax return prep, and such) of about 925,000.
Now it has a Radeon VII.
It came right up and got fresh "ATI" work from Einstein immediately. The first task consumed 3:39 elapsed time, and the initial power indication for the box was a bit over 300 watts. On a similarly over-optimistic calculation, that suggests Einstein daily credit production of a little over 1,300,000, and a very substantial power-productivity improvement of about 35%.
As I am running 1X and have not attempted any power limitation or fan control yet, the fan noise is appreciable and time-varying to a somewhat distracting degree. However, it is quite pleasing in character, lacking tones and other annoyances. While GPU-Z reports (from the sensor AMD prefers to have seen) an agreeable 75C temperature, TThrottle sees quite a lot more, and I had to immediately raise the TThrottle maximum temperature to avoid severe restriction. Until just now I had it set to 105, which was actually causing a slight amount of throttling, as the temperature TThrottle is controlling by typically reports around 100C. (I'm told the temperature the card controls to can get up to about 125, so if TThrottle is seeing the same one, there is quite a bit of safe headroom left, perhaps.)
My near-term intention is to let it run as-is to assess stability without tampering, and to wait for some successful validations. I've promoted some _2 and _3 tasks, so with luck I may get more than one fulfilled quorum pretty soon. (Update before hitting the "post" button--that worked, and I already have three validations.)
I'll then try 2X, expecting a nice productivity bump and a big decrease in fan noise variation. Then I'll try to reduce power and adjust fan speed with MSI Afterburner--which works nicely with my recently acquired RX 570. If that fails, I guess I'll have to try Wattman.
With my wish to save power and my dislike of fan noise, I may not get a lot more productivity out of this box, but I have good hopes it will respond to power limitation, with a nice further improvement in what is already a big power-productivity gain over the 2080 + 1070 configuration.
While this card is consuming rather more power than my 2080 was, it is outproducing it in Einstein credit by a factor of 2.3 already.
Continuing initial work with my shiny new Radeon VII.
Switching to 2X gave a big boost in productivity. Elapsed times run about 6:10, so indicated daily credit is about 1,600,000+. As power went up to 327 watts from 303 (both averaged over a few cycles by the same meter), the power productivity took another nice notch upward.
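For the curious, the "indicated daily credit" arithmetic in these posts can be sketched as below. This is a sketch under an assumption: about 3,465 credits per Gamma-Ray Pulsar task (the award in effect at the time); the `daily_credit` name is my own.

```python
# Sketch of the "indicated daily credit" arithmetic (assumption: about
# 3,465 credits per Gamma-Ray Pulsar task, the award in effect at the time).
CREDIT_PER_TASK = 3465
SECONDS_PER_DAY = 86400

def daily_credit(elapsed_mmss, concurrency=1):
    """Indicated credits/day from one task's elapsed time, running N at a time."""
    minutes, seconds = elapsed_mmss.split(":")
    elapsed = int(minutes) * 60 + int(seconds)
    completions_per_day = concurrency * SECONDS_PER_DAY / elapsed
    return completions_per_day * CREDIT_PER_TASK

print(round(daily_credit("3:39", 1)))  # 1X Radeon VII: about 1.37 million/day
print(round(daily_credit("6:10", 2)))  # 2X: about 1.62 million/day
```

These reproduce the quoted "a little over 1,300,000" and "1,600,000+" figures, ignoring invalid results and downtime just as the posts do.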
I think this is a rather bigger improvement from running 2X than I've been seeing on my recent Nvidia cards with the current Einstein application. I used the trick of prioritizing _2 and _3 tasks again, and already have validations.
As expected, the fan speed is greatly stabilized by the change from 1X to 2X. The temperature went up a bit more than I expected: GPU-Z reports an 82.4C average during 2X running (in a 75F room, in a pretty well-ventilated case). TThrottle thought it saw up to 110 and started slight limitation, so I've raised that limit to 115. The TThrottle-reported line averages, by eyeball, about 108, also greatly stabilized by the change to 2X.
While my RX 570 running 1X consumes only about 17% of a CPU, this much faster card draws considerably more CPU support. BoincTasks reported my 1X tasks as consuming about 43% of a CPU, and reports my 2X tasks as consuming about 28% of a CPU each, so the two together use 56%--still little enough of a (real) quad-core machine that I've raised process priority with Process Lasso and allowed the support tasks to use any core they like. Those measures have given more consistent elapsed times--comparing work done when I'm away with work done when I'm browsing and such.
Power limitation attempts should come next, but for the moment I think I'll sit back and lick my chops a bit.
Thanks very much for taking the time and effort to provide all the detail. It's very much appreciated!
I look forward to the results of how the card responds to power limitation.
Cheers,
Gary.
That did not take long. I noticed a backup of unreported work, found myself on a 20+ hour backoff, and spotted the message that I had exceeded my daily quota. I forget the exact number but it was definitely in the 300s, and definitely not enough even for the current (long-running) GRP work type, let alone the faster ones that might come back some day.
A tickle in my brain said I'd seen you folks talking about this, so within a few minutes I found myself editing the ncpus entry in the options section of my cc_config.xml. It had the value -1, and for the moment I gave it the value 16 (I genuinely have a 4-core CPU, without hyperthreading). Then I clicked Options -> Read config files in the BOINC Manager menu, and got some more work.
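For reference, the edit described amounts to something like this fragment of cc_config.xml (16 is the workaround value from the post; the server's daily task quota scales with the CPU count the client reports):

```xml
<cc_config>
  <options>
    <!-- Override the detected CPU count; -1 means "use the actual number".
         Inflating it raises the host's daily task quota. -->
    <ncpus>16</ncpus>
  </options>
</cc_config>
```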
On a less jolly note, when I came back to the machine after about an hour, it was merrily running work (as I could see by remote monitoring from another machine using BOINCTasks), but I could not get the attention of the machine itself: blank screen, and no response to mouse wiggling, keyboard pecking, Ctrl+Alt+Delete, or a few mild curses. So I did the dreaded long hold on the power button to force shutdown.
I've had my machines do that now and again in the past, but the last time was many months ago. I'm a bit concerned that, happening so soon after the Radeon install, it might indicate that something about the card or the driver has made this behavior more likely. In case it happens again, do any of you have a favorite wake-up trick for somnolent Windows machines?
archae86 wrote: ... it was merrily running work (as I could see by remote monitoring from another machine using BOINCTasks), but I could not get the attention of the machine.
When you say "merrily running", does that mean 'at the same rate'? In other words, by remote monitoring, could you see existing tasks finishing and new tasks starting, with roughly the same crunch time as before?
Something like this used to happen to me (say 18 months ago) quite a bit. With driver updates, there came a point where it became very much less frequent. In my case, the crunch rate would slow to a crawl. The machine was fully contactable remotely, even by BOINC Manager. Trying to stop and restart BOINC remotely tended to cause the machine to lock up, so I always rebooted which always seemed to solve the problem.
In Linux, there is a key combination that is very useful for a 'safe' reboot. Even though the screen remains black and there is no visible indication initially of anything happening, the key combination would always have the desired effect. The only way you knew it was going to work was the 3 keyboard lights flashing and the BIOS 'beep' which occurred a few seconds after hitting the magic sequence. Then the screen would light up. There never seemed to be any task damage due to what I assumed was a driver lockup that didn't crash the host itself.
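For concreteness, the "magic sequence" described is presumably the Linux Magic SysRq reboot (an assumption on my part; the post does not name it):

```shell
# Presumed "magic sequence": the Magic SysRq emergency reboot.
# With a black screen, hold Alt+SysRq (usually Alt+PrintScreen) and,
# a second or two apart, type:  R E I S U B
#   (unRaw keyboard, tErminate tasks, kIll tasks, Sync disks,
#    Unmount/remount read-only, reBoot)
# It only works if the kernel permits it:
cat /proc/sys/kernel/sysrq   # non-zero means some or all SysRq functions enabled
# To enable everything until the next boot (needs root):
#   echo 1 > /proc/sys/kernel/sysrq
# To make it persistent, set kernel.sysrq=1 in /etc/sysctl.conf.
```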
I'm guessing the problem was largely resolved when some driver 'fragility' was hardened :-). I still get this black screen behaviour occasionally. It can be when motherboards (or PSUs) have age related issues. I do a lot of replacing bulging capacitors on older hardware. This is often triggered by noticing certain hosts failing more frequently than usual. The machines seem to become much more reliable after such 'treatment' :-).
I don't know if there is any similar key sequence for emergency safe shutdown and reboot in Windows. Hopefully, you may get feedback from people like Mesman21 as to whether he is seeing any similar black screen behaviour for his card.
Cheers,
Gary.
Thanks for the posts ARCHAE86!
I was interested to see how this compared to the similarly priced 2080, and now I know.
This doesn't sound like a good card to run in the Australian summer though ;) With our air con set to 23°C (73°F), I can't get the ambient room temp below 29°C (84°F) in my study. My Threadripper already throttles :(