Nvidia Pascal and AMD Polaris, starting with GTX 1080/1070, and the AMD 480

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0

RE: RE: That would seem

Quote:
Quote:
That would seem to indicate a problem with the tables.....

I'm not at all a betting man but I'd be prepared to wager my entire credit stash on one simple proposition :-).

I believe the tables don't take into account the concurrency with which the GPU tasks have been crunched.

Here's a scenario to think about. I run Linux and I have lots of HD 7850 (Pitcairn) GPUs. The table rates them at 0.208 compared to a Tahiti at 0.538. In other words, the ratio between them is 2.59 to 1.

My Pitcairns all run 4x and are run on a variety of both AMD and Intel boards from quite recent to eight years old. The results are surprisingly consistent across all architectures (this wasn't always the case) and really indicates what a great job Bikeman (Heinz-Bernd) did in making the app run at top efficiency on the different PCIe generations. Over all these different hosts, the crunch time per task averages out to be around 77 mins with very little deviation, maybe +/- a couple of minutes.

I recently bought a Tahiti series GPU on ebay. It's running 6x and currently averaging 46 mins per task. I'm still testing but I feel it may be at its 'best' efficiency for my purposes. So I would contend that the true ratio between a Tahiti and a Pitcairn is more like 1.67 to 1. So why the big difference from what the table gives?

I would contend that all my Pitcairn results are being taken as 308 mins (the actual elapsed time of a task) without any regard for the fact that a concurrency of 4x makes it really 77 mins per task. So nasty people like me are skewing the results to make them look worse than they really are :-). Of course I don't know this for a fact so I am only guessing :-).
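The arithmetic behind this guess can be sketched in a few lines of Python. The 308-minute and 46-minute figures come straight from this post; the 0.208 and 0.538 ratings are the table values quoted earlier:

```python
# Effective time per task = elapsed (wall) time divided by the number
# of tasks run concurrently on the GPU.
def effective_minutes(wall_minutes, concurrency):
    return wall_minutes / concurrency

pitcairn = effective_minutes(308, 4)   # 77.0 min per task at 4x
tahiti = 46.0                          # already an average per completed task

print(pitcairn / tahiti)               # ~1.67 -- the contended "true" ratio
print(0.538 / 0.208)                   # ~2.59 -- the ratio implied by the table
```

If the table is built from raw elapsed times, the 308-minute figure goes in unadjusted and the Pitcairn looks 2.59x slower than a Tahiti instead of 1.67x.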

BTW, thanks very much for the link. I'm getting so old and decrepit that I'd completely forgotten that such information existed. I had seen it quite a while ago but had completely forgotten about it.

EDIT: I should also make it very clear that this isn't the only factor - there are probably quite a few. An obvious one is that there are big differences in the performance of a given app type over time. CUDA32 compared to CUDA55 is one that's currently in play. Another is that the very high-end AMD GPUs (Hawaii??) return invalid results above a concurrency of 1x. This would skew the relative worth of affected GPUs to look better than they really are because some of the GPUs below them are going to be running 2x, 3x, etc, and adding longer run times to the stats.

The 'take-home' message should be to treat the numbers as a rough guide only and hope that early adopters like Archae86 give us the 'real deal'. It's great that there are people willing to do this and, particularly, willing to document it to the level of detail that gives great confidence that the numbers are accurate.

That makes much more sense. I was going to ask how those tables are generated, and I see they have changed since I last posted, so I assume they are automatically populated.

I was pondering how to make this more useful for people deciding which card to use in a build. Digging through "post your results" threads is tedious and prone to human error. Simply populating the tables with GFlops compute ratings of cards would not account for things like differences in app efficiency or the Maxwell series' under-performance. Perhaps a way to make useful tables would be to have a "test app" (enabled by prefs, as usual) that was simply another copy of the most current GPU app, coded only to allow one simultaneous unit per GPU, and the server only sending X units to be processed per week per host running test apps. That would give reasonably accurate, automated benchmarking people could use for card shopping.

mikey
mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828138831
RAC: 204413

RE: Perhaps a way to make

Quote:
Perhaps a way to make useful tables would be to have a "test app" (enabled by prefs, as usual) that was simply another copy of the most current GPU app, coded only to allow one simultaneous unit per GPU, and the server only sending X units to be processed per week per host running test apps. That would give reasonably accurate, automated benchmarking people could use for card shopping.

The argument against this has always been 'but you get no credits for running the unit', so people have always said no. Your option to voluntarily 'opt in' to run the tests is one I haven't seen before, but it doesn't get around the 'no credits will be given for running the workunit' problem. Maybe an idea would be to set up a single workunit with a fixed time limit, say 10 minutes, and use that as the benchmark workunit. Running a 10-minute workunit once a week isn't a major problem for most people, and it doesn't take much time away from earning credits either.

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0

RE: RE: Perhaps a way to

Quote:
Quote:
Perhaps a way to make useful tables would be to have a "test app" (enabled by prefs, as usual) that was simply another copy of the most current GPU app, coded only to allow one simultaneous unit per GPU, and the server only sending X units to be processed per week per host running test apps. That would give reasonably accurate, automated benchmarking people could use for card shopping.

The argument against this has always been 'but you get no credits for running the unit', so people have always said no. Your option to voluntarily 'opt in' to run the tests is one I haven't seen before, but it doesn't get around the 'no credits will be given for running the workunit' problem. Maybe an idea would be to set up a single workunit with a fixed time limit, say 10 minutes, and use that as the benchmark workunit. Running a 10-minute workunit once a week isn't a major problem for most people, and it doesn't take much time away from earning credits either.

I can't see a reason not to give any credit if it were simply a copy of the current GPU app, but restricted to 1 concurrent unit per GPU. That would, however, mean a slight loss of overall credit for the brief time spent running it if the GPU would otherwise run multiple units.
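That "slight loss" is easy to estimate. In the sketch below, the 308-minute/4x numbers are the Pitcairn figures from earlier in the thread, while the 90-minute 1x benchmark time is a purely hypothetical figure for illustration:

```python
# Credit cost of a 1x benchmark run, measured in foregone tasks:
# the tasks the GPU would normally have finished in that time, minus
# the one benchmark task that (in this proposal) still earns credit.
def tasks_foregone(benchmark_minutes, normal_wall_minutes, concurrency):
    normal_rate = concurrency / normal_wall_minutes  # tasks per minute
    return benchmark_minutes * normal_rate - 1

# Hypothetical: a 1x benchmark taking 90 min on a GPU that normally
# runs 4x with 308-minute elapsed times.
print(tasks_foregone(90, 308, 4))  # ~0.17 of one task's credit, once a week
```

So even for a GPU that benefits strongly from running multiple units, the weekly cost would be a fraction of one task's credit.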

Your idea would work as well. It would be nice to have a robust way to show differences between GPUs.

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0

So much for a Day One

So much for a Day One purchase...I've been periodically checking Amazon to see if they had a pre-order page up yet. Apparently they did, and EVGA sold out in 6 minutes, Zotac in 12.

A few are being sold on eBay for nearly double MSRP. I wonder how many of the bidders are aware that the Founder's Edition will continue to be sold during the life of the product, and is not a limited-release collector's card?

mikey
mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828138831
RAC: 204413

RE: I can't see a reason

Quote:


I can't see a reason not to give any credit if it were simply a copy of the current GPU app, but restricted to 1 concurrent unit per GPU. That would, however, mean a slight loss of overall credit for the brief time spent running it if the GPU would otherwise run multiple units.

Your idea would work as well. It would be nice to have a robust way to show differences between GPUs.

And as long as it was 'opt in' the people running the units shouldn't care about the very short hiccup in their credits either. Plus if it happened only once per week, or even once per month as things averaged out, it wouldn't even be noticeable. What it may take though is someone to set up and administer the program so it stays on track and the numbers get to the right place so they are usable.

archae86
archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023934931
RAC: 1805324

RE: So much for a Day One

Quote:
So much for a Day One purchase...I've been periodically checking Amazon to see if they had a pre-order page up yet. Apparently they did, and EVGA sold out in 6 minutes, Zotac in 12.


Somehow I got the EVGA Founders Edition 1080 into my Amazon wish list a few days ago. After reading your note I checked, and as of this moment my wish-list item shows no Amazon availability but one third-party seller. That third party is listed as offering preorders for the tidy sum of $1999, with a note saying that if I want delivery by June 10 I should select expedited shipping at checkout. I don't think I'll bite on that particular seller.

How sure are you that the 6/12 minute pre-order matter was for Amazon themselves, as distinct from an Amazon Marketplace partner?

Daniels_Parents
Daniels_Parents
Joined: 9 Feb 05
Posts: 101
Credit: 1877689213
RAC: 0

If someone would be able to

If someone were able to test a GTX 1080 (and someone else a GTX 1070 later on) with a 64-bit Windows version (and of course Linux too, though that's not of interest to me), I would contribute twice $25 (to the tester's PayPal account). Perhaps others are also interested in knowing what Pascal can do for us crunchers and are willing to chip in some money too (a kind of crowdfunding) for the experts here. These people would buy the cards (as cheaply as possible), put them through their paces, and publish the results in this forum. The tester could keep the graphics card as a reward, not only for testing the new architecture but also for the time spent helping all of us "non-experts" (more or less :-) with our small and big problems. What do you think?

Arthur

I know I am a part of a story that starts long before I can remember and continues long beyond when anyone will remember me [Danny Hillis, Long Now]

archae86
archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023934931
RAC: 1805324

RE: What do you think about

Quote:

What do you think?

Arthur


Arthur,
I currently plan to buy a 1080 the moment it is available, and a 1070 soon after availability (unless the 1080 outcome is too disastrous). A primary intention of my purchase is to provide useful information here, specifically to help others whose primary interest is Einstein to make better-informed purchasing (or refraining) decisions.

As it happens, I don't wish to receive any payment in appreciation for this, but your proposal raises my hopes that my activity and postings may be of some use and some interest.

Possibly your offer might interest someone else.

Regarding my own systems and intentions:

My initial test system, which is my soon-to-be daily driver currently being commissioned, is a Windows 10 Pro system running on a mid-range Haswell quad-core. In preparation for this 1080 opportunity I chose a more capable power supply than I otherwise might have (an 850-watt Seasonic X-series) and a more thoroughly ventilated case (a Corsair 400R with fans mounted at all eight external mount locations, though most running rather slowly).

Extending the idea of being useful, after my initial work I may listen to and perhaps comply with suggestions of tests I might conduct. If I agree that they are likely to help people make good judgments on Einstein performance, and good purchase decisions, I am likely to embrace a suggestion. I have no interest in conducting game-oriented tests. I'm not very interested in SETI at the moment, but have deep respect for the whole Lunatics team in general and Richard Haselgrove and Jason Gee in particular, and if there is something reasonable I can do here which they would find useful, I'd be strongly inclined to try it.

I'm not a Linux person, nor an Apple person, so everything coming from me will be on Windows 64 bit. I may provide some Windows 7 results later on.

Gamboleer
Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0

RE: How sure are you

Quote:
How sure are you that the 6/12 minute pre-order matter was for Amazon themselves, as distinct from an Amazon Marketplace partner?

Apparently "nowinstock.net" shows how quickly items sell out once they become available. Here is a news article referencing Kotaku, which was apparently the original source monitoring and reporting what it saw on nowinstock:

http://www.techtimes.com/articles/160399/20160523/nvidia-geforce-gtx-1080-preorders-sell-out-in-minutes-on-amazon-best-buy-and-newegg.htm

I know Amazon's API allows separation of 3rd party new vs Amazon new for price-monitoring software, so I'm fairly sure it was the sold-by-Amazon items.

archae86
archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023934931
RAC: 1805324

RE: Apparently

Quote:
Apparently "nowinstock.net" shows how quickly items become unavailable once they become available.


Interesting that the real action there is shown on May 20, three days ago, but at varying times of day for different models at different sellers.

I have myself signed up for Newegg's notification on all six brands, and did not receive any messages.

We'll see what Friday brings. I think NVidia wants a first-day image rather like the lines snaking out of selling points on the first day of a Harry Potter release, as the image of frantic demand may help build sales enthusiasm. But they'll want to have gauged the demand well enough to meet much of it relatively soon.
