Pascal again available, Turing may be coming soon

Keith Myers
Joined: 11 Feb 11
Posts: 4,885
Credit: 18,416,179,400
RAC: 5,877,297


The Phoronix power consumption tests show that the 2070 and 2080 cards may be more power efficient than Pascal for compute. The 2080 Ti, not so much; it comes out roughly equivalent to the Vega 64 on compute loads, which is not friendly to the power bill.
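For comparing cards on actual distributed-computing work, the number that matters for the power bill is tasks per day per watt. A minimal sketch of that arithmetic in Python, with purely hypothetical runtimes and wall-power figures (not numbers from the Phoronix or AnandTech reviews):

# Rough performance-per-watt comparison for GPU compute hosts.
# All runtimes and wattages below are hypothetical placeholders, not measurements.
SECONDS_PER_DAY = 86400

def tasks_per_day(task_seconds, concurrent_tasks=1):
    """Completed tasks per day for a given per-task runtime."""
    return concurrent_tasks * SECONDS_PER_DAY / task_seconds

cards = {
    # card: (seconds per task, measured wall watts) -- placeholders only
    "GTX 1080 Ti": (600, 280),
    "RTX 2080":    (520, 250),
    "RTX 2080 Ti": (430, 320),
}

for name, (secs, watts) in cards.items():
    tpd = tasks_per_day(secs)
    print(f"{name}: {tpd:.1f} tasks/day, {tpd / watts:.3f} tasks/day per watt")

Swap in your own measured task times and wall power to see whether a Turing card actually improves the efficiency picture on your host.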

 

DanNeely
Joined: 4 Sep 05
Posts: 1,364
Credit: 3,562,358,667
RAC: 0


Keith Myers wrote:

Read the AnandTech benchmarks and pay attention to the Compute portion of the benchmarks.  The Folding@Home, the N-body Physics and the OpenCL Geekbench4 results are up and are pertinent to distributed computing tasks.  Geekbench4 results are 200% better than Pascal.

Actual finished tasks at each project will determine the real benefit of Turing.

 

Geekbench4 seems to be an extreme outlier; the rest of the compute benchmarks are roughly in line with the gaming ones.

Keith Myers
Joined: 11 Feb 11
Posts: 4,885
Credit: 18,416,179,400
RAC: 5,877,297


The Geekbench4 OpenCL benchmarks are very limited in scope, mostly focusing on image manipulation. This is one of my benchmark scores for my 1080 Ti.

OpenCL Performance

Overall score: 257227

Sobel: 656875 (28.9 Gpixels/sec)
Histogram Equalization: 339839 (10.6 Gpixels/sec)
SFFT: 56948 (142.0 Gflops)
Gaussian Blur: 183544 (3.22 Gpixels/sec)
Face Detection: 52115 (15.2 Msubwindows/sec)
RAW: 2586482 (25.0 Gpixels/sec)
Depth of Field: 1305527 (3.79 Gpixels/sec)
Particle Physics: 46677 (7379.0 FPS)
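Most of those sub-tests are simple per-pixel image kernels, which is why I call the suite limited in scope. As a rough illustration of the sort of work a Sobel pass does, here is a minimal NumPy sketch (CPU-side only, not the actual Geekbench OpenCL kernel):

# Minimal Sobel edge-detection pass in NumPy -- an illustration of the kind
# of per-pixel image work the Geekbench OpenCL sub-tests measure, not the
# benchmark's actual kernel.
import numpy as np

def sobel(gray):
    """Return the gradient magnitude of a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=np.float32)
    # Convolve the interior pixels with the two 3x3 kernels.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx = float(np.sum(patch * kx))
            gy = float(np.sum(patch * ky))
            out[y, x] = (gx * gx + gy * gy) ** 0.5
    return out

img = np.random.rand(64, 64).astype(np.float32)  # stand-in for a real image
edges = sobel(img)
print(edges.shape, edges.max())

Work like this is embarrassingly parallel and very light on math per pixel, so it does not necessarily predict how a card handles the heavier FFT and physics workloads our projects run.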

 

archae86
Joined: 6 Dec 05
Posts: 3,156
Credit: 7,174,664,931
RAC: 713,086


On reviewing the actual space in my cases, I realized that Stoll7 and Stoll8 are very tight fits, or impossible, for cards as long as the 2080 and 2080 Ti offerings.  I also suspect that my candidate fleet redo would have left me at higher power consumption, while I'd actually prefer to go down a little.

On the other hand, the Stoll9 case is generously sized, is equipped with lots of fans and an 850 watt supply, so should easily handle the 2080 I just ordered for it.  Amazon was out of stock, so I don't know when I'll be able to provide initial results here.

Keith Myers
Joined: 11 Feb 11
Posts: 4,885
Credit: 18,416,179,400
RAC: 5,877,297


CUDA/OpenCL compute benchmarks are out at Phoronix.com.  These tests are more representative of our distributed computing workloads, and they include comparisons to the 1080 Ti and Vega 64.

NVIDIA GeForce RTX 2080 Ti Shows Very Strong Compute Performance Potential

 

archae86
Joined: 6 Dec 05
Posts: 3,156
Credit: 7,174,664,931
RAC: 713,086


archae86 wrote:
the 2080 I just ordered for it.  Amazon was out of stock

I switched the order to NewEgg, which appears to claim stock, and estimates I'll receive the card Wednesday.

As I suspect the current flavor of Einstein GRP GPU work may have significant work content variation within a file, to facilitate comparisons I intend to run down my queue to a day between now and Tuesday, then take a big gulp, hoping to get a large number of WUs with substantially identical work content.  Then I'll do a closely monitored performance run to get Einstein production rate and system power consumption on my existing overclocked 1070 + 1060 6GB configuration, then remove the overclocks and take another measurement, before opening the box and plugging in the 2080.

For simplicity, I'll start at 1X running at factory defaults, but if it works (including validated results), I'll quickly change to 2X, checking only to see whether that is more productive.  If so I'll delay fiddling with the knobs and get a day's worth of stability indication before seeing if I can twist the tail.
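For anyone repeating that 1X-versus-2X check, the comparison is just throughput: 2X only wins if the per-task time less than doubles. A minimal sketch of the arithmetic, with hypothetical placeholder runtimes rather than real Einstein GRP timings:

# Compare effective throughput at 1X vs 2X task concurrency on one GPU.
# The runtimes below are hypothetical placeholders, not measured GRP times.
def tasks_per_hour(seconds_per_task, concurrency):
    """Tasks completed per hour when `concurrency` tasks run at once."""
    return concurrency * 3600.0 / seconds_per_task

one_x = tasks_per_hour(seconds_per_task=600, concurrency=1)   # e.g. 10 min/task at 1X
two_x = tasks_per_hour(seconds_per_task=1050, concurrency=2)  # e.g. 17.5 min/task at 2X

print(f"1X: {one_x:.2f} tasks/hour")
print(f"2X: {two_x:.2f} tasks/hour")
print("2X is more productive" if two_x > one_x else "stay at 1X")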

In my optimistic frame of mind, this thing could possibly be in the Vega+ class for Einstein productivity.  In my pessimistic mode, I may fail to equal my current output, while burning more power, and getting more noise.  Most likely it will fall somewhere between.

The specific card I ordered was the Gigabyte GV-N2080WF3OC-8GC.  This choice comes from no deep assessment of superiority.  I am allergic to high card temperatures and to fan noise, so I wanted a three-fan card, and this one was the cheapest 2080 that appeared to be available for shipment.  Were a 2080 Ti readily available, I'd have taken the chance, as this box has plenty of power supply and generous ventilation, but this way I'll get a read on the key question of whether Turing happens to be a nice fit to the current Einstein GRP code or not.

Aside from my own interest, I hope to generate some useful information for the Einstein community (maybe help save others from wasting their money if it proves a poor fit).  So I'll consider suggestions for trials and reports.

Keith Myers
Joined: 11 Feb 11
Posts: 4,885
Credit: 18,416,179,400
RAC: 5,877,297


I will be watching for your results.  Thank you for being a "pioneer" . . . that is, with the arrows in your back :-) . . . in finding out where these new Turing cards fall with respect to actual production distributed computing work.

 

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,870
Credit: 115,697,596,703
RAC: 34,746,354


archae86 wrote:
... I hope to generate some useful information for the Einstein community ....

Thank you very much for doing this.  I'm sure many will be interested.

You, sir, are a scholar and a gentleman! :-).

 

Cheers,
Gary.

mikey
Joined: 22 Jan 05
Posts: 12,531
Credit: 1,838,581,893
RAC: 3,639


archae86 wrote:
Aside from my own interest, I hope to generate some useful information for the Einstein community (maybe help save others from wasting their money if it proves a poor fit).  So I'll consider suggestions for trials and reports.

A guy at PrimeGrid got one and is doing similar testing over there http://www.primegrid.com/forum_thread.php?id=8183&nowrap=true#120682 and here http://www.primegrid.com/forum_thread.php?id=4305&nowrap=true#120679

He discovered it is faster, but not SCREAMING OMG faster; it's the typical evolution in crunching speed we have seen over the years as new generations of GPUs come out. It will be interesting to see how Einstein and the other projects do as more and more people get the new cards.

DanNeely
Joined: 4 Sep 05
Posts: 1,364
Credit: 3,562,358,667
RAC: 0


mikey wrote:

A guy at PrimeGrid got one and is doing similar testing over there http://www.primegrid.com/forum_thread.php?id=8183&nowrap=true#120682 and here http://www.primegrid.com/forum_thread.php?id=4305&nowrap=true#120679

He discovered it is faster, but not SCREAMING OMG faster; it's the typical evolution in crunching speed we have seen over the years as new generations of GPUs come out. It will be interesting to see how Einstein and the other projects do as more and more people get the new cards.

 

Looks like about what I'd expected.  The 2080 being ~20% faster than the 1080 Ti (the closest match pricewise; the new 2080 Ti is priced like the Titan, which has transitioned from a top-end gamer card to an entry-level scientific compute card and soared in price as a result) is roughly what most gaming benchmarks showed.

 

That's much smaller than the ~50% bump that's been the average yearly boost for the last decade or so.  The reason is that NVidia spent about 50% of the GPU's compute capacity on two new features that do nothing for current games, and which will probably be useless for most BOINC projects.

 

------------- Remainder of post is just about gaming on Turing, can skip if you don't care about that ----------

The 25% that went to the Tensor cores will probably be a nice bonus for gamers over the next few months, because it moves one of the most expensive steps of rendering frames off the main GPU cores (the few demos for it are showing about a 33% boost in FPS); the caveat is that it depends on NVidia building the AI models for each game and baking them into the drivers.  Unfortunately, those weren't available on release day.

The 25% that went to the ray tracing cores, OTOH, is mostly aspirational, there to help devs start building up the feature for the next generation or two of GPUs.  The issue is that even the 2080 Ti struggles to do ray-traced lighting at 1080p60, and gamers who buy $1200 (or even $600) GPUs don't do it for just 1080p60; a $300 card can handle that quite well.  While there's almost certainly room to improve implementations over time, 1440p60 would need the ray tracing throughput to roughly double, and 1440p144 or 4k60 would need it to quadruple.  That's almost certainly not going to happen until the 7nm and 5nm process shrinks, when NVidia can spend a lot more transistors on it (or put enough to do 1080p60 on a more mainstream $300ish card).
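Those double/quadruple figures follow from simple pixel-rate arithmetic (resolution times refresh rate); a quick sketch of that calculation:

# Pixel throughput needed for each target, relative to 1080p60.
# Simple resolution * refresh arithmetic; real ray-tracing cost also depends
# on scene complexity, so treat these ratios as rough scaling only.
targets = {
    "1080p60":  (1920, 1080, 60),
    "1440p60":  (2560, 1440, 60),
    "1440p144": (2560, 1440, 144),
    "4k60":     (3840, 2160, 60),
}

base = 1920 * 1080 * 60  # pixels per second at 1080p60

for name, (w, h, hz) in targets.items():
    rate = w * h * hz
    print(f"{name}: {rate / 1e6:8.0f} Mpixels/s, {rate / base:.1f}x 1080p60")

Running that gives roughly 1.8x for 1440p60 and about 4x for 1440p144 and 4k60, which is where the "double" and "quadruple" figures come from.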

 
