Pascal again available, Turing may be coming soon

Gavin
Joined: 21 Sep 10
Posts: 191
Credit: 40644337738
RAC: 1

archae86 wrote:
First results from a 2080 running on the host I call Stoll9, which most recently was running a 1070 + a 1060

Looking at Stoll9's details page, the RTX 2080 is being reported as having only 4095 MB of memory (about half of what it should have), and I wonder if this is a consequence of requiring a newer BOINC client version to properly support this card, as per Richard Haselgrove's earlier post to this thread... Not that I think it has any real bearing on runtimes; it's just an observation, and I'm forever curious!

 

Keith Myers
Joined: 11 Feb 11
Posts: 4968
Credit: 18762173900
RAC: 7168991

No, nothing unusual with the Turing cards.  BOINC hasn't reported more than 4 GB of VRAM on any Nvidia card for the past three generations.

 

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2960276054
RAC: 713069

Re the 4 GB maximum report: BOINC is stuck on using 32-bit code for NVIDIA detection, and has been since day 1. That part isn't likely to change any time soon. (In fact, BOINC started by reporting negative memory for cards with 2 GB or above. Rom Walton remoted into my 2 GB GTX 670 to debug that bit of code.)

My code update for v7.14 modified the peak speed report for Turing cards only - not any other metric.
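
For the curious, here is a minimal sketch (not BOINC's actual detection code) of why a 32-bit byte count misreports large cards: a signed 32-bit value overflows at 2 GiB, and an unsigned one tops out at about 4095 MB, the figure shown on Stoll9's details page.

// Illustrative sketch only; not BOINC's actual detection code.
#include <cstdint>
#include <iostream>

int main() {
    // A 2 GiB card overflows a signed 32-bit byte count (2^31 bytes
    // doesn't fit in int32_t), so it can print as a negative number.
    int64_t two_gib = 2LL << 30;
    std::cout << static_cast<int32_t>(two_gib) << " bytes\n";

    // An unsigned 32-bit count tops out just below 4 GiB; capping an
    // 8 GiB card (like the RTX 2080) there reports 4095 MB.
    uint64_t eight_gib = 8ULL << 30;
    uint32_t capped = eight_gib > UINT32_MAX
                          ? UINT32_MAX
                          : static_cast<uint32_t>(eight_gib);
    std::cout << capped / (1024 * 1024) << " MB\n";
}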

Keith Myers
Joined: 11 Feb 11
Posts: 4968
Credit: 18762173900
RAC: 7168991

Some comments about the upcoming 411.70 drivers in an article at TechReport:

Quote:

Some more general issues remain unfixed with the 411.70 update. Users of GTX 1060 cards connected to AV receivers will find that those devices will switch to two-channel stereo mode if they allow audio output to remain idle for five seconds. GTX 1080 Ti cards might exhibit random DPC watchdog violations when used as members of a multiple-GPU setup on motherboards with PLX switches. Cursor corruption might appear in Firefox when a user hovers over certain links.

 

Worrisome for Intel HEDT board users with PLX chips running GTX 1080 Ti cards.  I have such a system, and it won't be seeing the 410-series drivers for quite a while.

 

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Keith Myers wrote:

Some comments about the upcoming 411.70 drivers in an article at TechReport:

Quote:

Some more general issues remain unfixed with the 411.70 update. Users of GTX 1060 cards connected to AV receivers will find that those devices will switch to two-channel stereo mode if they allow audio output to remain idle for five seconds. GTX 1080 Ti cards might exhibit random DPC watchdog violations when used as members of a multiple-GPU setup on motherboards with PLX switches. Cursor corruption might appear in Firefox when a user hovers over certain links.

 

Worrisome for Intel HEDT board users with PLX chips running GTX 1080 Ti cards.  I have such a system, and it won't be seeing the 410-series drivers for quite a while.

 

Negative Ghost Rider.....The pattern is full........

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7228018254
RAC: 1100764

Sorry I was a bit slow to post 2X, 3X, 4X improvement results.

These are specific not only to the current Einstein Windows code for Gamma-ray pulsar, but probably also to the specific sort of file from which current WUs are being formed.  For these files, the infamous pause at just below 90% indicated completion is very short.  I strongly suspect that running 2X with offset WU starting times pays off particularly during this pause, so the (small) 2X improvement I observed may well be below long-term Einstein average behavior.

Anyway, here is what I saw today, setting productivity at 1X to 1.0 as a reference:

1X: 1.000
2X: 1.031
3X: 1.035
4X: 1.037

With such modest improvement from higher multiplicity, I may well run the system at 1X long-term, and certainly will not go higher than 2X.  In any case, the search for overclock ceilings goes faster at lower multiplicity, so I've started up that ladder running 1X.

So far I have made it far enough up the core clock/memory clock improvement ladder to get 1.049 times the 1X default clock output.  

I've seen clear granularity in the reported core clock frequencies in the past, so I went looking for it before starting this time.  It appears that for my 2080 the core clock granularity is 15 MHz, so I've used that as my step size.  The memory clock granularity seems too small to observe (so maybe the reported values are spurious).  In any case, I've adopted what MSI Afterburner regards as a 30 MHz increment for memory clock trial steps.
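
The trial procedure amounts to a simple ladder search; a minimal sketch, with trial_runs_clean() standing in for the manual half-hour validation runs (its failure threshold is invented purely for illustration):

#include <iostream>

// Stand-in for one manual validation run (half an hour of WUs while
// watching for black screens or errored tasks); the +140 failure
// threshold here is invented for illustration.
bool trial_runs_clean(int offset) { return offset <= 140; }

int main() {
    const int step = 15;   // the observed core clock granularity
    int last_good = 0;
    for (int offset = step; offset <= 300; offset += step) {
        if (!trial_runs_clean(offset)) break;  // failed: stop climbing
        last_good = offset;                    // passed: remember this rung
    }
    std::cout << "highest clean offset: +" << last_good << " MHz\n";
}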

So far I've gotten one momentary black screen, and at that moment errored out one WU.  This was at my first try to get started on granularity detection.  It may be hinting that my current core clock level (1980 max so far) is not so far below a hard limit for my card.

Speaking of limits, if you look around at game-oriented Turing reviews, you may see a lot of discussion of power limit ceilings.  As we have often seen, the Einstein application appears to employ fewer of the GPU chip's resources than typical games do, so it tends to run at somewhat lower power for a given clock.  So far I've not made it even as high as 80% of the (unraised default) power limit.  I expect to run out of high-speed correct-function headroom before I run out of power limit headroom.

 

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Archae86

Are you saying that the average time at 2X was actually slower than at 1X?

Good luck with your OC, look forward to your results.
Z

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7228018254
RAC: 1100764

Zalster wrote:
Are you saying that the average time at 2X was actually slower than at 1X?

Not sure what you mean by "average time".  Since it was working on two tasks at once when running 2X, the average elapsed time per task, of course, was almost twice as long.  But since it was a bit less than double, the Einstein productivity was higher, in the ratio I posted.

To be really specific, I saw 8:16 average elapsed time at 1X at stock clocks, and 16:02 running 2X.
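
Those two numbers are where the 1.031 figure in the earlier table comes from; a minimal sketch of the arithmetic:

#include <iostream>

int main() {
    // Average elapsed times reported above, converted to seconds.
    const double t1 = 8 * 60 + 16;   // 496 s at 1X
    const double t2 = 16 * 60 + 2;   // 962 s at 2X
    // Running 2X finishes two tasks in t2, so relative throughput is:
    std::cout << 2.0 * t1 / t2 << "\n";   // ~1.031, matching the table
}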

As I climb the overclocking ladder, I'm down to about 7:50 at the moment (for tasks run when I am not actively using the PC; this is my daily driver).

I'll retry 2X when I've found the clock rate ceilings.

 

mmonnin
Joined: 29 May 16
Posts: 291
Credit: 3420056540
RAC: 3661387

Thanks for the numbers. It seems like an AMD RX card would still be preferred over an RTX NV card due to the upfront cost.

The Anand and Phoronix FAHBench numbers are really far apart from each other. Linux hasn't provided that much of a boost for FAH in the past.

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7228018254
RAC: 1100764

I think I've bumped into the core clock rate ceiling for my 2080 card under current conditions.  After running for half an hour at a requested +140 core clock (a reported 2010 MHz), my attempt to go up one rung to +155 gave multiple oddities, including a prompt transient in the reported clock rate.  Then I noticed the red line indicating that I had errored out the in-progress WU.

So I am currently continuing up the memory clock ladder, using a core clock dialed back slightly to +125 (which gives 1995 MHz).  Based on a report or two from early benchmarkers, my card may have considerably more memory clock steps to go (I'm currently trialing at +300, which gives 1775 MHz as reported by GPU-Z, or 7100 on the scale some other applications use).
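
A minimal sketch of the clock arithmetic in these posts; the ~1870 MHz base is inferred from the two reported offset/clock pairs (not read from the card), and the 7100 figure is simply four times the GPU-Z value:

#include <iostream>

int main() {
    // Inferred base: 2010 - 140 = 1995 - 125 = 1870 MHz.
    const int base_core = 1870;
    std::cout << base_core + 140 << " MHz\n";  // 2010: the half-hour +140 trial
    std::cout << base_core + 125 << " MHz\n";  // 1995: the dialed-back setting

    // GPU-Z reports the memory clock directly; the "7100" scale some
    // applications use is four times the GPU-Z figure.
    const int gpuz_mem = 1775;                 // GPU-Z reading at +300
    std::cout << gpuz_mem * 4 << " effective\n";  // 7100
}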

I consider the relatively small core clock overclock available, in percentage terms, to be an unwelcome surprise.  Of course, the specific sample of the GPU chip on the card I received gets a vote on this, so other users are likely to see different results on this point than I did.
