CUDA Performance Disparity

Holmis
Joined: 4 Jan 05
Posts: 1118
Credit: 1055935564
RAC: 0

Jonathan Jeckell wrote:
The one with all of the nasty warnings about how you will end the world if you change the setting?

That's the one! Note that changing the setting on the website and then clicking "Update" in BOINC will not make it take effect; you have to download new work for the GPU first, and once you do it will affect all GPU tasks in your cache.

mmonnin
Joined: 29 May 16
Posts: 291
Credit: 3232287015
RAC: 55653

I prefer to use the app_config file to run more than one task at a time. It takes effect immediately and can be set differently on many, many machines.
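A minimal app_config.xml along those lines, placed in the Einstein@Home project directory, might look like the sketch below. The app name is an assumption; check client_state.xml (or the project's Applications page) for the exact name your host is running. A gpu_usage of 0.5 means two GPU tasks share one card, and cpu_usage is just the CPU budget the scheduler reserves per GPU task.

    <app_config>
      <app>
        <!-- app name is illustrative; confirm it in client_state.xml -->
        <name>einsteinbinary_BRP4G</name>
        <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
          <cpu_usage>0.2</cpu_usage>
        </gpu_versions>
      </app>
    </app_config>

BOINC Manager re-reads it via Options -> Read config files, so, as noted above, no new work fetch is required.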

 

What does nvidia-settings show as the GPU utilization, PCI-E link speed and PCI-E utilization?
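For reference, roughly the same numbers can be pulled from a terminal; the commands below are a sketch from memory, and the attribute/field names may vary with driver version:

    nvidia-settings -q GPUUtilization -q PCIECurrentLinkWidth
    nvidia-smi --query-gpu=utilization.gpu,pcie.link.gen.current,pcie.link.width.current --format=csv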

Jonathan Jeckell
Joined: 11 Nov 04
Posts: 114
Credit: 1341945207
RAC: 0

This is weird.  The setting keeps changing itself back to 1.0 on the settings page.

I converted the machine in question to a Hackintosh running Mac OS Sierra this weekend, and it is now running CUDA tasks twice as fast.

So to be completely honest, I don't know if the setting actually took effect (it's still running only one at a time), but it's burning through them twice as fast with CUDA 8 on Mac OS as it did with CUDA 7.5 on Ubuntu 14.04.

I fully admit I could have had the configuration screwed up under Linux, but I doubt it. I scoured Ubuntu and Linux help boards to make sure nouveau wasn't conflicting with it. And I saw a lot of references to friction between NVIDIA and the Linux crowd over drivers.

Jonathan Jeckell
Joined: 11 Nov 04
Posts: 114
Credit: 1341945207
RAC: 0

Well, I'll have to switch hard drives again to find out, but the BIOS and some other tools told me I had the card in the correct PCIe x16 slot, and the card temperature never went above 39°C.

I tried to change the setting on the project preferences page to a higher number, but it keeps reverting to 1.0. Watching top on the command line, I saw the GPU task a little more often, but I didn't run it long enough to determine whether it was doing more than one task at a time.

Over the weekend I converted the machine to a Hackintosh.  It's still only running one task at a time, but TWICE AS FAST with CUDA 8 on Mac OS Sierra as it was with CUDA 7.5 under Ubuntu 14.04.  As mentioned above, I admit I could've had a bad configuration, but I don't think so.  It's OBE anyway.

 

Here are the results:

i7-5820k with GTX 960 (running 12 CPU units + 1 GPU unit)

Linux: BRP4G average time: 3,465 seconds

Mac OS: BRP4G average time: 1,754 seconds
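That works out to 3,465 / 1,754 ≈ 1.98, so the Mac OS setup really is almost exactly twice as fast on the same hardware.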

 

And again, the i3-4160 with a GTX 950 running Windows 10 (4 CPU units + 1 GPU unit) averages 2,504 seconds per BRP4G task.

So the GTX 960 on the i7 is running right where you'd expect it to be under Mac OS Sierra vis-a-vis the GTX 950 based on their respective benchmarks.

 

I'm sure CUDA doesn't actually suck that badly under Linux, so I must've missed something.  Either way, it's fixed under Mac OS, and I'm extremely pleased with it.

mmonnin
Joined: 29 May 16
Posts: 291
Credit: 3232287015
RAC: 55653

I think it was mentioned that that CUDA version is slower for SETI as well.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5851
Credit: 110313974520
RAC: 29692305

Jonathan Jeckell wrote:
I tried to change the setting on the project preferences page to a higher number, but it keeps reverting back to 1.0.

Were you trying to change it to 2 (or higher) rather than 0.5 (or 0.33, etc)???

It's a GPU utilization factor: it represents the fraction of a GPU to be 'used' by a single task, not the actual number of concurrent tasks.
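For example, a factor of 0.5 means each task uses half a GPU, so two run at once; 0.33 allows three.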

 

Cheers,
Gary.
