CPU speed impact on GPU processing

Joined: 4 May 18
Posts: 7
Credit: 9,888,094
RAC: 0


Just did some more runs with nothing running besides GW. I ran with ngpus set to both 0.5 and 1.0, gave it unlimited cores, and even manually reniced the process. The maximum CPU utilization of the process (ignoring the first few seconds for ingestion, and regardless of whether it's one or two WUs) peaked at 28%, with 18% GPU utilization; when I run 2 WUs, the GPU utilization splits the difference at about ~8% each.
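For reference, the ngpus setting above is typically controlled via an app_config.xml in the project directory, roughly along these lines (a sketch only; the app name shown is a placeholder, not the actual Einstein GW app name):

```xml
<app_config>
  <app>
    <!-- app name is a placeholder; use the name from client_state.xml -->
    <name>einstein_gw_app</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- 0.5 = two tasks share one GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- CPUs budgeted per GPU task -->
    </gpu_versions>
  </app>
</app_config>
```

Note that cpu_usage here only affects BOINC's scheduling budget, not how much CPU the science app actually consumes.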

My guess is the app was coded with some level of multithreading capability.

This is what it feels like: GW is running, and Chrome is using more cycles than it is. My system is so bored it's literally scheduling Apple background tasks like photo analysis and Time Machine backups. BOINC is currently #7 in CPU utilization, and I'm doing nothing besides posting here hahahaha.



Joined: 7 May 07
Posts: 140
Credit: 2,549,620,730
RAC: 1,502,011


I have started crunching Einstein on all of my mining rigs. The first thing I noticed was a huge difference in CPU utilization between the Windows and Linux apps. The following shows two systems, both TB85 motherboards with comparable CPUs. I have plenty of CPU to spare on Windows, but Linux is maxed out. **

AMD/Win10 vs. NVidia/Linux

** A third Linux system is an H110BTC with a comparable CPU but 9 GPUs. I excluded the slowest GPU, as the %CPU was consistently in the 80s on each GPU. That raised the %CPU back into the 90s for the remaining 8, and elapsed time dropped considerably. I also found no improvement from running two tasks on a GPU.
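Excluding a single GPU like that is usually done with an exclude_gpu block in cc_config.xml, something like the sketch below (the device number and project URL are illustrative assumptions for this setup):

```xml
<cc_config>
  <options>
    <exclude_gpu>
      <!-- URL and device_num are examples; match your own setup -->
      <url>https://einstein.phys.uwm.edu/</url>
      <device_num>8</device_num>  <!-- the slow P106-90 in this case -->
    </exclude_gpu>
  </options>
</cc_config>
```

BOINC re-reads cc_config.xml on "Read config files", so no client restart is needed.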


[EDIT] - Added performance plot

The two blue lines: an S9100 and five S9000 cards

Orange: a mix of three P102-100 cards (the thick orange line) and three mostly GTX 1070-class cards

Red: a mix of GTX 1060-class cards, with the 32-minute time being the P106-90 card I had to exclude

Three mining systems running Einstein

Joined: 6 Dec 05
Posts: 3,105
Credit: 6,179,523,253
RAC: 1,653,912


JStateson wrote:
I have plenty of CPU to spare on windows but Linux is maxed out **

You mention this as Windows vs. Linux, but quite likely the important distinction is instead Nvidia vs. AMD graphics card.  At least for the current Einstein GRP GPU applications, the build path employed for Nvidia cards handles processor support of the card using a polling loop that runs non-stop, doing nothing but asking "do you want service?" the great majority of the time.

Assuming the Linux build uses a similar construct, that explains the high utilization you observe.  It does not mean the application needs lots of processor power to support it.  However, it is pretty sensitive to latency.
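The polling construct described above can be sketched as follows. This is a minimal, hypothetical illustration in Python, not the actual Einstein app code; the real support thread polls the GPU driver, which is simulated here by a threading.Event:

```python
import threading
import time

def busy_poll(kernel_done: threading.Event, polls: list) -> None:
    """Spin non-stop asking 'do you want service?' until the GPU
    (simulated by an Event) signals completion."""
    while not kernel_done.is_set():
        polls[0] += 1  # stand-in for checking a GPU status register

kernel_done = threading.Event()
polls = [0]

t = threading.Thread(target=busy_poll, args=(kernel_done, polls))
t.start()

time.sleep(0.1)      # the GPU kernel "runs" for 100 ms
kernel_done.set()    # kernel finishes; the poll loop stops
t.join()

# The loop iterated many thousands of times while doing no useful
# work, which is why the support process shows near-100% CPU use.
print(f"poll iterations during one 100 ms kernel: {polls[0]}")
```

A blocking alternative (kernel_done.wait()) would use almost no CPU but adds wake-up latency, which matches the latency sensitivity noted above.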
