O3ASE1 - Windows versus Linux comparison on GTX 1650 Super

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0
Topic 225327

I thought I would compare Windows vs. Linux performance on two comparable GTX 1650 Supers, neither overclocked, and didn't see much difference.

 

Win10 (457.51 driver) with 399 Hz work units: 27 minutes 42 seconds

Ubuntu 20.04.2 (460.73 driver) with 299 Hz work units: 24 minutes 27 seconds

 

It looks like the difference can be accounted for by the difference in the work unit frequencies; otherwise the two OSes perform very similarly.

 

Also, each needed support from only one CPU core (the CPU was a Ryzen 3600 in both cases).

Adding an additional core did not do much.

 

Now I am wondering about AMD.  Has anyone used an RX 570, especially on Windows?

 

 

cecht
Joined: 7 Mar 18
Posts: 1421
Credit: 2445692943
RAC: 1497930

Jim1348 wrote:
Now I am wondering about AMD.  Has anyone used an RX 570, especially on Windows?

Ubuntu 20.04.2 (AMDGPU 21.1 driver) with deltaF 0.20 tasks: ~16 min avg. at 1X, ~9 min avg. per task at 2X.

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

Thanks.  That motivates me to pull out the GTX 1650 Super and try the RX 570. 

It looks like a worthwhile improvement, but I am reluctant to try AMD cards unless there is a reason for doing so.  With luck, Win10 will do as well as Ubuntu.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

Jim1348 wrote:

That motivates me to pull out the GTX 1650 Super and try the RX 570. 

Running 2X on the RX 570, I am getting a little over 13 minutes per work unit, or twice as fast as the GTX 1650 Super at 1X (I didn't think the GTX could handle 2X, so I didn't try it).

That is a nice improvement, but it appears from cecht's results that Ubuntu helps a little more on the AMD cards than on the Nvidia ones.
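
For anyone wanting to replicate the 2X setup: BOINC reads an app_config.xml from the project directory (projects/einstein.phys.uwm.edu under the BOINC data directory). A minimal sketch follows; the app name einstein_O3AS is an assumption here, so check the <name> tags in your client_state.xml for the exact name on your host:

<app_config>
  <app>
    <name>einstein_O3AS</name>  <!-- assumed app name; verify in client_state.xml -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- 0.5 GPU per task, so two tasks share one GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- budget one CPU core per GPU task -->
    </gpu_versions>
  </app>
</app_config>

After saving the file, apply it with Options > Read config files in the BOINC Manager, or restart the client.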

 

bozz4science
Joined: 4 May 20
Posts: 15
Credit: 64712558
RAC: 6043

I think it is mainly due to the CPU processing part at the end of the WU for the toplist calculation. In another thread, I noted that my reasonably fast 3700X needed between 5 and 7 minutes for the CPU part at the end, while my wingmen usually took much less time on slower CPUs. That got me thinking, and after extensive testing it seems that the only variable left is the OS. Even lowering the CPU load and freeing more threads for the O3 GW tasks didn't yield much improvement (~15 sec).

I think this illustrates my observation pretty well: work unit 544636413 (all three wingmen used a similar GPU):

dmargulis:
Win 10 / 5800X / GTX 1650:
1,395 sec (Toplist calc: ~6:17 min)

ServicEnginIC:
Ubuntu 20.04.2 LTS / i3-9100F / GTX 1650:
1,240 sec (Toplist calc: ~2:11 min)

mine:
Win 10 / 3700X / GTX 1660S:
1,543 sec (Toplist calc: ~6:33 min)

To me it seems that the CPU calculation at the end is much more efficient on Linux than on Windows. I will continue running some tests.

 

@Jim: I just took the liberty of looking quickly through the task logs of some of the WUs reported by your hosts. They support what I am seeing so far on my end as well: your Linux host took ~2:30-3:30 min for the toplist calculation, while your Win10 host took roughly twice as long at ~6 min.
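
Putting those three wingman results side by side, the toplist phase as a share of total run time makes the OS gap stand out. A quick back-of-the-envelope check in Python, using only the figures quoted above:

# Share of total run time spent in the end-of-task toplist phase,
# using the three wingman results for work unit 544636413 (times in seconds).
results = {
    "Win 10 / 5800X / GTX 1650":            (1395, 6 * 60 + 17),
    "Ubuntu 20.04.2 / i3-9100F / GTX 1650": (1240, 2 * 60 + 11),
    "Win 10 / 3700X / GTX 1660S":           (1543, 6 * 60 + 33),
}

for host, (total, toplist) in results.items():
    print(f"{host}: toplist {toplist} s = {toplist / total:.0%} of {total} s")

That works out to roughly 27% and 25% of the run spent in the toplist phase on the two Windows hosts, versus about 11% on the Linux host.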

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3681
Credit: 33841299400
RAC: 37073558

Good observation, bozz, and totally believable. Usually Linux is maybe 10-15% faster, but in some cases it shows a much bigger improvement. There's a similar 2-3x speedup on Universe@Home (CPU crunching) between newer Linux distributions and Windows. Looks like this is another one of those cases.


Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

bozz4science wrote:
To me it seems that the CPU calculation at the end is much more efficient on Linux than on Windows. I will continue running some tests.

Yes, I think that explains it.  Thanks for looking.

I used a fast CPU (Ryzen 3600) on Win10.  In fact, I had two cores free, so that should have been faster, except for the OS.

I will try to get my other RX 570 installed on an Ubuntu machine again.  It is always a challenge.

 

mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828173831
RAC: 202941

Jim1348 wrote:

That motivates me to pull out the GTX 1650 Super and try the RX 570. 

Running 2X on the RX 570, I am getting a little over 13 minutes per work unit, or twice as fast as the GTX 1650 Super at 1X (I didn't think the GTX could handle 2X, so I didn't try it).

That is a nice improvement, but it appears from cecht's results that Ubuntu helps a little more on the AMD cards than on the Nvidia ones.

I have a 1660 Ti in a laptop. What I did was suspend all the tasks except the one running; then, when it got to the CPU part, I started a second task, which picked up on the GPU like normal. That meant I could run two units during that stretch, saving some overall time. Each unit ran its full length, but overlapping two for a little while helped, although the babysitting was not fun.
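
As an aside, that kind of juggling can also be done from a terminal with boinccmd instead of clicking through the Manager. TASK_NAME below is a placeholder for a real task name as listed by --get_tasks, and the project URL is whatever --get_project_status reports for Einstein@Home:

boinccmd --get_tasks
boinccmd --task http://einstein.phys.uwm.edu/ TASK_NAME suspend
boinccmd --task http://einstein.phys.uwm.edu/ TASK_NAME resume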

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

mikey wrote:
Each unit ran its full length, but overlapping two for a little while helped, although the babysitting was not fun.

Very nice, but I am past the babysitting era by this time.

 

However, I did manage to transfer the RX 570 over to another Ryzen 3600 running Ubuntu 20.04.2 and got the 21.10 drivers installed. But the time for the first task was 31 minutes 23 seconds, when I was expecting around 16 minutes, as cecht got.

https://einsteinathome.org/host/12878436/tasks/0/0

 

I was reserving one CPU core also, though it used only 18%, so that was not the problem. I don't think the work units vary that much (this one was 302.50 Hz), but maybe they have gotten longer? They could use more data anyway.

 

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

Jim1348 wrote:
I was reserving one CPU core also, though it used only 18%, so that was not the problem. I don't think the work units vary that much (this one was 302.50 Hz), but maybe they have gotten longer? They could use more data anyway.

It seems that one core was a problem after all. When the GPU is supported by two cores, the times (for the 495 Hz work units) drop from 31 minutes to 22 minutes. That is still rather high. I suspect that whatever is running on the other cores still makes a difference even with two cores free, so I will have to investigate that further.
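
For reference, that two-core reservation can be made permanent with the same kind of app_config.xml sketched earlier in the thread (the app name is again an assumption; verify it in client_state.xml):

<app_config>
  <app>
    <name>einstein_O3AS</name>  <!-- assumed app name; verify in client_state.xml -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>2.0</cpu_usage>  <!-- tell the scheduler to keep two cores free per GPU task -->
    </gpu_versions>
  </app>
</app_config>

Note that cpu_usage only changes the scheduler's bookkeeping about how many cores to leave free; it does not make the science app itself use more threads.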

mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828173831
RAC: 202941

Jim1348 wrote:
I was reserving one CPU core also, though it used only 18%, so that was not the problem. I don't think the work units vary that much (this one was 302.50 Hz), but maybe they have gotten longer? They could use more data anyway.

It seems that one core was a problem after all. When the GPU is supported by two cores, the times (for the 495 Hz work units) drop from 31 minutes to 22 minutes. That is still rather high. I suspect that whatever is running on the other cores still makes a difference even with two cores free, so I will have to investigate that further.

Einstein always reserves a core for the GPU. Did you perhaps override that, and now, by giving it back, it works like it should?
