Discussion Thread for the Continuous GW Search known as O2MD1 (now O2MDF - GPUs only)

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3965
Credit: 47234412642
RAC: 65372947


Peter Hucker wrote:
Ian&Steve C. wrote:
now that I’ve got all my systems configured for GPUGRID, I’ll only really be crunching Einstein as a backup project when GPUGRID is out of work. I can load up a couple GW tasks to test, but completion times might not be really relevant since I run nvidia cards, RTX 2070s (w/ E5-2680v2) and RTX 2080s (w/ E5-2667v2) 

As you're on GPU Grid a lot, do you happen to know if there's any chance of them making an AMD GPU version?  I don't have any Nvidias, but I'd like to contribute to their project.

Don't think they have any plans for that right now. Not sure why; maybe they only have CUDA developers or something. Best to ask on their forums.

_________________________________________________________________________

cecht
Joined: 7 Mar 18
Posts: 1537
Credit: 2915755289
RAC: 2109762


Ian&Steve C. wrote:
..it was several months ago when I first signed up for Einstein and I was doing some testing on different hardware and configs. I noticed that I had good performance (~80% GPU utilization) with the GW GPU app on the system running 10x 2070s with the E5-2680v2 CPUs, but pretty poor performance (40-50% GPU utilization) on the system with 7x 2080s and just the E5-2630Lv2. Swapping to the E5-2667v2 pretty much solved that and brought the GPU utilization back up. But I've just stuck to running the Gamma Ray tasks since then, as they run better.

Thanks. Interesting results. It seems that for 10 GPUs running with a 10 core E5-2680v2 CPU and 7 GPUs running with an 8 core E5-2667v2, things worked well; it was only when you ran 7 GPUs with a 6 core E5-2630Lv2 that things didn't work so well.

So perhaps the solution is to ensure a full core is available for each GW GPU task. I assume you had hyperthreading enabled, so that tentative conclusion would apply just as well to 2 threads/task. This would fit with my observations on my 2 core/4 thread system.
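If that tentative conclusion holds, one way to enforce it on the BOINC side is an app_config.xml in the Einstein@Home project directory. A sketch below — note the app name `einstein_O2MD1` is an assumption on my part; check the `<app_name>` entries in your client_state.xml for the exact GW app name before using it:

```xml
<!-- Hypothetical app_config.xml: reserve one full CPU thread per GW GPU task.
     The app name below is a guess; verify it against client_state.xml. -->
<app_config>
  <app>
    <name>einstein_O2MD1</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>  <!-- one task per GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- budget a full core/thread per task -->
    </gpu_versions>
  </app>
</app_config>
```

After saving it, "Options > Read config files" in the BOINC Manager (or restarting the client) should pick it up.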

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519371204
RAC: 15292


Ian&Steve C. wrote:
Don't think they have any plans for that right now. Not sure why; maybe they only have CUDA developers or something. Best to ask on their forums.

I saw something saying they don't have time to do it.  Maybe they have enough power from the Nvidias anyway.  Oh well, my GPUs can run Einstein Gamma and Milkyway tasks.  Gravity not possible as I don't have enough CPU power or GPU RAM.

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3965
Credit: 47234412642
RAC: 65372947


cecht wrote:
Ian&Steve C. wrote:
..it was several months ago when I first signed up for Einstein and I was doing some testing on different hardware and configs. I noticed that I had good performance (~80% GPU utilization) with the GW GPU app on the system running 10x 2070s with the E5-2680v2 CPUs, but pretty poor performance (40-50% GPU utilization) on the system with 7x 2080s and just the E5-2630Lv2. Swapping to the E5-2667v2 pretty much solved that and brought the GPU utilization back up. But I've just stuck to running the Gamma Ray tasks since then, as they run better.

Thanks. Interesting results. It seems that for 10 GPUs running with a 10 core E5-2680v2 CPU and 7 GPUs running with an 8 core E5-2667v2, things worked well; it was only when you ran 7 GPUs with a 6 core E5-2630Lv2 that things didn't work so well.

So perhaps the solution is to ensure a full core is available for each GW GPU task. I assume you had hyperthreading enabled, so that tentative conclusion would apply just as well to 2 threads/task. This would fit with my observations on my 2 core/4 thread system.

I actually have two CPUs in that system: 2x 10c/20t with HT enabled, so 40 total threads for the system. CPU use is only about 25-27%, so there are lots of spare CPU cycles to handle anything the GPUs need; it's overkill for what it's doing now. I used to run CPU work, but I decided to stop all CPU processing, as GPUs are way more efficient and CPU work just seems like a waste of electricity these days.

_________________________________________________________________________

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3965
Credit: 47234412642
RAC: 65372947


Ian&Steve C. wrote:

just to update, I switched this system to run GW tasks. I watched a couple of them (not VelaJr) running just 1 task per GPU (RTX 2070 fed by an E5-2680v2 @ 3.0GHz):

~11:30 run time

~75-78% GPU utilization

~1800MB VRAM use

~110-120% (spikes to 140) CPU thread utilization (meaning dipping into a second thread)

~2% PCIe utilization of a 3.0 x1 link

 

Also, not sure if you saw this at the end of the last page. You can track this system if you want. I'll leave it running GW on 2 GPUs until I get the parts I need to get it on GPUGRID full-time. Looks like the first two tasks ran for 11+ min, but all the ones after ran for about 9:50, at a different freq.

 

I'll have to wait for some of the VelaJr tasks to show up, I guess, to see how they do.
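For anyone who wants to log the same utilization and VRAM numbers over time, here's a minimal sketch. It assumes `nvidia-smi` is on the PATH (it ships with the standard Nvidia driver package); the query field names used are the standard ones in recent drivers:

```python
import csv
import io
import subprocess

def parse_smi_csv(text):
    """Parse `nvidia-smi --query-gpu=... --format=csv,noheader,nounits`
    output into (gpu_util_percent, vram_used_mib) tuples, one per GPU."""
    samples = []
    for row in csv.reader(io.StringIO(text)):
        util, mem = (field.strip() for field in row)
        samples.append((int(util), int(mem)))
    return samples

def sample_gpus():
    """Take one snapshot of GPU utilization (%) and VRAM use (MiB)."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_smi_csv(out)
```

Calling `sample_gpus()` in a loop with a sleep between samples gives a rough utilization log without needing the X Server Settings GUI.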

_________________________________________________________________________

cecht
Joined: 7 Mar 18
Posts: 1537
Credit: 2915755289
RAC: 2109762


Ian&Steve C. wrote:
Also, not sure if you saw this at the end of the last page. You can track this system if you want. I'll leave it running GW on 2 GPUs until I get the parts I need to get it on GPUGRID full-time. Looks like the first two tasks ran for 11+ min, but all the ones after ran for about 9:50, at a different freq. ...

Good, thanks for the updates. I see some tasks are even coming in around 8 min. Yes, I did miss your earlier post. It's nice to know that PCIe bandwidth isn't a bottleneck. How did you measure %PCIe use?

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3965
Credit: 47234412642
RAC: 65372947


The Nvidia X Server Settings application, which is included in the Nvidia driver package on Linux, will tell you the PCIe use. That's how I measured it.

 

It looks like the VelaJr tasks are the ones running quicker, ~8 min. Same system parameters otherwise though: same memory use, PCIe use, GPU use, etc.
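On headless boxes without the X Server Settings GUI, `nvidia-smi dmon -s t` prints per-GPU PCIe Rx/Tx throughput in MB/s (column names may vary by driver version), and turning that into a link-utilization percentage is simple arithmetic. A sketch, where the per-lane bandwidth figures are approximations after encoding overhead:

```python
# Approximate usable PCIe bandwidth per lane in MB/s, by generation
# (after 8b/10b or 128b/130b encoding overhead). Rough figures only.
PER_LANE_MBS = {1: 250, 2: 500, 3: 985, 4: 1969}

def pcie_util_percent(throughput_mbs, gen, lanes):
    """Observed throughput as a percentage of the link's practical bandwidth."""
    return 100.0 * throughput_mbs / (PER_LANE_MBS[gen] * lanes)

# e.g. ~20 MB/s over a PCIe 3.0 x1 link works out to about 2% utilization,
# consistent with the figure reported earlier in the thread.
```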

_________________________________________________________________________

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519371204
RAC: 15292


Ian&Steve C. wrote:
I actually have two CPUs in that system: 2x 10c/20t with HT enabled, so 40 total threads for the system. CPU use is only about 25-27%, so there are lots of spare CPU cycles to handle anything the GPUs need; it's overkill for what it's doing now. I used to run CPU work, but I decided to stop all CPU processing, as GPUs are way more efficient and CPU work just seems like a waste of electricity these days.

I sort of agree.  I use GPUs and not CPUs on projects that can use GPUs - Milkyway and Einstein.  But I use the spare CPU cores to do stuff that cannot run on GPU: LHC, Universe, and Rosetta.  Some stuff just can't be ported to GPU chips.

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.

Tom M
Joined: 2 Feb 06
Posts: 6471
Credit: 9596674871
RAC: 6265041


Quote:
Gravity not possible as I don't have enough CPU power or GPU RAM.

Maybe you can find another project that will work with a more modest CPU/RAM combo while still running E@H Pulsar#1?

 

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519371204
RAC: 15292


Tom M wrote:
Maybe you can find another project that will work with a more modest cpu/ram combo while still running E@H Pulsar#1?

They run Pulsar and Milkyway.  Between the two they always have something to do.  After that it would be either maths projects (I prefer physics or biology - not so sure finding an even bigger prime number helps anyone) or a non-Boinc project (which I can't be bothered doing).

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
