The O2-All Sky Gravitational Wave Search on GPUs - discussion thread.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

Thanks.  For some reason I did not find it earlier.  It seems more or less comparable to the 4900 seconds I am getting with my RX 570 on Windows.  As you say, development continues, and I hope they can offload more work to the GPU.

cecht
Joined: 7 Mar 18
Posts: 1432
Credit: 2468191925
RAC: 699655

Jim1348 wrote:
How about the RX 570?  I haven't seen any Linux times for it.  I could do that if necessary, though it is more convenient on Win7 for me.

Here's the post with Linux RX 570 times that I made here last month. A few days later I posted again to include RX 460 times, but there was an error in that one: the times listed for 3x RX 460 tasks were actually 2x times.

Ideas are not fixed, nor should they be; we live in model-dependent reality.

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7058594931
RAC: 1608770

Gary Roberts wrote:
I'm not so sure that Linux behaves any differently from Windows.

I plead a bit of brain fog for my earlier claim of greatly better Linux than Windows elapsed times on V1.07.

That claim, I think, is false.

However, I do think two other things, one pretty confidently, the other with diffidence.

Confidently I believe we have seen Linux hosts consuming far less CPU time to accomplish a V1.07 GPU task than otherwise comparable Windows hosts.

Diffidently, I think the well-known AMD advantage over Nvidia cards in Einstein GRP performance (relative to widely published gaming comparisons) may be reversed for GW GPU jobs on the current V1.07 application, at least for a few reasonably modern cards of each type.

In particular, a GTX 1060 does much better relative to an RX 570 on GW V1.07 than on current Einstein GRP, and an RTX 2080 does much better relative to a Radeon VII.

Yes, I'm changing my position.  I welcome corrections (supported by data) to my most recent positions.

Of course, if the project people revise the code, or adjust the compilation or other elements of the build path for the application, these observations could change tomorrow.

Betreger
Joined: 25 Feb 05
Posts: 987
Credit: 1433947593
RAC: 585209

This inconclusiveness concerning the relative merits of Nvidia vs. AMD cards is causing me to postpone moving a GTX 1060 to another host on a different project. That host is really crying for a second GTX 1060. I just don't know what to put in my Einstein host.

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

archae86 wrote:
Confidently I believe we have seen Linux hosts consuming far less CPU time to accomplish a V1.07 GPU task than otherwise comparable Windows hosts.

If you refer back to a fairly lengthy post I made some time ago, I stated that CPU times for GW GPU tasks are longer than the run times. I attribute this to more than one CPU being used for each work unit, much like the QC work units at GPUGrid, which also consume more than one CPU each. CPU time equals run time once you divide by the number of CPUs used: for example, 1144 sec CPU time / 1.1 CPUs = 1040 sec, which is very close to the run times. Since I'm not running a Windows machine on this project, I don't have Windows run times to check whether they are longer than those on a Linux machine. I would need to install Windows on a machine with some 2080s to compare those times.
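For anyone who wants to check that arithmetic, here is a minimal Python sketch using the figures quoted above (1144 s of CPU time, 1.1 CPUs per task); the helper function and numbers are purely illustrative, not anything the BOINC client or the Einstein app actually reports in this form.

def effective_run_time(cpu_time_s: float, cpus_per_task: float) -> float:
    """Estimate elapsed (run) time from total CPU time and the CPU share per task."""
    return cpu_time_s / cpus_per_task

cpu_time = 1144.0   # seconds of CPU time for one GW GPU task (figure from the post above)
cpus_used = 1.1     # fractional CPUs consumed per task
print(f"Estimated run time: {effective_run_time(cpu_time, cpus_used):.0f} s")  # ~1040 s, close to the observed run times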

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

archae86 wrote:
In particular, a GTX 1060 does much better relative to an RX 570 on GW V1.07 than on current Einstein GRP, and an RTX 2080 does much better relative to a Radeon VII.

That is interesting.  I just happened to have a GTX 1060 (3 GB) available on an Ubuntu 18.04 machine, and fed it with a single core of a Ryzen 2600 (it actually uses 104%).  The run time for 1.07 was 50 minutes (1X), and it used about 60 watts according to my nvidia-smi estimate.  That works out to an energy of 3000 watt-minutes per work unit.

On the other hand, my RX 570 (Win7) took 82 minutes per work unit and consumed 50 watts (GPU-Z), for 4100 watt-minutes of energy per work unit.

So the GTX 1060 is 4100/3000 = 1.37 times as efficient.  That is useful to know.  I wouldn't mind if they improved it further though.  Neither one is all that much more efficient than a CPU (maybe about twice as good).
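As a quick sanity check on the watt-minute comparison, here is a small Python sketch plugging in the numbers quoted above (50 min at about 60 W for the GTX 1060, 82 min at about 50 W for the RX 570). The power figures are the nvidia-smi and GPU-Z readings cited in the posts, not wall measurements, so the result is only approximate.

def energy_per_wu(runtime_min: float, power_w: float) -> float:
    """Energy per work unit in watt-minutes (runtime multiplied by average power)."""
    return runtime_min * power_w

gtx_1060 = energy_per_wu(runtime_min=50, power_w=60)  # 3000 W-min per work unit
rx_570 = energy_per_wu(runtime_min=82, power_w=50)    # 4100 W-min per work unit
print(f"GTX 1060: {gtx_1060:.0f} W-min/WU, RX 570: {rx_570:.0f} W-min/WU")
print(f"Relative efficiency (RX 570 / GTX 1060): {rx_570 / gtx_1060:.2f}")  # ~1.37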

Betreger
Joined: 25 Feb 05
Posts: 987
Credit: 1433947593
RAC: 585209

I haven't had an invalid on my GTX 1060 since the 19th (for a task sent on the 18th).  Methinks Bernd has been busy. The odds are I just jinxed it.

A couple of observations. 

The CPU run time is almost equal to the GPU time.

If I continue to run GWs exclusively, my RAC will probably bottom out around 25k or so. I would rather find a GW than a pulsar.

Betreger
Joined: 25 Feb 05
Posts: 987
Credit: 1433947593
RAC: 585209

Only 2 invalids since the 18th, and both were resends that first went out on the 9th. I don't know if they were using code from the 9th or the current code. In either case, things seem to be much better than when we first started out.

Betreger
Joined: 25 Feb 05
Posts: 987
Credit: 1433947593
RAC: 585209

No more invalids in the last 2 days; is it ready for prime time?

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7058594931
RAC: 1608770

Betreger wrote:
No more invalids in the last 2 days; is it ready for prime time?

Plenty of the rest of us are still getting plenty of invalids.  I don't know what drives changes in the rate for a given machine.  I don't think all of them reflect changes in the application, the data, or the validator.

Bernd advised us just once, quite a few days ago, of a moderate relaxation in the strictness of the validator, and I've seen nothing official since then about changes to either the application or the validator.
