Is there a problem with wingmen on GWs on GPUs?

Betreger
Joined: 25 Feb 05
Posts: 992
Credit: 1591522360
RAC: 769410
Topic 224350

My GTX1660super host has 1440 GWs pending, with a very high percentage of them waiting to be sent out to a wingman. This seems to have started a bit before Christmas. Oddly, my other host with a pair of 3 GB GTX1060s does not seem to suffer.

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3958
Credit: 46993872642
RAC: 64845444

My RTX 3070 host also has a high percentage of GW GPU pendings. It's been running for several months now and has more pendings than valids: about 1600 pending and 1400 valid (no invalids).

This has been an issue with GPU GW for a while, from what I remember. Part of it is that the points chasers all run the GR tasks instead, since you get 2-3x more credit per time invested, and GR doesn't require much CPU support, whereas with GW the CPU speed/power matters more in order to feed data to the GPU and get good crunch times. So with GW you're putting more effort, and ultimately more money, into a system that produces less credit. That disincentivizes some people from running GW, and you're left with a smaller pool of wingmen, leading to increased pendings and longer validation delays.
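
As a back-of-the-envelope illustration of that credit gap (every number below is a made-up placeholder, not a real Einstein@Home value), a quick Python sketch:

# Hypothetical credit-per-hour comparison between GR and GW GPU tasks.
# All per-task credits and run times are placeholders for illustration only.

def credit_per_hour(credit_per_task, run_time_minutes):
    """Credit earned per wall-clock hour at a given per-task runtime."""
    return credit_per_task / (run_time_minutes / 60.0)

# Placeholder numbers: GR pays more per task and keeps the GPU busy with
# little CPU support, while GW needs the CPU to feed the GPU.
gr = credit_per_hour(credit_per_task=3000, run_time_minutes=20)
gw = credit_per_hour(credit_per_task=1000, run_time_minutes=18)

print(f"GR: {gr:.0f} credit/hour")   # 9000
print(f"GW: {gw:.0f} credit/hour")   # ~3333
print(f"ratio: {gr / gw:.1f}x")      # lands in the 2-3x range described above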

_________________________________________________________________________

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7225234931
RAC: 1040422

I've seen such a situation temporarily both on a machine of my own and on the machine of a new participant who posted here in alarm.

The recipe seems to be that a fast machine which is downloading incrementally, and not in big gulps, can get a huge number of sequential tasks from one base frequency. Generally the server will assign the work at the top of the list to another machine. But if that machine tends to error out, or is just slow to reply, then validations won't flow.

It gets worse because the server doesn't seem very quick to assign yet more new machines to that base frequency. And even when it does, they likely don't get assigned work at nearly the rate at which the original machine got it, so things fall yet farther behind.

In the two cases which I watched in detail, it took about three weeks for things to shake out, as eventually the server assigned quorum partners in sufficient number who actually returned valid work.

Gary Roberts has suggested elsewhere, I think, that if you download tasks in giant gulps, the server is more likely to give you a mix of base frequencies: there is a smallish buffer (in round numbers, something like 100-200 tasks) that it draws from for any given request, and if that request consumes all tasks of your host's preferred base frequencies (preferred because you already have the data files), it will send whatever else remains in the buffer.
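
A toy simulation of how I picture that feeder buffer behaving (the buffer size, the frequency labels, and the preference rule here are all my own guesses, not actual scheduler code):

import random
from collections import Counter

# Toy model: the feeder buffer holds a limited mix of base frequencies, and a
# work request is filled from the host's preferred frequency first, spilling
# into the others only once those run out. Every detail here is an assumption.

BUFFER_SIZE = 150                                       # "roughly 100-200 tasks"
FREQUENCIES = ["184.10", "184.15", "184.20", "184.25"]  # made-up labels

def serve_request(n_tasks, preferred):
    buffer = [random.choice(FREQUENCIES) for _ in range(BUFFER_SIZE)]
    # hand out preferred-frequency tasks first, everything else after
    ordered = sorted(buffer, key=lambda f: f != preferred)
    return Counter(ordered[:n_tasks])

print("small gulp:", serve_request(20, preferred="184.15"))
print("giant gulp:", serve_request(140, preferred="184.15"))
# The small gulp comes back all one frequency; the giant gulp exhausts the
# preferred tasks in the buffer and mixes in whatever else remains.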

Almost every other consideration, however, favors small gulps, so unless this really bothers you a lot, I suggest patience.

 

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3958
Credit: 46993872642
RAC: 64845444

Arch, did you give up running GW? Any particular reason?

In your referenced thread, you made a post claiming to run one of your RX 5700 systems 24/7 on GW, but none of your systems now show any GW work. Was the low credit reward too much to bear?

_________________________________________________________________________

Betreger
Joined: 25 Feb 05
Posts: 992
Credit: 1591522360
RAC: 769410

A couple of comments:

I was hypothesizing that hosts taken offline for the holidays were causing this, and Archae gave a very reasonable alternative.

I take pride in my low RAC from crunching GWs. After all, a GW is first prize; a pulsar is second.

I shall be patient and ride this thing out.

 


 

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3958
Credit: 46993872642
RAC: 64845444

I agree. The one host I leave running 24/7 gets both GW and GR tasks to run on the RTX 3070, but the scheduler seems to send it GW 90% of the time. I don't mind; the science benefit seems more important for GW to me. I wish others thought this way, but it'll probably stay this way until the credit reward between the two task types is normalized. The flops estimate on the GW tasks is way low compared to the flops it actually takes to crunch them.
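
A rough sketch of what I mean (the assumption that credit simply tracks the flops estimate is mine, and all numbers are placeholders):

# If credit is pegged to a task's flops *estimate* but the task actually takes
# more flops to crunch, the credit earned per flop you really do goes down.
# The pegging rule and the numbers below are my own assumptions/placeholders.

CREDIT_PER_ESTIMATED_PFLOP = 200.0   # placeholder conversion rate

def credit_per_actual_pflop(estimated_pflops, actual_pflops):
    awarded = estimated_pflops * CREDIT_PER_ESTIMATED_PFLOP
    return awarded / actual_pflops

print(credit_per_actual_pflop(estimated_pflops=5, actual_pflops=5))    # 200.0
print(credit_per_actual_pflop(estimated_pflops=5, actual_pflops=15))   # ~66.7, estimate 3x low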

_________________________________________________________________________

San-Fernando-Valley
Joined: 16 Mar 16
Posts: 409
Credit: 10210163455
RAC: 22826969

Yes, be patient.

I have over 6000 pending GRs!  I'm not worrying about that.

 

I tend to switch every month from GR to GW and back.

That way each gets its fair share.

 

It is now end of the month and I am switching over to GW.

Maybe this will help you ...

Betreger
Joined: 25 Feb 05
Posts: 992
Credit: 1591522360
RAC: 769410

"It is now end of the month

"It is now end of the month and I am switching over to GW.

 

Maybe this will help you ..."

I hope so, wingmen are needed.

mikey
Joined: 22 Jan 05
Posts: 12692
Credit: 1839096599
RAC: 3693

Betreger wrote:

"It is now end of the month and I am switching over to GW.

 

Maybe this will help you ..."

I hope so, wingmen are needed. 

Actually I have 960 Gamma-ray pulsar binary search #1 on GPUs v1.22 () windows_x86_64 tasks waiting for wingmen to finish up their part.

I also have 17 Gravitational Wave search O2 Multi-Directional GPU v2.09 () windows_x86_64 tasks waiting for wingmen,

and 5 Gamma-ray pulsar search #5 v1.08 () windows_intelx86 tasks waiting for wingmen.

Waiting for wingmen is always a process here.

Betreger
Joined: 25 Feb 05
Posts: 992
Credit: 1591522360
RAC: 769410

Waiting solved the problem: pendings are reasonable, RAC shot up and will settle down to where it belongs, and the search for those sneaky GWs continues.
