Discussion Thread for the Continuous GW Search known as O2MD1 (now O2MDF - GPUs only)

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3953
Credit: 46804442642
RAC: 64229548

why is this host not properly

Why is this host not properly requesting GW GPU work?

https://einsteinathome.org/host/12803486/log

All of my hosts are on the same "venue" with the same settings, yet only one system seems to be consistently getting new work. I can't find what's different. They are all identical software-wise, just different hardware.

_________________________________________________________________________

Keith Myers
Joined: 11 Feb 11
Posts: 4964
Credit: 18720615373
RAC: 6410915

I'm in the same boat with my

I'm in the same boat with my daily driver.  I can't get GW work on it.  Yet the other two hosts have had no issues maintaining their caches.

 

Tom M
Joined: 2 Feb 06
Posts: 6441
Credit: 9571400448
RAC: 8350539

Ian&Steve C. wrote: why is

Ian&Steve C. wrote:

Why is this host not properly requesting GW GPU work?

https://einsteinathome.org/host/12803486/log

All of my hosts are on the same "venue" with the same settings, yet only one system seems to be consistently getting new work. I can't find what's different. They are all identical software-wise, just different hardware.

That is odd.

I am assuming you and Keith are running the "Pandora" client on all Linux systems.  Are the parameters identical on all systems?

Is the odd system out running Windows or Linux?

What happens if the odd system out (Linux) starts running the stock "All in One"?

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3953
Credit: 46804442642
RAC: 64229548

Well it’s moot now that the

Well, it's moot now that the admins turned GW back on, so all the systems have work again.
 

It was just really strange that one system continuously topped up its work on every single request, while the others struggled.
 

If it were just a lack of tasks, I would expect all systems to struggle. It's also strange that the one system that was getting work just fine was receiving all new work (_0 and _1), while the struggling systems were only getting resends. Very odd.

_________________________________________________________________________

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7221554931
RAC: 967170

Locality scheduling. If you

Locality scheduling. If you take a look, I suspect you'll notice that the systems that were being serviced differently had different base frequencies in their existing stock.
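
As a rough illustration of the locality-scheduling preference described above, here is a minimal Python sketch: the scheduler favours workunits whose base-frequency data files a host already has, so hosts with different existing stock can be serviced very differently when only some frequencies still have work. The function names and data layout are invented; this is not the actual Einstein@Home scheduler code.

```python
# Hypothetical, highly simplified model of the locality-scheduling preference.
# All names and structures are invented for illustration only.

def matching_work(host_frequencies, available_work):
    """Return only the workunits that need no new large data downloads."""
    return [wu for wu in available_work
            if wu["base_frequency"] in host_frequencies]

# Two hosts with different base-frequency data files already on disk:
host_a = {187.45, 187.50}            # frequencies (Hz) host A has cached
host_b = {312.10}                    # host B's existing stock

available = [{"name": "wu_187.50_0", "base_frequency": 187.50},
             {"name": "wu_187.50_1", "base_frequency": 187.50}]

print(matching_work(host_a, available))   # host A keeps topping up
print(matching_work(host_b, available))   # [] -- host B comes away empty
```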

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3953
Credit: 46804442642
RAC: 64229548

Ok, so why didn't those

OK, so why didn't those systems download the necessary base frequencies that were available? As far as I can tell, there's nothing I've done that prevents the server from sending me any frequency. If I already have the necessary data for work "A" but the server only has work "B" available, shouldn't the server send me the data for B instead of letting the system sit idle?

_________________________________________________________________________
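
The behaviour being asked for here would amount to a fallback step on top of the sketch above: if nothing matches the host's existing data, assign work from a fresh frequency band and send its data files along, rather than replying "no work available". Again, this is only a guess at the intent, not how the project's scheduler is actually implemented, and the data file name below is invented.

```python
# Hypothetical fallback, building on matching_work() from the sketch above.

def work_with_fallback(host_frequencies, available_work):
    matched = matching_work(host_frequencies, available_work)
    if matched:
        return matched, []                    # no new downloads needed
    if available_work:                        # fall back to a fresh band
        wu = available_work[0]
        data_files = [f"l1_{wu['base_frequency']:.2f}Hz"]   # invented file name
        return [wu], data_files               # the task plus the data it needs
    return [], []                             # genuinely no work anywhere
```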

Keith Myers
Joined: 11 Feb 11
Posts: 4964
Credit: 18720615373
RAC: 6410915

The mysteries of locality

The mysteries of locality scheduling.  I have 10 GB of existing base frequency files on the daily driver, yet it has been downloading new base frequencies non-stop for the past hour.

And that is for a cache size of ten tasks.

 

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117573779913
RAC: 35192372

We seem to be running a new

We seem to be running a new series of tasks after a fairly abrupt change - which probably explains the temporary 'outage window' while they were setting up the changeover.

Take a look at the task names.  The former series tasks were _O2MDFS2_Spotlight_.  The new series have the "S2" bit changed to "S3".

Some of my hosts were getting the former 'original' tasks (not resends) during this time, some were getting resends and some were getting the 'no work available' message - intermittently with resends.  Then the new series with the new data downloads started flooding in.

This all looked a bit 'unusual' so I put my work cache size back to a low setting to see what might eventuate.  I have enough on board to wait for the dust to settle :-).

Cheers,
Gary.
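
Going by the naming convention Gary points out (and the usual BOINC rule that a task name's trailing _0/_1 marks the original copies while higher suffixes are resends), a quick way to classify task names might look like the sketch below. The sample name is invented for illustration.

```python
# Hypothetical helper for eyeballing E@h GW task names: extract the series
# tag (S2 vs S3) and flag resends by the trailing replica number.
import re

def describe_task(task_name):
    series = re.search(r"_O2MDF(S\d+)_Spotlight_", task_name)
    replica = int(task_name.rsplit("_", 1)[-1])      # trailing _N
    return {
        "series": series.group(1) if series else "unknown",
        "resend": replica >= 2,                      # _0/_1 are the originals
    }

print(describe_task("h1_0437.10_O2MDFS3_Spotlight_437.20Hz_123_2"))
# {'series': 'S3', 'resend': True}
```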

Keith Myers
Joined: 11 Feb 11
Posts: 4964
Credit: 18720615373
RAC: 6410915

These new S3 tasks are

These new S3 tasks are running two minutes longer than the old S2 tasks; 7-8 minutes are needed now.

I wonder if they will reward larger credits.

 

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3953
Credit: 46804442642
RAC: 64229548

Keith Myers wrote:These new

Keith Myers wrote:

These new S3 tasks are running two minutes longer than the old S2 tasks; 7-8 minutes are needed now.

I wonder if they will reward larger credits.

They won't. E@h credit reward is based on the predetermined "Estimated computation size" only, and these have the same 144,000 GFlops estimate as before, so they will get the same 1000 credits.

 

That estimated FLOPs size really needs to be increased to a value more representative of the actual FLOPs required, at least double.
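
A back-of-the-envelope check of the numbers quoted above: 144,000 GFLOPs of "Estimated computation size" paying 1,000 credits implies a fixed ratio of 144 GFLOPs per credit. That ratio is inferred from this post alone, not an official project constant, but it shows how the award would scale if the estimate were doubled.

```python
# Ratio implied by the figures in this thread (not an official constant).
GFLOPS_PER_CREDIT = 144_000 / 1_000      # = 144.0

def credit_for(estimated_gflops):
    """Credit awarded if it scales only with the predetermined estimate."""
    return estimated_gflops / GFLOPS_PER_CREDIT

print(credit_for(144_000))   # 1000.0 -- the current estimate
print(credit_for(288_000))   # 2000.0 -- what a doubled estimate would pay
```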

 

I also notice that these new S3 tasks seem "less optimized" than the previous run, maybe requiring more CPU support. I see a noticeable dip in GPU utilization, which probably explains (at least in part) the longer run times.

 

Pretty hefty VRAM use too, about 2.5 GB. RIP to the 2 GB cards.
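
For anyone wanting to reproduce this kind of observation, a small polling loop like the one below can log GPU utilization and VRAM use while a task runs. It assumes an NVIDIA card and the nvidia-ml-py package (imported as pynvml); the sample count and 2-second interval are arbitrary choices.

```python
# Minimal GPU utilization / VRAM logger using the NVML Python bindings.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)        # first GPU in the system

try:
    for _ in range(10):                               # ten samples, then stop
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU util: {util.gpu:3d}%  "
              f"VRAM used: {mem.used / 1024**3:.2f} GiB "
              f"of {mem.total / 1024**3:.2f} GiB")
        time.sleep(2)
finally:
    pynvml.nvmlShutdown()
```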

_________________________________________________________________________
