why is this host not properly requesting GW GPU work?
https://einsteinathome.org/host/12803486/log
All of my hosts are on the same "venue" with the same settings, yet only one system seems to be consistently getting new work. I can't find what's different. They are all identical software-wise, just different hardware.
I'm in the same boat with my daily driver. I can't get GW work on it. Yet the other two hosts have had no issues maintaining their caches.
Ian&Steve C. wrote: why is this host not properly requesting GW GPU work?
That is odd.
I am assuming you and Keith are running the "Pandora" client on all Linux systems. Are the parameters identical on all systems?
Is the odd system out running Windows or Linux?
What happens if the odd system out (Linux) starts running the stock "All in One"?
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Well, it's moot now that the admins turned GW back on, so all the systems have work again.
It was just really strange that one system continuously topped up its work on every single request, while the others struggled.
If it was just a lack of tasks, I would expect all systems to struggle. And it's strange that the one system that was getting work just fine was receiving all new work (_0 and _1 tasks), while the struggling systems were only getting resends. Very odd.
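For context on the _0/_1 point: BOINC result names end with a replica number, and with the usual initial replication of two, the freshly generated copies of a workunit go out as _0 and _1, while _2 and above are resends issued after a failure or timeout. A quick, purely illustrative way to sort a results list (the sample names below are invented):

```python
# The trailing number in a BOINC result name is the replica index.
# With an initial replication of 2, _0 and _1 are the original copies;
# _2 and above are resends issued after a failure or timeout.

def is_resend(result_name, initial_replication=2):
    replica = int(result_name.rsplit("_", 1)[1])
    return replica >= initial_replication

print(is_resend("h1_0433.10__O2MDFS2_Spotlight_433.20Hz_19_0"))  # False: original
print(is_resend("h1_0433.10__O2MDFS2_Spotlight_433.20Hz_19_3"))  # True: resend
```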
Locality scheduling. If you take a look, I suspect you'll notice that the systems being serviced differently had different base frequencies in their existing stock.
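Roughly speaking, locality scheduling means the server prefers to hand a host tasks that use the large base-frequency data files it has already downloaded, and only falls back to shipping new data when nothing matching is left for that host. A minimal sketch of that preference logic, not the actual BOINC scheduler code, with made-up file and task names:

```python
# Illustrative sketch of locality-style scheduling (not the real BOINC scheduler).
# The server first hands out tasks whose data files the host already holds,
# so hosts with different base frequencies on disk get served very differently.

def pick_tasks(host_files, available_tasks, limit):
    """host_files: set of data files already on the host.
    available_tasks: list of (task_name, required_files) pairs.
    Returns up to `limit` tasks, preferring ones that need no new downloads."""
    no_download = [t for t in available_tasks if t[1] <= host_files]
    needs_download = [t for t in available_tasks if not t[1] <= host_files]
    chosen = no_download[:limit]
    if len(chosen) < limit:
        # Only when matching work runs out does the server ship new base frequencies.
        chosen += needs_download[:limit - len(chosen)]
    return chosen


host = {"base_freq_0433.10.dat"}  # hypothetical base-frequency file already on disk
tasks = [
    ("task_433.20Hz_0", {"base_freq_0433.10.dat"}),
    ("task_617.30Hz_0", {"base_freq_0617.25.dat"}),
]
print(pick_tasks(host, tasks, limit=1))  # the 433 task wins; no new download needed
```

The point is simply that two hosts with identical preferences but different data files on disk can see completely different pools of "available" work.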
OK, so why didn't those systems download the necessary base frequencies that were available? As far as I can tell, there's nothing I've done that prevents the server from sending me any frequency. If I already have the necessary data for work "A" but the server only has work "B" available, shouldn't the server send me the data for B instead of letting the system sit idle?
The mysteries of locality scheduling. I have 10 GB of existing base frequencies on the daily driver, yet it has been downloading new base frequencies non-stop for the past hour.
And that is for a cache size of ten tasks.
We seem to be running a new series of tasks after a fairly abrupt change - which probably explains the temporary 'outage window' while they were setting up the changeover.
Take a look at the task names. The former series tasks were _O2MDFS2_Spotlight_. The new series have the "S2" bit changed to "S3".
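If you don't want to eyeball every name, a throwaway snippet like the one below sorts a results list by series. Only the _O2MDFS2_Spotlight_ / _O2MDFS3_Spotlight_ substrings come from the real naming pattern; the sample names are otherwise invented:

```python
# Classify task names by series tag, per the S2 -> S3 change described above.

def series_of(task_name):
    if "_O2MDFS3_Spotlight_" in task_name:
        return "S3 (new)"
    if "_O2MDFS2_Spotlight_" in task_name:
        return "S2 (old)"
    return "other"

for name in [
    "h1_0433.10__O2MDFS2_Spotlight_433.20Hz_19_1",
    "h1_0617.25__O2MDFS3_Spotlight_617.30Hz_0_0",
]:
    print(series_of(name), "<-", name)
```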
Some of my hosts were getting the former 'original' tasks (not resends) during this time, some were getting resends and some were getting the 'no work available' message - intermittently with resends. Then the new series with the new data downloads started flooding in.
This all looked a bit 'unusual' so I put my work cache size back to a low setting to see what might eventuate. I have enough on board to wait for the dust to settle :-).
Cheers,
Gary.
These new S3 tasks are running two minutes longer than the old S2 tasks. 7-8 minutes needed now.
I wonder if they will reward larger credits.
Keith Myers wrote: These new S3 tasks are running two minutes longer than the old S2 tasks. 7-8 minutes needed now. I wonder if they will reward larger credits.
They won't. E@h credit reward is based on the predetermined "Estimated computation size" only, and these have the same 144,000 GFlops estimate as before, so they will get the same 1000 credits.
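Put differently, the granted credit is a function of the workunit's "Estimated computation size" alone, not of how long the task actually ran. A toy illustration of that; the per-GFLOP constant is just back-calculated from the 144,000 GFLOPs / 1,000 credit pairing seen on these tasks, not an official project number:

```python
# Toy illustration: fixed credit per task follows the estimated computation
# size, so a longer actual runtime changes nothing.
# CREDIT_PER_GFLOP is back-calculated from 144,000 GFLOPs -> 1000 credits;
# it is not an official Einstein@Home constant.

CREDIT_PER_GFLOP = 1000 / 144_000

def granted_credit(estimated_gflops, runtime_seconds):
    # Runtime is deliberately ignored: same estimate => same credit.
    return estimated_gflops * CREDIT_PER_GFLOP

print(granted_credit(144_000, runtime_seconds=6 * 60))  # old S2 task: 1000.0
print(granted_credit(144_000, runtime_seconds=8 * 60))  # new S3 task: 1000.0
```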
That estimated FLOPs size really needs to be increased to a more representative value of the actual FLOPs required, at least double.
I also notice that these new S3 tasks seem "less optimized" than the previous run, maybe requiring more CPU support. I see a noticeable dip in GPU utilization, which probably explains (at least in part) the longer run times.
Pretty hefty VRAM use too, about 2.5 GB. RIP to the 2 GB cards.