I was worried it was just my hosts, but it's not - for example this host is doing the same according to its log. Same BOINC version, different OS (Windows vs. Linux).
Also I have found that one of my single-GPU hosts isn't doing it - the only difference I'm aware of is that it's on 7.0.44 (all the others are 7.0.64 or .65).
Neil,
This got my attention, as the host you linked to is one of mine and, looking at my other machines, it's not the only one displaying this behaviour.
Seven of my hosts are running BOINC 7.0.64 and, including the one you linked to, four of the seven are contacting the server every minute. The others are 3641777, 6550900 and 5647876. All of these machines are single-GPU with the same preference settings, work caches and venue; they are on the same network and have the same ISP.
As for the remaining three: this host and this are again single-GPU machines with the same preferences etc., but they have smaller work caches and a different venue, and are also on a different network and ISP. No problem with this pair.
The final machine, my laptop, does not have a GPU and is used on both networks with no problem.
I think it's safe to assume the problem is not ISP or network related :-) and is more likely due to a quirk in the cache size settings within BOINC: the affected machines have caches of either 1 or 1.25 days (can't remember, will have to check), whereas the machines that behave are set at 0.75 days (all with a max. additional buffer of 0.10 days). That said, a quick look back at tbret's original post shows he has a cache of only 0.50 days... but then his machine was actually requesting and getting additional work, whereas mine are just saying "hello" regularly and repeatedly!
I will try Beyond's suggestion tomorrow: I'll either upgrade or downgrade BOINC on one of my affected hosts to see what happens, and will lower the cache size on another, then report back.
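For reference, the cache values being compared here are the 'Store at least X days of work' / 'Store up to an additional X days' preferences; set locally they live in a global_prefs_override.xml in the BOINC data directory and are reloaded with the Manager's 'Read local prefs file' item (if memory serves on the menu wording). A rough sketch of what the 1.25-day setup looks like, values illustrative only:

<global_preferences>
    <work_buf_min_days>1.25</work_buf_min_days>
    <work_buf_additional_days>0.10</work_buf_additional_days>
</global_preferences>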
If you haven't tried it yet, just restart BOINC. I experienced the same thing a week or so ago, when the client repeatedly contacted both Einstein@Home and Albert@home; I tried a few things that did not work and then just exited BOINC completely and restarted it. I haven't seen any problems since, still on version 7.0.64.
I tried restarting BOINC on the 2 boxes that had the problem; before too long they started downloading more work (WAY too much work). So far no problem since I switched to 7.1.3 (knock on wood). BTW, my queue size on both machines was 0.4 days with no additional queue set.
Gavin: Thanks for the feedback, and also Beyond for the info! Given that I twiddled my cache from 0.25 to 1.00 days, it does seem very possible it's related to some mix of scheduler changes in BOINC, cache sizing and (possibly) the long run time of BRP5 tasks. Still, assuming it's a bug, like most of them I'm sure it'll be 'obvious' once the cause is found :).
As a control I'll leave my hosts alone for now (although it's pretty clear that with 1300+ BRP5s to process, I'll have to do 'something' in the next week or so!).
Hi!
So we have two issues here, let me address them one by one:
1) fetching more work than seems reasonable: we are looking into it, and will probably lower the quota as a workaround.
2) clients contacting the server more often than is reasonable, with requests to fetch NO work at all. There have been reports at Seti@Home about this, and a rather strange workaround; quoting Eric Korpela on the boinc-dev list:
Quote:
[...] people have reported that it goes away when they select
"read config file", even if they don't have a config file.
As strange as it seems, you may want to try this.
Cheers
HB
The problem I had with the four hosts constantly contacting the server appears to be resolved. Following on from the suggestion by Holmis, I exited BOINC and restarted each of those hosts in turn; two hours have now passed and these machines have returned to normal operation.
Thank you Holmis.
Now I just need to work out why this host keeps randomly trashing small batches of GW tasks...
Thanks for the update, and confirmed that the 'read config file' workaround works! I did this on one of my affected hosts and it stopped the requests. I also tried stopping and restarting BOINC on another host, and (as Holmis suggested) that worked too.
I also subsequently performed a manual 'Update' on both hosts to see if it would re-trigger the polling issue, but it didn't.
On the other issue, any interim suggestions for those of us with too much work to process? The strategy here is probably to process as much as possible, then abort whatever won't make the deadline (although this means slow turnarounds for my wingmen).
Perhaps best move to another thread (as I've already sidetracked this one over the polling issue) but this looks suspicious:-
Mine worked for a while too and then started up again. After switching versions it hasn't recurred.
Quote:
On the other issue, any interim suggestions for those of us with too much work to process?
I would think it's far better to abort them now. There's no problem with aborting tasks; they go right back into the queue. If you hang on to them, it'll be 2 weeks before you abort and then they'll still go back into the queue. Better to do it now IMO.
I agree. Aborted tasks are instantly put on the resend list and will quickly be sent to other hosts that can process and return them faster than your host, which has to work through hundreds of them first... Not only is this good for the wingmen, it will also reduce the number of WUs that have to be kept in the DB servers.
It would also be better to abort the most recently received WUs first, keeping a reasonable amount of the older ones in your cache. Just sort the tasks by deadline and abort the ones with the longest deadlines first, until you have a doable amount of tasks. (The idea of keeping the ones with shorter deadlines is that their wingmen have already been waiting for them much longer than the wingmen of the others, and if you abort them now they will go to the bottom of the queue on another host, which is going to take even more time.)
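If you have hundreds to trim, the sort-and-abort can be scripted against boinccmd instead of clicking through the Manager. A rough sketch in Python; the project URL, the field labels parsed from 'boinccmd --get_tasks' output, the date format, and the assumption that the host only carries Einstein@Home tasks are all things to check against your own client before flipping DRY_RUN off:

# Rough sketch: keep the KEEP tasks with the nearest report deadlines, abort the rest.
# Assumptions (verify first): boinccmd is on PATH and authorised to talk to the local
# client; "--get_tasks" prints "name:" and "report deadline:" lines for each task;
# deadlines are ctime-style ("Mon Jun 10 12:34:56 2013"); only Einstein@Home tasks
# are present (otherwise filter on the "project URL:" lines as well).
import subprocess
from datetime import datetime

PROJECT_URL = "http://einstein.phys.uwm.edu/"   # project master URL (check your host)
KEEP = 200                                      # how many tasks to keep - pick your own number
DRY_RUN = True                                  # set to False only once the printed list looks right

out = subprocess.run(["boinccmd", "--get_tasks"],
                     capture_output=True, text=True, check=True).stdout

tasks, name = [], None
for raw in out.splitlines():
    line = raw.strip()
    if line.startswith("name:"):
        name = line.split(":", 1)[1].strip()
    elif line.startswith("report deadline:") and name:
        text = line.split(":", 1)[1].strip()
        try:
            deadline = datetime.strptime(text, "%a %b %d %H:%M:%S %Y")
        except ValueError:
            deadline = datetime.min             # unparsed deadlines sort first, i.e. get kept
        tasks.append((deadline, name))
        name = None

tasks.sort()                                    # nearest deadline first
for deadline, task in tasks[KEEP:]:             # everything beyond the KEEP nearest deadlines
    print("would abort" if DRY_RUN else "aborting", task, "deadline", deadline)
    if not DRY_RUN:
        subprocess.run(["boinccmd", "--task", PROJECT_URL, task, "abort"], check=True)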