I saw this thread, and had to have a read.
Something that I thought should be mentioned is this: Always make a point of checking the list of results for each of your machines, under "Your Account".
The last week has been a nightmare for me: I noticed that the S@h results listed on the server did not correspond to the number of WUs in my primary machine's cache. Eventually the machine's cache was empty, and BOINC told me (under the "Messages" tab) that I was getting no more work from the project.
Checking the results list web page showed that, according to the project, I had exceeded the daily quota: I had a list of over 100 results apparently waiting to be returned or processed. Within 12 hours, that list had grown to nearly 400.
Clearly, something was wrong!
Eventually, the problem was cleared by detaching from and re-attaching to the project on both machines. The nightmare turned out to be the result of other account issues, which have thankfully been sorted.
Anyway, to reiterate: check the list of results for your machines! This happened to me on a different project, but I suspect it is happening across the board.
My pending credit is again creeping up to the 1000 mark.
Two days ago I saw a computer with 1025 WUs, most of them outstanding and some for over a week. It had been a day or two since that machine last reported, yet it was still receiving WUs each day.
I also had a look at its benchmark, and it was not much different from mine, so it must have a very large cache setting to be able to hoard so many WUs. The time it took to complete a WU was also about the same as mine.
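A rough back-of-envelope check shows why that queue is absurd. The numbers below are hypothetical, just to illustrate the inference from similar benchmarks and crunch times:

    # Back-of-envelope estimate of the cache such a machine implies.
    # All numbers are hypothetical, just to illustrate the reasoning.

    wus_queued = 1025      # results listed against the machine
    hours_per_wu = 4.0     # assumed crunch time per WU, similar to mine
    cores = 2              # assumed number of CPUs crunching

    wus_per_day = cores * 24.0 / hours_per_wu   # ~12 WUs per day
    days_of_cache = wus_queued / wus_per_day    # ~85 days of work queued

    print(f"Throughput: {wus_per_day:.0f} WUs/day")
    print(f"Queued work: {days_of_cache:.0f} days")
    # With deadlines of roughly two weeks, most of that queue can only expire.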
The problem is that machines like that cause a lot of problems with pending results: other crunchers have to wait weeks for their credits, and of course WUs get sent out again as they expire on those machines.
Is there anything one can do to prevent this? Maybe something like the way LHC handles its quotas?
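For what it's worth, the usual server-side answer is an adaptive per-host daily quota, where hosts that let work expire get throttled. Here is a minimal sketch of that idea; it is not the actual BOINC or LHC server code, and the names and numbers are made up for illustration:

    # Sketch of an adaptive per-host daily quota, along the lines of what
    # BOINC-based projects can do. Illustration only, not project code.

    MAX_DAILY_QUOTA = 100   # hypothetical ceiling per host per day

    class Host:
        def __init__(self):
            self.daily_quota = MAX_DAILY_QUOTA

        def on_result_ok(self):
            # Reward hosts that return valid work: grow the quota back.
            self.daily_quota = min(MAX_DAILY_QUOTA, self.daily_quota * 2)

        def on_result_bad(self):
            # Errored or expired results halve the quota, so a host that
            # hoards work and lets it time out soon gets very little.
            self.daily_quota = max(1, self.daily_quota // 2)

    host = Host()
    for _ in range(5):
        host.on_result_bad()
    print(host.daily_quota)   # 3 -> a hoarding host is quickly throttled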
It is normal for pending credit to creep up after outages, because the 5.2 - 5.5.x clients backed off for up to a week, so it will take several days for computers to start talking to the project again.
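For anyone wondering what "backed off" means here: after failed scheduler contacts, the client waits exponentially longer between retries, capped (in those client versions) at about a week. A toy sketch of that behaviour; the constants are illustrative, not the client's exact values:

    import random

    # Toy sketch of client-side exponential backoff after failed
    # scheduler requests. Constants are illustrative only; the
    # 5.2 - 5.5.x clients capped the delay at roughly one week.

    MIN_DELAY_S = 60               # first retry after about a minute
    MAX_DELAY_S = 7 * 24 * 3600    # ceiling of one week

    def next_backoff(failures: int) -> float:
        """Delay before the next scheduler request after N failures."""
        delay = MIN_DELAY_S * (2 ** failures)
        # Random jitter spreads hosts out so they don't all hammer the
        # recovering server at the same instant.
        return min(MAX_DELAY_S, delay) * random.uniform(0.5, 1.0)

    for n in range(15):
        print(n, round(next_backoff(n) / 3600, 1), "hours")

That jitter is also why everyone's hosts trickle back over several days instead of all reconnecting at once.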