Side effect of optimized app

zagadka
Joined: 29 Apr 06
Posts: 12
Credit: 17,088
RAC: 0
Topic 191194

Without having done anything to make this happen I have just noticed that the BOINC client has downloaded more than 1 WU when it last requested work. Right now I have 6 WUs on my computer (1 ready to report, 1 preempted, and 4 ready to run). Is this an expected result from running one of the optimized apps (S41.06)?

archae86
Joined: 6 Dec 05
Posts: 3,150
Credit: 7,113,984,931
RAC: 613,476

Side effect of optimized app

Quote:
Without having done anything to make this happen I have just noticed that the BOINC client has downloaded more than 1 WU when it last requested work. Right now I have 6 WUs on my computer (1 ready to report, 1 preempted, and 4 ready to run). Is this an expected result from running one of the optimized apps (S41.06)?

Your general preferences have a value set for:
Connect to network about every:
(determines size of work cache; maximum 10 days)

The expected time to complete your pending work is estimated from your host's current benchmark ratings, the estimated work content of the individual WUs as downloaded from the project, and a "result duration correction factor", which learns over time whether you are actually getting work done faster or slower than expected.

You can view the result duration correction factor for each of your hosts by going to
project|your account|computers on this account|computer ID nnnnn.
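The estimate described above can be sketched roughly as follows. This is my illustration, not the client's actual code: the parameter names are invented, and the real client's bookkeeping is more involved.

```python
def estimated_runtime_seconds(rsc_fpops_est, benchmark_flops, duration_correction_factor):
    """Rough model of the client's completion-time estimate (illustrative only).

    rsc_fpops_est: the project's estimate of the WU's work content (floating-point ops)
    benchmark_flops: the host's benchmark speed (ops per second)
    duration_correction_factor: the learned ratio of actual to predicted runtime
    """
    return rsc_fpops_est / benchmark_flops * duration_correction_factor

# A WU estimated at 3.6e13 ops on a 2 GFLOPS host would nominally take 18000 s;
# with the correction factor at 0.29 (a typical value for an optimized app),
# the corrected estimate is much shorter.
print(estimated_runtime_seconds(3.6e13, 2e9, 0.29))
```

This is why a faster app shrinks the estimates: the first two inputs stay put, and only the correction factor adapts.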

Running a faster app does not change the first two inputs, but it does cause your result duration correction factor to drop below 1, possibly substantially. My Gallatin is currently at 0.29 and my Pentium M at 0.16.

If you are running more than one project, an additional issue is whether your short-term and long-term debts are near balance. If they are far out of balance, the client will run the over-resourced project's pending jobs down to zero, wait a while, then request new work; it requests too much, so a big burst arrives this way. When somewhat less out of balance, it will avoid pre-fetching for a while and then request too much, so the fetches are still rather bursty.

When near balance (I've had good luck adjusting short-term and long-term debt to within 200 of each other), at equilibrium the client requests new work very shortly after the pending work drops below the desired level, typically within a fraction of a second to a few seconds. This generally prefetches just one WU at a time, though if the server is unavailable for a couple of hours, the catch-up will generally be a bigger burst than needed, followed by a period of no prefetching.
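At equilibrium, the fetch behavior described above amounts to topping the queue back up to the "connect every" target whenever pending work falls below it. A minimal sketch, with names of my own invention (the real client also weighs per-project debts and resource shares when deciding which project to ask):

```python
SECONDS_PER_DAY = 86400

def work_shortfall_seconds(pending_runtime_s, connect_every_days):
    """How much runtime (in seconds) to request so the queue refills to the
    cache target; zero means no fetch is needed yet. Illustrative only."""
    cache_target_s = connect_every_days * SECONDS_PER_DAY
    return max(0.0, cache_target_s - pending_runtime_s)

# With a 0.1-day cache target (8640 s) and one hour of work already queued,
# the client would ask for roughly 5040 s of additional work.
print(work_shortfall_seconds(3600, 0.1))
```

The bursts come from the same arithmetic: after an outage, pending runtime has drained well below the target, so the shortfall, and hence the request, is large.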

The above are my observations, plus some reading, on the behavior of the Trux tx36 calibrating client running a SETI and Einstein workload, both with heavily optimized science apps. I'd welcome corrections or additional observations from others.

If you'd like a smaller queue, I suggest you go to:

project (any)|your account|general preferences|Edit preferences
and alter your "Connect to network about every" parameter to a lower value.

If you are using the Home/Work/School venue groupings, you must change this preference in the relevant section; otherwise just change the default preference.
