
## Correction: one of the

Correction: one of the figures in my data block below was wrong: sequence 502 took 27,352.58 seconds (it was out of order on the results page - that's what comes of collecting data manually). With that corrected, the error comes down to 0.6% - much better. Further results in the dedicated thread Gary has started.

## Err ..... guys, I've had an

Err ..... guys, I've had an 'end of the week' brain fade .... :-)

Could anyone please remind me of the why/how/who/what/where about the 1200 sky grid points, and thus derived the 0.000206 factor to multiply by the frequency squared to get the task cycle period? [ I'm reviewing the RR coding from tip to toe ].

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter. Blaise Pascal

## RE: Err ..... guys, I've

It was I who proposed the specific value of .000206 for the quadratic whose general form you suggested. It was in no way derived from skypoints and such, but just a curve fit to data spanning much of the frequency range processed to date.

Bikeman posted some material on how skypoints and such appeared to form a basis for this relation. For a small subset of data he examined, he suggested this way of looking at things implied a slightly lower value (maybe about .0002045?--memory fades). Another implication of his assessment was that the real relation would jump at intervals of 10 Hz, though not at the even 10 Hz boundaries themselves but slightly above them (where the skygrid in use actually changes over). A third implication was a possible noise factor arising from variation in the actual number of skypoints for a result, which seems generally to be near 1200, but with variation whose nature I've not seen characterized.

Meanwhile, back at my big-picture curve-fit way of looking at things, I've done more (unposted) fitting work to check whether a slightly lower value would fit better. For two big data sets in the 790 frequency range added to my previous reference, the answer I got was for one about .000206, and the other about .0002055--not enough different to respin.

Unfortunately as we go to both higher frequencies and higher sequence numbers, the error potential of having this number a bit wrong is growing, especially for results near peak value several cycles above zero.

Were the project to consider incorporating the cyclicity into their credit and completion-time estimates, their code might have access to actual skypoints and such, and could dispense with the quadratic. However, for individual end-user work of the form your Ready Reckoner supports, obtaining those numbers would be an additional data-collection step, and in particular not available to those trying to help another participant by just peeking at their task list.

## RE: Bikeman posted some

Here is the actual post where Bikeman made comments, including the observation that for the case he looked at the implied value of the multiplier in question was .0002044. However, on rereading his post, it appears that value is a fit to the relative line counts of just three skygrid files near 740, so I'm not sure there was enough evidence for a confident revision from that.

## I guess the (minuscule)

I guess the (minuscule) difference in the approximation of this constant comes from the "step-function" effect, where the same sky-grid (and supposedly the same period) applies to a 10 Hz wide frequency band.

I wonder whether a better approximation of phase would be:

period ~ (ceil(freq/10)*10)^2 * c

where c is the constant we were talking about.

Also, currently, we do the curve fitting based on

phase = frac(seq_no / period)

If we were extremely ambitious, we would sum up estimations for each individual one of the ca. 1200 skypoints. The first skypoint would have a "phase" as described by the formula above (e.g. 0 for the very first skypoint at a pole), while the last skypoint of a WU would be close to the phase of the next WU,

phase' = frac((seq_no +1) / period)

Therefore, I think it would be natural to do the curve fitting with a "mid-point" estimate of the phase for the workunit, say

phase = frac((seq_no +0.5) / period)

What do you think?

Maybe it would also be nice to include in the RR a chart using the alternative runtime model that uses a slightly simpler function:

T(phase) = Tmin + (Tmax-Tmin) * (2*phase-1)^2

(no sine function, and we can just do linear regression on

t(alpha) = Tmin + (Tmax-Tmin)* alpha

where alpha = (2*phase-1)^2

CU

Bikeman
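
[Editorial sketch of the formulas above in Javascript, the RR's own language. This is only an illustration of the stepped-period and mid-point-phase ideas in this thread, not actual Ready Reckoner or E@H code; c is the curve-fit constant discussed above, and the function names are invented.]

```javascript
// Stepped period: the same skygrid (hence, supposedly, the same period)
// serves a whole 10 Hz band, so round the frequency up to the top of its bin.
// c = 0.000206 is the curve-fit constant from earlier in this thread.
const C = 0.000206;

function period(freq) {
  const binTop = Math.ceil(freq / 10) * 10;
  return binTop * binTop * C;
}

// Mid-point phase estimate for a workunit: frac((seq_no + 0.5) / period)
function phase(seqNo, freq) {
  const x = (seqNo + 0.5) / period(freq);
  return x - Math.floor(x); // frac()
}
```

With this form, every frequency in (780, 790], say, lands in the same bin, so the fitted period only changes at the 10 Hz changeover points rather than varying continuously with frequency.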

## Another possible source of

Another possible source of error: we've all been talking as if the identification number in the WU name was an exact statement of the frequency. But as Pooh Bear 27 pointed out in this post, some versions of the app display a different, more precise value as their command line parameter. Are we guilty of reading too much into a two decimal place (rounded to the nearest 0.05) approximation?

## RE: I wonder whether a

In general I like to use forms that have a backing in theory and a confirmation in observation.

While my contribution here has only been on the curve-fitting (observation) side, I'd like to see confirmation of significant revisions.

But confirming the step-function effect from data may be difficult, and your understanding of the actual behavior of the app seems likely to be correctly identifying this step-function.

So my suggestion to Mike would be to adopt a 10-Hz step-function revision using the bin midpoint. Is it not the case that the actual boundaries of the steps would be not at round 10-Hz frequencies but slightly above? And how much is slight?

While I was typing Richard Haselgrove commented on frequency accuracy. But if I understand Bikeman's attribution of the effect to line-count in the skygrid file and total number of skypoints, then frequency matters only to the level of attributing the work unit to the correct 10 Hz-wide bin.

I have no current comment on your:

Quote:
T(phase) = Tmin + (Tmax-Tmin) * (2*phase-1)^2

I've not understood this one, nor undertaken any observation comparison.

Perhaps this is the place to confess that my professional work in the last ten years of my career (extracting useful improvement information from gigabytes of somewhat noisy production data for a certain major semiconductor manufacturer) left me always wanting to back automated methods such as least-squares or linear regression with direct graphic investigation of individual points. Outlier treatment is crucial, and unaltered least-squares in particular serves as an outlier amplifier rather than an outlier attenuator.
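
[A toy numeric illustration of that outlier-amplification point, with invented runtimes rather than anything from the actual data sets: the simplest least-squares fit, a constant, is the mean, and a single wild value drags it far from the bulk of the data, while a robust statistic like the median barely moves.]

```javascript
// Least squares as an "outlier amplifier": fitting a constant by least
// squares yields the mean, which one wild runtime drags far from the
// bulk of the data; the median, a robust alternative, barely moves.
const runtimes = [100, 101, 99, 100, 102, 500]; // one wild outlier

const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
const median = xs => {
  const s = [...xs].sort((a, b) => a - b);
  const m = Math.floor(s.length / 2);
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
};

console.log(mean(runtimes));   // pulled way up by the single 500
console.log(median(runtimes)); // stays near the typical value
```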

## Thanks for the replies

Thanks for the replies everyone. I was rolling it all around in my head and it wouldn't settle! :-)

Firstly I've finalised RR_V7A.

- it uses a primary derivation from Gauss' least squares method, without any polynomial approximations to sine. Bikeman is right that a quadratic will approximate a sine. While that works very well in the 'bowl', it strays in the 'wings'. My previous algorithm suffered particularly from small differentials in the bowl causing a large wing 'flap'. Since we want to grasp the whole sine excursion I'll stick with the slower entire sine evaluations. [ Generally if you 'centre' on the bowl you'd use a power series with even exponents, and if you 'centre' on the wings you'd use a power series with odd exponents. The Javascript 'Math' library will likely be using much the same.... ]

- on the face of it, for 'well behaved' point sets it will show only minimal differences in predictions compared to V6A : +/- ~ 100 seconds in times and ~ third decimal place in variance. It seems to cope better than V6A with Peter's wobbly 4.07 to 4.11 set shown earlier, and similar test cases. I will look into some sort of 'auto' outlier identification and suppression, and think some more as per Peter's comments on that. The issue is jamming some fancy math functionality into Javascript, but I'm slowly accumulating some layers of that! :-)

- it will report on closer phase cases where before it refused. You still need to be careful with interpreting really close sequences though. It should only really vomit badly for some low point-count solutions: say two points with close runtimes for consecutive sequence numbers, or big runtime differences for close phases. The algorithm can't rescue all crappy data set types, so: if the plot doesn't look right -> don't use it! :-)

- as per Gary's suggestion there's a big button to toggle the showing of all points in their original cycles or mapping back into the first cycle.

- the reporting area has a new button for brief/verbose styles.

- both of these new buttons indicate the current mode setting, not what you'll get if you press them.

- the prev/next sequence area has another output, below the actual sequence number, showing how that maps back into the first cycle. This can yield fractional sequence numbers - hence it stimulated my question about the period.

- as Richard tells me ( PM ) there can be problems with the lower page layout when re-sizing is done. I'll look into this. The graph per se won't resize without a re-draw ( library limitation ) and I'll have to take care with cross-browser/browser-version issues of widths. Marked down for RR_V8!

- no error is reported on estimates as I've yet to decide upon the best indicator. I'll probably use the absolute values of the residuals, expressed as a percentage of their measurements, then average them over all points? For RR_V8 .....
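
[One possible reading of that proposed RR_V8 error indicator, sketched in Javascript: the absolute residual as a percentage of each measurement, averaged over all points. The function name and demo numbers are invented; this is not actual RR code.]

```javascript
// Candidate RR_V8 error indicator: for each point, |measured - predicted|
// expressed as a percentage of the measurement, then averaged over all
// points. Inputs are parallel arrays of runtimes in seconds.
function meanAbsPctResidual(measured, predicted) {
  let sum = 0;
  for (let i = 0; i < measured.length; i++) {
    sum += Math.abs(measured[i] - predicted[i]) / measured[i] * 100;
  }
  return sum / measured.length;
}
```

For example, measurements of 100 s and 200 s predicted as 110 s and 190 s give residuals of 10% and 5%, averaging to 7.5%.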

Now I'm going to think quite a lot further about your very helpful observations! I hadn't picked up on that frequency comment of Pooh Bear's ( well spotted ) .... and the frequency ceil/step/bin/midpoint concepts are very worthy. :-)

I'm going to ease up on any new RR release/development for a few weeks as I've a big maths exam in 10 days!

But, as usual, ..... nag me if something blows up! :-)

Cheers, Mike.

## Slight change with RR_V7C one

Slight change: with RR_V7C one can enter/vary a sky-grid density value.

Cheers, Mike.

## RE: I have no current

Here's how I came up with this:

The major working hypothesis behind all this is that the declination (the "sky-coordinate latitude") of a skypoint is the major factor in the runtime needed to process it. Not quite true, but close enough, hopefully.

So I went along and did some standalone, offline experiments, running the E@H client with single-point skygrids of my own and plotting runtime against declination, and the result looked very close to

runtime(delta) = T0 + c*(sin(delta))^2

When you then go further and calculate the declination of a skypoint for a workunit at phase ph, then some equations later the final formula for runtime is

runtime(ph) = T0 + B*(2*ph-1)^2

So here the quadratic function is not meant as an approximation to the sine function, it's supposed to be the "real thing", suggested by the experimental evidence from single-skypoint timing tests.

Now, a few simplifications and assumptions were made here, so the sine-based runtime model might still be a superior fit, even if theory suggests a quadratic function.
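
[Editorial note: the omitted middle step presumably relies on the skygrid spacing points roughly uniformly in sin(declination), so that sin(delta) is approximately 2*ph - 1 and the sin^2 law turns directly into the quadratic. Given that model, Tmin and B can then be recovered by plain linear regression on alpha = (2*ph-1)^2, as suggested earlier in the thread. A Javascript sketch, with an invented function name and noise-free demo data:]

```javascript
// Fit the quadratic runtime model T = T0 + B*(2*ph-1)^2 by ordinary
// linear regression of runtime against alpha = (2*ph-1)^2.
// Returns { T0, B }: T0 is the intercept (Tmin), B the slope (Tmax-Tmin).
function fitQuadraticModel(phases, runtimes) {
  const alphas = phases.map(ph => (2 * ph - 1) ** 2);
  const n = alphas.length;
  const mx = alphas.reduce((a, b) => a + b, 0) / n;
  const my = runtimes.reduce((a, b) => a + b, 0) / n;
  let sxy = 0, sxx = 0;
  for (let i = 0; i < n; i++) {
    sxy += (alphas[i] - mx) * (runtimes[i] - my);
    sxx += (alphas[i] - mx) ** 2;
  }
  const B = sxy / sxx;    // slope: Tmax - Tmin
  const T0 = my - B * mx; // intercept: Tmin
  return { T0, B };
}
```

With demo data generated exactly from T0 = 100 and B = 50 the fit returns those values; with real timings the regression would of course only approximate them, and per the outlier discussion above it should be backed by a look at the plot.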

CU

Bikeman