S5R3

Astro
Joined: 18 Jan 05
Posts: 257
Credit: 1000560
RAC: 0

I'm all out of questions (for now).

My intentions were (displayed in a "there's a hole in the bucket, dear Liza" kind of way):

1) If I were going to run a project, then I'd be collecting data.

2) If I were collecting data, then I'd want to be getting all the meaningful/useful data right from the get-go.

3) If I were collecting meaningful/useful data, then I'd need to label the data in a universally accepted fashion.

4) Didn't want an "end if" statement. (Small programming joke included for free.)

Thanks for your assistance.

tony

archae86
Joined: 6 Dec 05
Posts: 3153
Credit: 7162554931
RAC: 608215

As Bikeman has requested, I'm posting this periodicity comment here rather than in the 4.25 beta thread. However, evaluating the relative speed of a small number of results computed with a new app or under otherwise revised conditions depends crucially on accounting for this periodicity, so I hope beta thread participants interested in execution time will find their way here.

Here is a fresh look at app 4.15 execution times on one 3.006 GHz E6600 WinXP Core 2 Duo host. I've called "sequence number" what Richard here has called "task number".

The points for a frequency near 643 form about two and three-quarter cycles, so they show the characteristics pretty clearly. For sufficiently low frequencies, the period is so short that the waveform is severely undersampled, so it would be easy not to recognize the periodicity at all.

However, as Bikeman pointed out, you should not assume the peak-to-valley ratio for another system is the same as mine. Also, variation in non-Einstein activity on a system will put more noise in the cycle than seen here. This system is not my daily runner, and has quite a stable configuration and load.

I've collected together in one graph periodicity data for several points in the frequency range. All data are from S5R3, though not all from the same host. (Two of my hosts and at least one of Richard's hosts contributed.)

The relationship of period to frequency seems quite orderly--wrinkles in the implied curve are more likely caused by imprecision in my estimates than anything else.

With this curve, and the observation that sequence number zero seems to start at the peak of the cycle, you can estimate where in the cycle a result lies, at least for frequencies above 250 or so.
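
To make that concrete, here is a minimal Python sketch of the estimate, assuming (per the observations above) that sequence number zero sits at the peak of the cycle; the period itself would come from the fitted curve (a functional form for it appears later in the thread):

```python
def cycle_phase(seq_num, period):
    """Estimate where in the runtime cycle a task lies, assuming
    sequence number zero starts at the peak of the cycle.
    Returns a fraction in [0, 1): 0.0 is a peak, 0.5 a valley."""
    return (seq_num % period) / period

# Hypothetical example: with a period of ~30 tasks (roughly what the
# later fit gives near frequency 383), sequence number 45 sits about
# halfway through a cycle, i.e. near a runtime valley.
print(cycle_phase(45, 30.2))  # ~0.49
```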

Astro
Joined: 18 Jan 05
Posts: 257
Credit: 1000560
RAC: 0

Message 73324 in response to message 73323

OK, you've spurred more questions. It's not my fault. LOL

So, from this chart, is it nearly accurate to say "I can expect to have to collect 32 consecutive samples (at a minimum) to fully represent a WU series with a freq of 383"? Am I reading/understanding this right? This goes toward my decision on whether to stick with 4.15 or not, i.e., is it possible to get something useful out of running it?
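
(A worked check against the power-law fit posted further down the thread: per ≈ 0.000206 × freq² gives 0.000206 × 383² ≈ 30 tasks per cycle at frequency 383, so collecting ~32 consecutive samples would indeed span one full cycle.)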

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2905045060
RAC: 690725

To reply to an earlier question:


[Chart: frequencies worked on, with the maximum task number logged for each (direct link)]

Since the start of the S5R3 run, my home BOINCview (6 hosts, 19 cores in total) has logged 1126 Einstein tasks. These are the frequencies that I've worked on, and the maximum 'task number' or 'sequence number' I've encountered for each frequency. The absolute highest number I've logged is 398, but since that was only four days ago (Jan 16), I wouldn't rely on it being an upper bound at only 25% of the way through the run - the frequencies on issue seem to be increasing with time.

And as regards starting a new timed observation run - I think you've chosen exactly the wrong weekend to come back and start doing that! Judging by the failure of the hot loop, my expectation is that 4.15 will be replaced sooner rather than later, but it won't be replaced by 4.25. Perhaps the most useful thing to do at this moment is to be ready to speed-test the successors to 4.25 as they come out, and report the results from a variety of CPU architectures.

Astro
Joined: 18 Jan 05
Posts: 257
Credit: 1000560
RAC: 0

So, from what I'm both figuring out and being told, meaningful data collection toward any "specific" goal is pretty pointless ATM. But running the "beta" application might still help in determining the worthiness of the app itself. So, if I do work, the most helpful way would be to run the "beta" app, but without any expectations of contributing meaningful data about either app. OK, decision time ahead.

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282700
RAC: 0

Message 73327 in response to message 73325

Quote:
Judging by the failure of the hot loop, my expectation is that 4.15 will be replaced sooner rather than later, but it won't be replaced by 4.25. Perhaps the most useful thing to do at this moment is to be ready to speed-test the successors to 4.25 as they come out, and report the results from a variety of CPU architectures.

I hope that is the case. While I'm sure it is of great benefit to the project to correct the graphics problem, I don't know that this large a performance drop on the already struggling, and perhaps slowest, platform would be a good "PR" move... I guess the first replacement app I'd want is one that does the graphics the new way but goes back to the old sin/cos code, as all indications are that the SSE piece won't be ready in short order; it would be better, IMO, to get only the graphics fix out there. This of course assumes that they can't help the compilers along with the linear method, and that the former method isn't somehow slower now due to something else... I know... not likely, but I'm a "think the worst-case scenario" kind of guy... :-)

BTW, I didn't mean any offense by stating I was expecting Bikeman to come along. I had noticed that he had been posting, so obviously he was awake...

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6578
Credit: 304077790
RAC: 254508

Message 73328 in response to message 73323

Quote:

The relationship of period to frequency seems quite orderly--wrinkles in the implied curve are more likely caused by imprecision in my estimates than anything else.

With this curve, and the observation that sequence number zero seems to start at the peak of the cycle, you can estimate where in the cycle a result lies, at least for frequencies above 250 or so.


Looks like a 'gentle' quadratic - perhaps you could try a log/log plot and see if you get a line? Then check the slope and intercept? [ If y = a*(x^n) then log(y) = log(a) + n*log(x) ]

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ... ( Blaise Pascal )

... and my other CPU is a Ryzen 5950X :-)
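
For anyone who wants to reproduce the check Mike suggests, here is a minimal numpy sketch of the log/log fit; the (frequency, period) pairs are illustrative placeholders consistent with the fit reported next, not the actual measured values:

```python
import numpy as np

# Placeholder (frequency, period-in-tasks) estimates; the values
# read off the actual graphs would be substituted here.
freq = np.array([250.0, 383.0, 500.0, 643.0])
period = np.array([12.9, 30.2, 51.5, 85.2])

# If period = a * freq**n, then log(period) = n*log(freq) + log(a),
# so the slope of a straight-line fit in log/log space is the
# exponent n and the intercept recovers the multiplier a.
n, log_a = np.polyfit(np.log(freq), np.log(period), 1)
print(f"exponent n ~ {n:.2f}, multiplier a ~ {np.exp(log_a):.6f}")
# With these placeholder points: n ~ 2.00, a ~ 0.000206
```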

archae86
Joined: 6 Dec 05
Posts: 3153
Credit: 7162554931
RAC: 608215

Message 73329 in response to message 73328

Quote:
Looks like a 'gentle' quadratic - perhaps you could try a log/log plot and see if you get a line? Then check the slope and intercept? [ If y = a*(x^n) then log(y) = log(a) + n*log(x) ]


Good call. The data on a log/log plot fit a straight line pretty well.

For the points in hand, a period estimate:

per = 0.000206 * freq^2

fits the data to within the likely error of my period estimates.

As the estimates with appreciable accuracy only span about half a decade, I'd not put too high a bet on the proposition that it is truly square law, but for estimating purposes having a functional form is handy.

This is not a formal fit, by least-squares or any other method. I just tried the initial suggestion that it might be square law, and fiddled the multiplier until I liked the residual distribution.
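
As a sketch, the eyeball fit is easy to put in usable form; this is just the formula above in Python, not anything taken from the project code:

```python
def period_estimate(freq):
    """Eyeball fit from this thread: runtime-cycle period, in
    tasks, as a function of search frequency (roughly valid for
    frequencies above ~250)."""
    return 0.000206 * freq ** 2

# Examples: ~30 tasks per cycle near frequency 383 (matching the
# ~32-sample estimate earlier in the thread), ~85 near frequency 643.
print(period_estimate(383))  # ~30.2
print(period_estimate(643))  # ~85.2
```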

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282700
RAC: 0

Message 73330 in response to message 73329

Quote:
As the estimates with appreciable accuracy only span about half a decade, I'd not put too high a bet on the proposition that it is truly square law, but for estimating purposes having a functional form is handy.

Days like this are when I wish I had stuck with Chemistry / Physics... It's been far too long since I did anything beyond "basic" math (aka "business math") for me to follow the bouncing ball here... at least not completely...

All you math nerds, I salute you... :-)

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6578
Credit: 304077790
RAC: 254508

Well, I'll be a really annoying nerd and reveal why I think a power law, about quadratic, is a likely guess. Since I haven't seen the source code, this explanation could be total rubbish, though... :-)

I have a copy of Peter Saulson's book 'Fundamentals Of Interferometric Gravitational Wave Detectors' where he describes the general forms of the algorithms for signal detection. Most are based upon the matching of a presumed signal template with the actual data.

This often involves calculating some variety of correlation function that is evaluated for each of a series of time offsets between the data and the template [ it is an integration, over the time domain, of the product of the two waveforms ]. A 'good' match gives a higher value for this summation than adjacent choices of time offset: noise tends to cancel out, lessening the integral's value, whereas correlated values tend to boost the summation [ there will be some trigger value of the signal-to-noise ratio needed to achieve 'significance' ]. The point is that faster-varying functions ( higher frequency waveforms ) need finer increments in the time offset used, that is, more correlation integrations to perform. Otherwise the 'sweet spot' where the correlation is maximal may be missed if you shift along by too much to the next choice of time offset. So for a given total span of time offsets to test, the number of such correlations goes linearly with frequency.

The other aspect is probably accounting for Doppler shifting in the received signal. It's a bit messy, but the geometry dictates that the angular resolution ( in the sky ) of the 'synthetic aperture' we are creating by our calculations is inversely proportional to the frequency being searched for. This is a generic property of 'beam antennae'. Anyhow, for a given total angular span ( say pole -> equator -> pole ) the number of angular intervals to examine also goes linearly with frequency. Otherwise sky positions with a candidate signal may be missed - and there's not much point using increments finer than your base resolution.

The upshot is that if I double the frequency I have to halve the step-along in angle that is used, thus doubling the total number examined, and for each of those I have to double the number of time offset correlation integrations. Otherwise I might miss a signal in either the angular or time domain. This is quadratic behaviour.

It won't be exactly quadratic, of course, as there are other aspects that aren't frequency bound, but the fit below seems to pass the 'Mark I Eyeball' test... :-)

Cheers, Mike

( edit ) NB. The variation in execution time with angular position ( pole -> equator ) is a separate issue/pattern. As spherical polar co-ordinates are the chosen method of designating sky positions, there is greater overlap in sky area/solid angle near a pole compared with the equator. Basically you can skip through the polar areas quicker ( bigger steps ), as prior or later runs at adjacent longitudes will catch/overlap them. [ Try cutting some ~ spherical fruit into, say, 8 slices with 4 cuts in the vertical plane through the core, each rotated ~ 45 degrees, and note/examine how a slice is fat in the middle and gets narrower toward each end. Then eat it, to achieve full mathematical value... :-) ] From memory, I think it was late '06 or early '07 that the all-sky search strategy/pattern was adjusted to account for this behaviour - thus speeding up the entire science run analysis.

I have made this letter longer than usual because I lack the time to make it shorter ... ( Blaise Pascal )

... and my other CPU is a Ryzen 5950X :-)
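
To make the counting argument concrete, here is a toy Python sketch; it is purely illustrative of Mike's reasoning, with made-up step sizes and reference values, and is not taken from the actual search code:

```python
import math

def relative_work(freq, f_ref=250.0):
    """Toy model of the argument above: the number of sky-grid
    points and the number of time offsets each grow linearly with
    search frequency, so total correlations grow as freq**2."""
    sky_points = freq / f_ref    # angular resolution ~ 1/frequency
    time_offsets = freq / f_ref  # finer time-offset steps needed
    return sky_points * time_offsets

# Doubling the frequency quadruples the work:
print(relative_work(500.0) / relative_work(250.0))  # -> 4.0

def ra_step(dec_deg, equator_step_deg=0.5):
    """Toy version of the edit note: near the poles, adjacent
    longitudes overlap, so the right-ascension step can be coarser
    by roughly a factor of 1/cos(declination)."""
    return equator_step_deg / max(math.cos(math.radians(dec_deg)), 1e-6)

print(ra_step(0.0))   # 0.5 deg at the equator
print(ra_step(60.0))  # 1.0 deg at dec 60: bigger steps toward the pole
```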
