Fourier Stuff - Part 3

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 318749181
RAC: 408119
Topic 195784

Let's carry over from the previous thread. Any questions about anything Fourier ?? :-) :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 318749181
RAC: 408119

Fourier Stuff - Part 3

Discrete Convolution

With discrete data one can convolve/correlate directly, without any Fourier involvement. Just shift the relative timing of the record and the template by some offset, then multiply the corresponding entries and add them up to a total :

Specific calculations for an offset of five are as follows :

..... which you'll recognise as the inner product of two vectors. This is the discrete version of 'common area under two curves' earlier illustrated for continuous variables.
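Since the original figures aren't reproduced here, a minimal Python/NumPy sketch of that single-offset inner product, using made-up values in place of the post's record ( and just one reasonable convention for the shift ) :

    import numpy as np

    rng = np.random.default_rng(0)
    record = rng.integers(-9, 10, size=15)   # made-up 15-entry record standing in for the post's values
    template = record                        # auto-correlation : the template is the record itself

    offset = 5
    a = record[offset:]                      # slide the record along by five places ...
    b = template[:len(a)]                    # ... and line it up against the template
    print(np.dot(a, b))                      # the inner product for that one offset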

But you have to do that inner product with each offset, and then see which offset gave the 'best' match :

No surprises that the unshifted version gave the peak correlate. A couple of points come to mind.

I've correlated something with itself ( auto-correlated ) so all I'm really calculating is how the pattern varies across its own profile.

The data was random, and if I label the template as the 'signal' and the shifted versions as 'noise' then the signal to noise ratio seems to be about two here ( compare the magnitudes of the values : 411 at zero offset with the next highest at -200 ). Lesson : don't get all excited/whoopee/bingo about low SNRs, as these things routinely happen with random inputs. It's called statistical variation, meaning that sampling any data set that has no trend preference ( that's one meaning of 'random' ) will give you this day in, day out. No biggie*.

But you've gotta know what Mr Random will serve up normally before you can assign true significance, otherwise one chases ghosts that are the products of human imagination. It's a little like staring at the white noise pattern on a TV set when the roof aerial has fallen off : your mind will 'invent' patterns within the picture and slide them towards a preferred topic. See also star groupings into constellations, cloud shapes and human faces on Mars hillsides in the right light ..... if we had a greater cultural interest in dog poo then we'd be seeing that all over the solar system. :-) :-)
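To see what Mr Random routinely serves up, here is a hedged NumPy sketch of the full offset sweep on a random record ( again with made-up values in place of the figures ) : the zero-offset peak is often only a factor of two or so above the biggest off-peak magnitude, much like the 411 vs -200 above.

    import numpy as np

    rng = np.random.default_rng(0)
    record = rng.integers(-9, 10, size=15)             # a made-up random record
    sweep = np.correlate(record, record, mode='full')  # the inner product at every relative offset

    zero_lag = sweep.max()                             # the auto-correlation always peaks at zero offset
    off_peak = np.abs(np.delete(sweep, sweep.argmax())).max()
    print(zero_lag, off_peak, zero_lag / off_peak)     # the ratio is typically only a factor of two or so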

As mentioned, this direct approach is generally not as efficient as taking the Fourier route into the frequency domain and doing the comparison there.

Cheers, Mike.

* NOTE WELL : I could say "the signal exceeded the noise by 100% !!" and use language tricks to sound really excited, emphatic and very plausible, as though I'd hit paydirt on some desired trend. This is known as lying - no less - with statistics, because I didn't give you the back story on the context in which the figures were produced. I can pretend that my conclusions mean something simply on account of involving mathematics to present a viewpoint. This works especially well upon those who haven't the first clue about the relevant techniques and are forced to either accept 'authority' or attempt a rebuttal without the appropriate intellectual/language tools. This, alas, is as common as mud across a lot of contemporary discussions like economics, accident trends, healthcare, demographic shifts and even changes to train timetables ( to quote a recent example in my home state of Victoria ).

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 318749181
RAC: 408119

Note On The SNR For GW

For GW interferometers there is a working definition called the 'horizon distance' : the distance to an optimally oriented ( face-on binary system ) and optimally located ( directly overhead the detector ) source which yields an SNR of 8. Other positions and orientations lower that distance. That doesn't mean an SNR of 8 will or won't be accepted for a detection, nor that a source within/beyond that distance can or can't be heard, but it is a way of characterising an IFO's performance and a standard measure for discussion purposes. I've mentioned earlier that the 'gold standard' for LIGO is an SNR of 20, which is very stringent indeed.

Here's a thing ( thank you Brad Osgood from Stanford On iTunes ) : take MATLAB's random number generator and populate a 1-D matrix/vector with say 15 elements in total. I'll use the previous post's numbers and take absolute values ( so they're all positive ) :

Then convolve that pattern with itself :

By 'normalise' I mean divide the whole pattern by a constant related to its 'total area under the curve'. This doesn't change the final conclusions here, but stops the vertical scale going silly .....

Yeah. The Gaussian curve appears. This is a demonstration of the Central Limit Theorem - a close cousin of the 'law of large numbers' - which says that the sum of many independent repetitions of just about any random quantity will follow exp[-x^2] statistics. One can 'prove'* this by starting with a large number of independent, identically distributed quantities/variables, say the steps of a drunkard's walk : the probability density of their sum is the repeated convolution of the individual densities, which in the alternate domain via Fourier becomes a repeated multiplication of the transforms. Anyway, if you get a Gaussian in one domain then it's also a Gaussian when transformed to the other. Note that I could have started with any other random pattern and still wound up eventually converging to the bell shape.
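A minimal NumPy sketch of the same experiment ( Python rather than MATLAB, and with made-up random values since the originals aren't reproduced here ) : keep convolving the pattern with itself, normalise, then compare the result against a Gaussian fitted to its mean and variance.

    import numpy as np

    rng = np.random.default_rng(42)
    x = np.abs(rng.standard_normal(15))      # 15 positive random values, as in the post

    pattern = x.copy()
    for _ in range(5):
        pattern = np.convolve(pattern, x)    # repeated self-convolution ...
        pattern /= pattern.sum()             # ... 'normalised' by the total area under the curve

    # Fit a Gaussian to the result's mean and variance and compare.
    t = np.arange(len(pattern))
    mu = (t * pattern).sum()
    var = ((t - mu) ** 2 * pattern).sum()
    gauss = np.exp(-(t - mu) ** 2 / (2 * var))
    gauss /= gauss.sum()
    print(np.abs(pattern - gauss).max())     # a small residual : the bell shape has emerged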

What this demonstrates in practice is that : if I keep convolving some fixed template with successive stretches of data then a Gaussian type spike will appear to the extent/degree that those data stretches contain something approximating my template. The better the template/data fit the quicker to appear and the sharper the spike will be. Vice-versa. This holds regardless of detailed method - direct convolution as above, Fourier trickery etc. If all of the data segments were 'pure noise' ( whatever that exactly means ) then it's unlikely that a spike will rise and stay up, no matter how many times I convolve against a given template. If the frequencies of repetition along the horizontal/time axis don't sufficiently match between template and data then there will be a 'slide' in the convolution/matching process as they get progressively more out of phase with one another. The reason the spike popped up so easily in the above example is that the frequency ( of something with respect to itself ) and the phase ( of something with respect to itself ) were exact matches.
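As a hedged illustration ( a NumPy toy, not the actual E@H pipeline ) : correlate a fixed template against many successive noisy stretches, each containing a weak copy of that template, and accumulate. The spike builds up at the matching offset while the noise averages away.

    import numpy as np

    rng = np.random.default_rng(1)
    template = np.sin(2 * np.pi * 0.1 * np.arange(50))   # hypothetical template : five cycles of a sinusoid
    n_stretches = 200
    acc = np.zeros(2 * len(template) - 1)

    for _ in range(n_stretches):
        stretch = 0.3 * template + rng.standard_normal(len(template))  # weak signal buried in noise
        acc += np.correlate(stretch, template, mode='full')            # correlate at every relative offset
    acc /= n_stretches

    print(acc.argmax())   # the peak lands at index 49, the full-overlap ( zero lag ) point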

[ clever punters may detect an issue with the total length, in time, over which convolution will be applied. So if I have 20 consecutive hours of data do I match the whole lot in one go, or 20 lots of 1 hour at a time followed by an averaging of those 20 segments? I don't fully understand the pros and cons here but I imagine (a) the convolution worsening due to phase sliding for close template/data frequencies over a single 20 hour stretch but (b) on the other hand slicing to smaller lengths reduces SNR ie. detection sensitivity otherwise. I'll need to research/explore where the happy 'Three Bears' solution is .... :-) ]

Now if you are trying to tune in to a certain pattern, then the more crappy/random input that is mixed in at the receiver, the harder a time you will have being assured that the desired signal was actually received. At the start of the analysis you don't know whether the signal is there - that's why you're analysing - so you proceed onwards and find out whether a spike appears.

One needs to decide what 'a spike' is though, especially the height of the spike's tip/peak vs the overall level of the numbers. There was considerable rabble rousing with the early resonant bar detectors on this exact point : do you decide in advance of analysis what is significant, or do you look afterwards and 'construct a case' depending on what you find? To alleviate this type of issue - being charged with retrospective wishful thinking rather than objective measurement - the IFO analyses have : a preset SNR of 20 ( that's exceptionally rigorous ) and all manner of checks & double blinds ( eg. Big Dog ). It is crucial to understand the enormous difference between retrospective wishful thinking - I call them 'just so stories' - and rigorous triggers determined in advance. It's a key method of not fooling yourself ..... and probably the hardest thing for laymen to grasp, on account of it discarding 'intuitive' input, like we use for everyday decisions. Intuition is fine for constructing/discovering the questions to address but is an incipient disaster when applied to the analysis of results ( you'll 'find' the trend you prefer rather than what's really there ). In my view that's the main reason why there is generally a theory vs experiment 'divide' in most sciences. It's extraordinarily hard to isolate intuition from objectivity inside a single head. Precious few have done it perfectly, eg. Enrico Fermi. Beware fields of inquiry where this issue is not even acknowledged ......

Time is the key. For LIGO data detection chances go like SQRT(length_of_interval), so to double those chances ( all else the same ) you quadruple the elapsed time of the data run to be analysed. Prolonged lock is a really important goal at the IFOs, not simply because you may miss something during an outage but also because long stretches allow quieter signals to be dug out by later analysis.
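A hedged toy model of that scaling ( assumed numbers, not LIGO's ) : a constant per-sample 'signal' buried in unit-variance noise, integrated coherently. The coherent part grows like T while the summed noise only grows like SQRT(T), so quadrupling T roughly doubles the SNR.

    import numpy as np

    rng = np.random.default_rng(0)
    s = 0.05                                   # assumed per-sample signal amplitude

    for T in (1_000, 4_000):                   # quadruple the stretch length ...
        noise_sums = rng.standard_normal((2_000, T)).sum(axis=1)
        snr = s * T / noise_sums.std()         # signal sum ~ s*T, noise std ~ sqrt(T)
        print(T, round(snr, 2))                # ... and the SNR roughly doubles ( ~1.6 then ~3.2 )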

You can combine - indeed E@H has - separated shorter intervals ( say lock was lost from a ground tremor, causing a gap in the record ) into longer stretches, BUT you have to preserve phase as best as possible. Meaning you have to know exactly when one interval ends and another begins, lest you convolve a signal with itself slid by some fraction of a period ( if slid by half a cycle, it'll zero out ). I'd assume longer gaps between records could confound that, and higher frequencies ( ie. shorter periods ) more so. If one's signal templates include a frequency derivative ( something spinning up or down ) then any gap is going to accumulate a phase shift on that assumption. Hence the time standard adopted for the whole shebang has to have its errors well below the other tolerances in the matter. From memory I think LIGO uses GPS timing signals with an uncertainty of nanoseconds. But I've read articles discussing the development of an 'in house' standard to avoid any problems with GPS availability ( by which I think is meant not only some GPS system/hardware failure, but deliberate signal degradation for other reasons ).
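Some rough phase-error arithmetic under assumed numbers ( not LIGO's actual tolerances ) : a signal near frequency f accumulates a phase error of about 2*pi*( f*dt + 0.5*fdot*dt^2 ) for a timing error dt, so nanosecond timing is utterly negligible while a half-millisecond slip at 1 kHz is already half a cycle.

    import math

    f = 1000.0       # Hz : an assumed signal frequency
    fdot = -1e-9     # Hz/s : an assumed spin-down rate

    for dt in (1e-9, 1e-6, 5e-4):    # a nanosecond ( GPS-ish ), a microsecond, half a millisecond
        dphi = 2 * math.pi * (f * dt + 0.5 * fdot * dt ** 2)
        print(dt, dphi / (2 * math.pi), "cycles of phase error")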

The one saving grace of the whole deal is that the noise is 'random'. By which I specifically mean that over time it has no especial trend/preference in direction with respect to the signal from the phenomena of interest ( neither enhancing nor diminishing it ). Well, provided you remove or minimise systematic effects, instrument artifacts et al. My favorite analogy here is the seashore : as the tide moves over the course of hours, many waves will come and go, but gradually the average excursion of positions ( say the highest point/line reached by the water ) will drift. Even though any given wave may vary & splash way more than the average moves. Here the SNR is well below unity.

Cheers, Mike.

* The gag is that the mathematicians go with it because physical experience confirms it, whereas the experimentalists rely on it because the maths demonstrates it must be so. Everybody 'buys' it .... ;-)

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 318749181
RAC: 408119

RE: The better the

Quote:
The better the template/data fit the quicker to appear and the sharper the spike will be. Vice-versa. This holds regardless of detailed method - direct convolution as above, Fourier trickery etc.


Sorry. Misleading wording. By 'quicker to appear' I definitely meant with respect to the number of times convolution is applied to get a given SNR, obviously regardless of the specific time-based efficiency of the calculation method used to perform a given convolution. My whole thrust in this discussion has been that FFT is better than DFT which is better than case by case time domain convolution, so on the face of it my original wording looks stupid .... :-) :-)

Evidently I could take the original 'template', Fourier transform it to the frequency domain, multiply that transform by itself five times, and finally transform back to get that bottom Gaussian in the time domain!
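A hedged NumPy sketch of that equivalence ( made-up values again ; note the zero-padding to the final length so the frequency-domain product matches the linear convolutions ) :

    import numpy as np

    rng = np.random.default_rng(42)
    x = np.abs(rng.standard_normal(15))        # the stand-in 'template'

    # Time-domain route : convolve the pattern with itself repeatedly.
    direct = x.copy()
    for _ in range(5):
        direct = np.convolve(direct, x)

    # Frequency-domain route : one forward FFT, raise it to the matching power, one inverse FFT.
    n_out = 6 * len(x) - 5                     # the length after the five convolutions above
    X = np.fft.rfft(x, n=n_out)                # zero-pad so circular wrap-around doesn't bite
    via_fft = np.fft.irfft(X ** 6, n=n_out)

    print(np.allclose(direct, via_fft))        # True, up to floating point noise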

I should have been more explicit by pointing out that, in the manner I've ( simplistically ) presented it : one period/cycle is one convolution. So what you've just seen is a demo of merely five signal cycles using a perfectly matching template and bang, I've got a neat & clear Gaussian spike. We at E@H are routinely convolving over, ooooh ..... probably millions of cycles ..... to try and get the magic needle in the haystack. Note carefully that what we mean by 'noise' isn't necessarily 'random non-gravitational rubbish' but could also be other genuine GW signals that aren't being selected by the chosen template of immediate interest. Some other template could pick up some other true source from the same data sets. One can distinguish several simultaneous conversations in the one room of people ( ie. the behaviour is linear ), assuming you know what the 'voices' are, of course. This is why one major axis in the search parameter space is frequency - thus your work units have a frequency tag. The naming of the current GW tasks as 'bucket' refers to a signal shape type .... which we are repeatedly convolving on a per-frequency basis. Thus a frequency choice implies a period ( the time for a regular signal to repeat ), and hence a time-axis 'marker' for selecting convolution intervals.

I can't over-emphasise how impractical many of these inquiries would be for the GW researchers in the absence of the 'supercomputing facility' that E@H represents.

Cheers, Mike.

( edit ) While a stretch of data can contain many 'real' signals, it is still true ( or at least overwhelmingly likely ) that each signal behaves towards any other as : 'over time it has no especial trend/preference in direction ( neither enhancing nor diminishing ) the signal from the phenomena of interest.' I guess it's reasonable not to assume correlation between different sky sources - the continuous waves from the separated binary systems we seek. Hence a signal in one context/template becomes a fraction of the background 'noise' for another .....

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Vit
Joined: 7 Jan 08
Posts: 23
Credit: 393350695
RAC: 0

Is png or gif prohibited

Is PNG or GIF prohibited here? The grey dots near the curves don't look good. I'm sure you're good at maths and know why these dots appear, and why JPEG isn't the best compression type for this kind of data.

Gundolf Jahn
Joined: 1 Mar 05
Posts: 1079
Credit: 341280
RAC: 0

I don't see any grey dots in

I don't see any grey dots in the pictures of this thread.

And there are gifs here as well, for the animated pictures.

Regards,
Gundolf
PS: Laptop screen, 1024x768 pixel, 32 bit colour depth; IE6

Computers aren't everything in life. ( Just a little joke )

Vit
Joined: 7 Jan 08
Posts: 23
Credit: 393350695
RAC: 0

Lets take for instance the

Let's take the last image, for instance ; its URL is http://downloads.mikehewson.id.au/conv_5.jpg and the content type is image/jpeg. I just checked this image with GIMP and the gamma shows I wasn't quite right - the dots are not grey, but coloured.

The animated image is surely a GIF, but it also has the characteristic artifacts, so I guess it was created from JPEG images.

Alex
Joined: 1 Mar 05
Posts: 451
Credit: 507288261
RAC: 80371

In the MIT-News I found an

In the MIT-News I found an article about a new FFT.
Under some circumstances, the improvement can be dramatic — a tenfold increase in speed.
Here is the link:
http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html
I think it could be of interest here.

Alexander

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 318749181
RAC: 408119

RE: In the MIT-News I found

Quote:
In the MIT-News I found an article about a new FFT.
Under some circumstances, the improvement can be dramatic — a tenfold increase in speed.
Here is the link:
http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html
I think it could be of interest here.


Hello Alex! Thank you for that information. :-)

They are describing an efficiency boost, but only if the signal spectrum is dominated by a small fraction of spikes and is elsewhere pretty quiescent. You could describe such a scenario as having a high signal-to-noise ratio : plenty of interest to listen to and not much 'static'. That way they can discard/pre-empt the processing of the quiet frequencies. So it's not a 'plain' transform preserving all the underlying components, but includes a filtering aspect. The E@H WUs effectively do that in separate steps : first a plain FFT, then an examination of the results to report the highest-strength candidates in frequency.

I don't think that helps for GW signals as these have high noise components that typically have rather more power than the 'true' signal. This you could label a low signal-to-noise ratio. So one strains to listen for the right tune when a lot of crashing and bashing is going on! My favorite analogy is the seashore : compare the gradual creep up/down the beach of the tidal movements to the tremendous vigor of the moment to moment wave action.

So the statement that

Quote:
"In nature, most of the normal signals are sparse"


is no doubt true, but alas we at E@H are searching for the exceptions to that signal style. So I think we'll have to await a more precise meaning of

Quote:
"Even if that number k is starting to get close to n — to all of them being important — this algorithm still gives some improvement over FFT."


Cheers, Mike.

( edit ) I've followed some links and found that k * log(n) applies here - it is n * log(n) for the FFT - so this is better to the extent that k < n. So I guess the message is that their algorithm is not worse than the FFT.

( edit ) That assumes that when k < n we have k * log(n/k) < n, but I'd have to sit down and work on that expression .... :-)

( edit ) Doh! Naturally log(n/k) < (n/k), so k * log(n/k) < k * (n/k) = n ; ie. the logarithm function 'flattens' ratios, as demonstrated by slide rules which convert a multiplication into an addition. Silly me .... :-0
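For a feel of the numbers ( purely illustrative values, and remembering these are only 'order of' operation counts ) :

    import math

    n = 2 ** 20                               # say, a million-point transform
    for k in (10, 1_000, 100_000, n):         # number of 'important' frequency components
        print(k, round(k * math.log2(n)), round(n * math.log2(n)))
        # k*log2(n) is tiny next to n*log2(n) when k << n, and comparable when k ~ n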

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 318749181
RAC: 408119

I should mention these

I should mention these arguments use 'order of' notation, O(something), where 'something' is some functional expression. This can be a complex topic, but it's a bit like a game of poker where you rank hands : an Ace high flush beats a King high flush etc. The underlying parameters - n and k in this discussion - are assumed to get sufficiently large so that 'lesser' functions can be ignored. There are many functions that 'muck about' with low n but explode as n increases eg. the factorial function. So it's a way of describing and comparing the general character, especially as regards efficiency or time to compute, of alternate methods.
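A quick illustration of why only the dominant term matters ( purely illustrative values ) : n * log2(n) stays tame while n! explodes as n grows.

    import math

    for n in (5, 10, 20):
        print(n, round(n * math.log2(n)), math.factorial(n))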

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
