# Fourier Stuff - Part 1

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,057
Credit: 112,576,090
RAC: 70,963

### Integration - alas we're

Integration - alas we're gonna have to do this. But the good news is I have lots of pictures today to assist our natural intuition. I tackled MATLAB and Paint and OneNote and Math Input Panel and Snipping Tool and .... :-) :-)

It's another way of adding up an infinite number of tiny little bits. With the series that we looked at last time we were summing a finite number of items, and the list of items was indexed by the natural or counting numbers. A first item, a second item, a third item etc. If you like, the terms within the series were a function upon ( a subset of ) the natural numbers, and so was the series total, because that depended on how many terms we chose to include. So with that type of sum we could see how the total behaved as the number of terms went beyond any bound ie. 'in the limit of an infinite number of terms'. Did the total 'converge' or not? Think of shopping trolleys and supermarket checkouts .....

See if you can visualise the set of all real numbers between zero and one including those endpoints. A number line is a good image with zero being on the left side and one on the right :

What we will do is take a function defined on that interval. Thus for each real number within that domain I have defined another in the range - which is another set, maybe a set of real numbers but it could be anything really. Perhaps complex numbers, but let's stay real for the moment.

Now we can draw above/below each number in the interval [0,1] the value that the function takes. This is of course called the 'graph' of the function. But I'll also draw a line from the horizontal axis to the function's curve. Note that this may be above or below the x-axis depending on the sign of f(x) for that given x. I've only drawn in a few examples :

So each line from say 0.175 on the x-axis to f(0.175) on the curve is infinitely thin. True 'geometric' lines have no thickness ( meaning the thickness is defined, but the value is zero ) but I want an answer that 'adds up' all the f(x) values across the interval. With a 'simple' understanding of that I'd wind up with an infinite answer if I just looked at the f(x) values, or a zero answer if I just looked at the width of each line. What I will finally get, by the procedure formalised by Bernhard Riemann, is the area between the curve and the horizontal axis :

Can you imagine those vertical lines getting more & more numerous ( to infinitely many ) and thus closer & closer together ( the gaps between go to zero ) and merging to form an area? This process gives us a sense of the 'size of a function across an interval' compared to another function :

Indeed for a finite interval to integrate across, the integral ( divided by the interval's length ) is essentially the average value of the function. We add in area below the x-axis as of opposite sign to those regions above the x-axis. To do this Riemann integral I will use limiting arguments, thus I start with some finite number of non-infinitesimally thin boxes strung along b/w the x-axis and the curve of f(x). In doing so I am approximating an infinitesimal thickness by a small width box thus :

But to do that I have to first define a partition on the interval [0,1] which is a finite sequence of some numbers in that interval with the following property :

0.0 = a(0) < a(1) < a(2) < .... < a(N) = 1.0 [ where n > m if and only if a(n) > a(m) ].

Now I chop up the interval on the x-axis with this partition and form two separate sums. One underestimates the area and the other overestimates the area ( I have actually cheated a little with this diagram, but it doesn't matter ) :

I define a refinement of this partition as another sequence having at least the numbers of the a(n) sequence but with a minimum of one extra value inserted somewhere within. This new value will not be equal to any of the terms of a(n). By bunging in one ( or more ) extra points this way will have the effect ( generally ) of increasing the lower area estimate while decreasing the upper area estimate :

Refinement certainly won't increase the upper estimate or decrease the lower estimate. By now you may be sensing the punchline, which is that if I keep refining the partitions on the given interval ( with 0.0 and 1.0 being called the endpoints of the integration ) the sum of the upper areas becomes closer to the sum of the lower areas, and the 'true' area of the curve is caught between the two estimates. So if we allow refinements with the number of cuts 'going to infinity' then the common number approached is deemed 'the area under the curve'. That's the plan at least, as some non-integrable functions won't behave well enough to allow that to happen.
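To see the squeeze happen numerically, here is a small Python sketch ( my own illustration, not from the post ) of lower and upper sums for f(x) = x^2 on [0,1], whose true area is 1/3 :

```python
# Lower and upper Riemann sums for f(x) = x * x on [0, 1].
# The true area is 1/3 : refining the ( here uniform ) partition
# squeezes the two estimates together around that value.

def riemann_sums(f, a, b, n):
    """Return (lower, upper) sums for an increasing f on n uniform sub-intervals."""
    width = (b - a) / n
    xs = [a + k * width for k in range(n + 1)]
    # f is increasing on [0, 1], so on each sub-interval the minimum
    # sits at the left edge and the maximum at the right edge
    lower = sum(f(xs[k]) * width for k in range(n))
    upper = sum(f(xs[k + 1]) * width for k in range(n))
    return lower, upper

f = lambda x: x * x
for n in (10, 100, 1000):
    lo, hi = riemann_sums(f, 0.0, 1.0, n)
    print(n, lo, hi)   # the gap hi - lo shrinks like 1/n
```

With n = 1000 cuts the two estimates already agree to three decimal places either side of 1/3.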

In this description I've left out a host of important operational detail. But one especially worth a mention is that when refining the [0,1] interval I must eventually chop it up as fine as possible. In technical terms this means that the maximum width ( supremum actually ) of the a(n)-to-a(n+1) sub-intervals - over all such sub-intervals for a given partition - must tend to zero as the interval refinements proceed. I can't keep refining indefinitely while keeping some particular sub-interval untouched by finer cuts, and that guarantees that ultimately each and every point in the [0,1] interval has an area term which it contributes to the total. This is what I mean when I say that the summation we are performing is indexed by the real numbers along the interval. The notation I use to indicate a Riemann integral obtained as above is as follows :

∫[A, B] f(x) dx

where the big 'S' on the far left is Latin/Greek/Whatever meaning summa or 'summation'. You could read this whole expression out loud as 'add up all the products of the height f(x) times the widths dx across the interval between A and B'. Every time you see this integral symbol just remember that it represents a limiting process, and you can't always do it for any arbitrary function. But there are some established general rules covering cases like continuous and bounded functions, and a host of stuff of an 'improper' nature that I won't go into. Anyway the 'A' and the 'B' marked at the bottom and top respectively of the integral sign are the endpoints or limits of the integration - start integrating at 'A' and go towards 'B'. The thing to the right of the summa, the integrand, is the function itself either in abstract form 'f(x)' or if it has some formula then write that in. The far right 'dx' in this case reminds us that we are indexing our sum with a named variable interval. One can do many variations upon this integral theme and it is so important to keep track of our indexing set(s).

There are some obvious properties of integrals that one can prove ( we won't ) like these :

[aside]I've been demonstrating the definite form of a function's integral. There is an indefinite integral which answers the question 'what function(s) would I differentiate to get f(x)?' The notation is just as above but without any limits ( A's or B's ) mentioned. This is because of The Fundamental Theorem of Calculus which states that differentiation and integration are inverse/reverse/opposite operations upon a function ( NB : not an inverse function, but an inverse operation on one ). As the derivative of a constant is zero, and the derivative of a sum is the sum of the derivatives, then : the indefinite integral ( if it exists ) is unique up to an additive constant.[/aside]

All this brings up a far deeper point though. The set of all natural numbers is infinite. The set of all real numbers is infinite - even if you only look at those real numbers in the interval [0,1]. However there is a special sense in which the real numbers are 'more infinite in number' than the set of natural numbers ( or integers or rationals for that matter ). The real numbers are un-countable. You cannot line up the real numbers of any interval and associate each real number with a corresponding natural number. Well you can try, but with any counting method you propose I will always be able to say 'Ha ha, you missed some ... '. In fact you will always miss infinitely many. The proof of this is elegant and beautiful etc, but way too arcane for us here. Suffice to say that indexing our sum with the real numbers, even by using series indexed by natural numbers in a limiting process, is a more 'complete' view of a function on some interval. I could match the real numbers using the real numbers as they are the same sort of infinity, indeed the set of reals [0,1] has the same number of elements as [1,2] with the obvious choice of association being 'add one to each member of the first set ...'.*

Coming up soon : More pfaffing** with integrals ( it gets easier ), a quick return to complex number properties, and a discussion of certain properties of vectors in three dimensions because there is an outstanding analogy for Fourier Stuff within.

Cheers, Mike.

* Just to annoy you I'll throw in that [0,1] also has the 'same number' of elements as [0,10] !! Why? Well using the rule 'multiply an element in [0,1] by 10 to get an element in [0,10]' gives me a one-to-one association of each member in the first set to a member of the second .... :-):-)

** The 'p' is silent. It means more stuffing about. :-) :-)

I have made this letter longer than usual because I lack the time to make it shorter. - Blaise Pascal

Matt Giwer
Joined: 12 Dec 05
Posts: 144
Credit: 6,891,649
RAC: 0

### RE: @Matt : I'm doing a

Quote:
@Matt : I'm doing a general backgrounder, 'Fourier for the common (wo)man' so to speak. So I start the elevator for those that maybe did math only up to mid high school, say. AFAIK E@H abounds with terrific FFT algorithms. One can express the sine and cosine as power series, where convergence is aided by alternating signs in the terms, but the real mathematical trick/beauty here is with i ( square root of minus one & next week ) that converts rising/decaying exponents to oscillating ones.

It has been my impression when trying to do too much with an FFT that the computational burden approaches that of a DFT. I assume the detection here is pushing the limits of doability. I estimated E@H grabs some 200M of RAM after detaching an older 512MB machine from the project. I have fooled with SETI@H from the beginning and its FFTs have never put this kind of a burden on a machine.

OTOH maybe I am missing something. I brought this up in the crunching group when I first started but I "solved" the problem before there were many responses/interest in the issue.

As you say, I got ahead of the program. My interest in this exposition would be qualitative in the computational burden. The texts I have consulted all start with showing the huge computational burden (as though it still had to be done by hand) of the DFT then say something like "fortunately there is the FFT" and never look back.

I am also curious as to the actual duration, period or whatever of the signal expected. I mean for binaries, identical mass, perfectly circular is obvious but anything off of that would differ from a sine wave by whatever GenRel says it should, which could be something preposterous so to speak. It is very late at night for me, if that is an excuse for the following, but a supernova should require so many Feynman diagram particles that there should be a gravity wave starting before the collapse.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,057
Credit: 112,576,090
RAC: 70,963

### Integration - try this

Integration - try this :

∫[A, C] f(x) dx = ∫[A, B] f(x) dx + ∫[B, C] f(x) dx

which states that for a given function ( assuming all integrals exist ) I can split the integration over an interval [A, C] into a sum of integrals over adjacent/contiguous sub-intervals [A, B] and [B, C]. [ Don't confuse this with adding the integrals for different functions over the same interval. ]

Also integration can take place with infinite endpoints/limits so that the following notations may make sense :

∫[A, +∞] f(x) dx        ∫[-∞, B] f(x) dx        ∫[-∞, +∞] f(x) dx

this is achieved by first finding an expression for the integral with some finite endpoints and then applying another limiting procedure ( the first was the Riemann slicing deal for the given fixed endpoints ) so that the endpoints increase without bound. As Buzz Lightyear says 'to infinity and beyond' - and seeing where that leads. Of course the outcome depends on whether such a limit does indeed exist. Remember that when you see the 'summa' notation used this represents ( at best ) a number, or a function upon number(s)*, as the result of a valid limiting procedure as per prior discussion. If it didn't work out - the Riemann limiting procedure fell over for some reason(s) - then the expression is undefined. [ Alas the word 'limit' is being used in two senses : one as the procedure or result from a limiting argument ( epsilons et al ), and the other as an endpoint of an interval. Sorry, but it's common usage. ]
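A numerical sketch of that second limiting step ( my own Python illustration, not from the post ) : integrate 1/x^2 from 1 out to a finite endpoint R, then let R grow. Analytically the finite-endpoint answer is 1 - 1/R, which approaches 1, the value assigned to the integral with an infinite upper limit :

```python
# Approximate the integral of 1/x^2 from 1 to R with a plain midpoint
# rule, then push the endpoint R outwards. Analytically the answer is
# 1 - 1/R, which tends to 1 'in the limit of an infinite endpoint'.

def midpoint_integral(f, a, b, n=100000):
    width = (b - a) / n
    return sum(f(a + (k + 0.5) * width) for k in range(n)) * width

f = lambda x: 1.0 / (x * x)
for R in (10.0, 100.0, 1000.0):
    print(R, midpoint_integral(f, 1.0, R))   # creeps up towards 1
```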

For a given integral it's a fair question as to 'what does it mean anyway?' As it geometrically represents an area on a graph plot then that means : 'what physical quantity might said area represent in a given problem?' I'd said earlier it is 'essentially the average value of the function' - hinting that when comparing two functions you could divide each function's integral by the common x-axis width ( the length of the interval over which the integrations were performed ) and that would give an average 'presence' of the function ( accounting for sign changes too ).

With the signals we deal with at E@H the following integral ( over one cycle of a periodic signal ie. from t = 0 to t = T, where T is the wave's period ):

∫[0, T] |f(t)|^2 dt

represents the energy/power within the wave received. The summation of the absolute value squared of the excursions of the wave amplitude from 'null'. If this integral exists then that is a sufficient pre-condition for the existence of the Fourier 'transform' ( whatever exact form you're after, discrete/continuous/whatever ). It is called 'square integrability' or similar. Actually we all live, allegedly and mathematically speaking, in an 'L2 Hilbert Space' - the 'L' is that Lebesgue guy I mentioned earlier and Hilbert is another famous math-head. What this physically means is that the signal/wave has finite power ( or finite energy per cycle ). So that's a perfectly reasonable limitation. I, for one, would be quite happy to only receive such signal types. Thus one can model physically finite powered signals with Fourier Stuff and not be too concerned about the validity of the math-machinations. Believe it or not : if you have square integrability then the rest follows - and this is essentially a brilliant conclusion from Twentieth Century math research!
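For instance ( a Python sketch of my own, with an assumed unit period T = 1 ) : one cycle of sin(2 * PI * t) has energy integral T/2, a finite number, so it is square integrable :

```python
import math

# Integrate |f(t)|^2 over one period T of f(t) = sin(2 * PI * t).
# Analytically the answer is T / 2 = 0.5 : the integral exists,
# so this signal is square integrable.

def cycle_energy(f, T, n=100000):
    dt = T / n
    return sum(abs(f((k + 0.5) * dt)) ** 2 for k in range(n)) * dt

f = lambda t: math.sin(2 * math.pi * t)
print(cycle_energy(f, 1.0))   # close to 0.5
```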

We will later come to this following beast :

(1/T) * ∫[0, T] |f(t)|^2 dt = SUM[k = -∞ to +∞] |C(k)|^2

called Rayleigh's Identity ( he produced this result in another context ). This essentially states that a signal's total energy (LHS) is the sum of the energies of its Fourier components (RHS) - as exp[i * THETA] has a magnitude of one, it is a given C(k)'s magnitude that prescribes a particular k'th component's mojo. It's actually an infinite dimensional form of Pythagoras' Theorem - the square of the longest side is the sum of the squares of the shorter sides !?! But more on that next time. :-) :-)

[ plus I've snuck in 't' and 'dt' - meaning time - for the variable in the integration domain. This would thus require 'k' to be frequency in inverse time. ]
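A discrete sanity check of Rayleigh's Identity ( my own Python sketch using a plain hand-rolled DFT, nothing to do with any E@H code ) : the mean of |f|^2 over the samples equals the sum of |C(k)|^2 over the components :

```python
import cmath
import math

# N samples of a real signal made of two sinusoids, and its DFT
# coefficients C(k) = (1/N) * SUM_n f[n] * exp[-i * 2 * PI * k * n / N].
N = 64
f = [math.sin(2 * math.pi * 3 * n / N) + 0.5 * math.cos(2 * math.pi * 5 * n / N)
     for n in range(N)]

C = [sum(f[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
     for k in range(N)]

lhs = sum(v * v for v in f) / N      # average power of the signal
rhs = sum(abs(c) ** 2 for c in C)    # energies of the Fourier components
print(lhs, rhs)                      # the two agree
```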

Complex numbers - I will remind - have a real part and an imaginary part. For any given complex number :

z = x + y*i

I can form it's complex conjugate, a fancy way of saying let's form another complex number from this one by changing the sign of the imaginary part. So 'y' becomes '-y' :

z̄ = x - y*i

where here I use the bar to indicate/annotate that z̄ is the complex conjugate of z. In the complex exponential form this means that if :

z = r * exp[i * THETA]

then

z̄ = r * exp[-i * THETA]

or that the phase/angle with the positive horizontal x-axis is reversed. This is because :

sin(-THETA) = -sin(THETA)

and

cos(-THETA) = cos(THETA)

and

exp[i * THETA] = cos(THETA) + i * sin(THETA)

thus

exp[-i * THETA] = cos(-THETA) + i * sin(-THETA) = cos(THETA) - i * sin(THETA)

Sine is called an odd function because the function's value ( in the range ) changes sign when the argument ( in the domain ) changes sign. Conversely cosine is called an even function because the function's value ( in the range ) does not change sign when the argument ( in the domain ) changes sign. The effect for our complex exponential when THETA changes sign is neither even nor odd ( a function doesn't have to be in either category ). Geometrically what happens is that the complex number becomes reflected in the horizontal axis ( or equivalently its phase changes sign - taking into account the issue of phase modulo 2*PI ) when we take its conjugate :

this is actually a diagram of Fourier co-efficients for a particular function ( see later ). Note that each red circle ( that denotes a complex number point on the Argand plane ) has a corresponding mirror image with the horizontal x-axis being the mirror line. Obviously taking the complex conjugate twice gets you back to the original number - the mirror image of the mirror image. The diagram also emphasises that these co-efficients, being the multipliers of our base sinusoid functions, combine two factors - the magnitude of the (co-)sinusoidal contribution ( distance from the origin ) plus the phase offset ( how far anticlockwise or clockwise you wind around from the positive x-axis direction ) indicating when to begin that 'wave component' with respect to other contributors. Think of the phase offset, I called it RHO(k) earlier, as a lead/lag/handicapping of one component within the group that combine to give our total signal.

The especial reason I bring this up is that we will be modelling real valued functions using summations of complex valued ones. So if the LHS of an equation - f(t) - is real then the other side must be too. How so? Well by assuring that all the imaginary parts of the complex numbers on the RHS, when added together, will come to a zero total. A way of achieving that is for every complex number on the RHS to have its complex conjugate there also**. Because :

z + z̄ = x + y*i + x - y*i

= 2 * x

which is a real number ( where I'll nag you again about taking care of the meaning of '+' and '-' in context ). Actually I can re-write as :

x = (z + z̄) / 2

Likewise :

z - z̄ = x + y*i - ( x - y*i )

= 2 * y * i

with

y = (z - z̄) / (2 * i)
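Python's built-in complex type will happily confirm these conjugate identities ( a throwaway sketch of my own ) :

```python
import cmath

# z plus its conjugate gives twice the real part, z minus its conjugate
# gives twice the imaginary part ( times i ), and conjugation flips the phase.
z = 3.0 + 4.0j
zbar = z.conjugate()          # sign of the imaginary part flipped

print((z + zbar) / 2)         # (3+0j)  -> x
print((z - zbar) / 2j)        # (4+0j)  -> y

r, theta = cmath.polar(z)
rbar, theta_bar = cmath.polar(zbar)
print(r == rbar, theta == -theta_bar)   # True True : same magnitude, reversed phase
```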

I'll introduce the idea of negative frequency now. If 'k' is positive then :

sin[2 * PI * k * t]

will behave in the 'usual' way as we expect the argument/phase of the sine function ( 2 * PI * k * t ) to increase as t does. Suppose k is deemed as negative then the phase will decrease as time increases. I reckon this is the simplest interpretation of negative frequency : either the wave 'runs backwards' in time or runs forwards with a reversed profile. Take your pick. We denote that mathematically by letting k be a negative number. Later on, in our summations we will have terms like*** :

k'th term = C(k) * exp[i * (2 * PI * k * t)]

for every k ranging from some large ( infinite ? ) negative integer to an equally large ( infinite ? ) positive integer. Note that the C(k)'s in general will be complex numbers too! For each such term having a given positive k value there will be another with the same magnitude k value but negative :

minus-k'th term = C(-k) * exp[i * (2 * PI * (-k) * t)]

and the requirement to get a real number only ( imaginary part zero ) when we add the k'th term to the minus-k'th term will be :

C(k) = C̄(-k)

in English phrasing : 'the Fourier co-efficient of a positive frequency component is the complex conjugate of the Fourier co-efficient of the corresponding negative frequency component'.

Let's show this :

k'th term + minus_k'th term

= C(k) * exp[i * (2 * PI * k * t)] + C(-k) * exp[i * (2 * PI * (-k) * t)]

= C(k) * exp[i * (2 * PI * k * t)] + C̄(k) * exp[-i * (2 * PI * k * t)]

[ the second term is exactly the complex conjugate of the first, as conjugating both factors conjugates the product ]

= 2 * Re(C(k) * exp[i * (2 * PI * k * t)])

= some real number

as we are adding a complex number to its own conjugate! :-)
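Here's a quick Python check of that conjugate pairing ( my own sketch with a plain DFT ; in an N-point DFT the index N - k plays the role of -k ) :

```python
import cmath
import math

# DFT of a real signal : the coefficients must satisfy C(-k) = conjugate
# of C(k), which for an N-point DFT reads C[N - k] = conjugate of C[k].
N = 16
f = [math.sin(2 * math.pi * n / N) + 0.25 for n in range(N)]   # any real samples

C = [sum(f[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
     for k in range(N)]

for k in range(1, N // 2):
    print(k, abs(C[N - k] - C[k].conjugate()))   # all of these are ~0
```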

In the mean time, I've revved up my ( rather old ) version of 3ds Max to try and get some animations going! Could be good. We are actually getting closer to Fourier constructing a waveform or three, I'm just trying to lay enough groundwork for that to make reasonable sense when we get to that .... :-)

Cheers, Mike.

* A function is fully defined by specifying all values taken in the range associated with each and every value in the domain. The reality that I may have to do some complex arithmetic, calculus included, to form such correlations doesn't change that aspect. If, at least in principle, a valid method exists to form the function's value in the range from any given argument in the domain - then that's OK.

** Naturally you may be quite rightly suspicious about going to all this trouble to introduce complex numbers into the system, only to have their imaginary parts 'zero out' of the matter in the end. What can I say? I hope you'll be rather intellectually richer for the pain, many general statements look beautifully simple and symmetric when based upon the complex number notation, and there are many other applications where you want to keep the imaginary part in the results. Quantum mechanics for instance ..... :-) :-)

*** Where did the RHO(k) go? It's still there, effectively part of the C(k) now. Let :

C(k) = A(k) * exp[i * RHO(k)]

so that :

C(k) * exp[i * (2 * PI * k * t)]

= A(k) * exp[i * RHO(k)] * exp[i * (2 * PI * k * t)]

= A(k) * exp[i * (2 * PI * k * t + RHO(k))]

because when you multiply two exponential functions together you add the exponents. So C(k) keeps track of the amplitude of the wave we are adding in - via A(k) ( which is thus purely real ) - and the phase adjustment/offset is still followed via RHO(k). There will in general be a different C(k) for each k. The key to Fourier Stuff for a given f(t) is to find all of the C(k)'s aka ..... ( drum roll, trumpet fanfare ) .... The Fourier Co-efficients .....
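In Python's cmath this amplitude/phase bookkeeping is exactly the polar form ( another sketch of my own ) :

```python
import cmath
import math

# C(k) = A(k) * exp[i * RHO(k)] : cmath.polar splits a coefficient into
# magnitude A(k) and phase RHO(k), and multiplying by the rotating factor
# exp[i * 2 * PI * k * t] simply adds the exponents, as described above.
C_k = 2.0 * cmath.exp(1j * math.pi / 3)      # A(k) = 2, RHO(k) = PI / 3
A_k, RHO_k = cmath.polar(C_k)

k, t = 1, 0.125
term = C_k * cmath.exp(1j * 2 * math.pi * k * t)
same = A_k * cmath.exp(1j * (2 * math.pi * k * t + RHO_k))
print(abs(term - same))   # ~0 : the two forms agree
```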

( edit ) Whoops ....

f(t) = SUM[k = -∞ to +∞] C(k) * exp[i * (2 * PI * k * t)]

where the sum on the RHS would normally be written with a large 'sigma', used to indicate a finite sum of terms ( this would correspond to the SUM[-k,+k] type of notation I indicated earlier ) but where we have taken the limit to an infinite number of terms!! This is a series limit. So both sides of this equation represent limiting values ....

I have made this letter longer than usual because I lack the time to make it shorter. - Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,057
Credit: 112,576,090
RAC: 70,963

### Vectors in 3D. Let's work up

Vectors in 3D. Let's work up from 1D though. The number line again :

I've included what is called a unit vector. The traditional notation is i but this is not the square root of minus one ( that is purple! ). This points along the direction of the positive x-axis, the base/tail of the vector is at zero and the head is at one ( drawn above the line to make the graphic clearer, but think of it as overlying the line ). Hence 'unit' means of length one. Any other vector along this line can be expressed as :

u = a * i

where a is any real number. With negative values of a available, I interpret such as flipping the vector over to point left-wards, say a = -1 :

However you could use some other unit vector and still produce all possible vectors in the line. Pretty straight forward are the following operations with :

u = a * i

v = c * i

then :

u + v = (a + c) * i

u - v = (a - c) * i

p * u = (p * a) * i

the results of which are some vectors remaining confined to the line, so we have the property of closure when performing these operations.

The ( ordinary Cartesian, not Argand ) plane :

since there are two dimensions we need two unit vectors ( again drawn to the side of the axes to make the graphic clearer, but think of them as overlying their respective axes ). So we keep i along the x-axis, and bung in j along the y-axis. Again each has its tail at zero and its head at one along the respective axis. Lengths are one. So

u = a * i + b * j

covers all possible vectors in this 2D space, with a and b ranging over all real numbers. However that + ( or - ) is another case of 'operator overloading' where the meaning depends on context. Generally it means add/subtract the vector on the left to the one on the right. This brings up the issue that i and j are orthogonal vectors. This means that you can't express one as a multiple of the other. So the following make no sense :

i = p * j

j = q * i

for any real values of p or q whatsoever, not even zero. Thus with i and j having this mutually orthogonal property the addition of multiples :

a * i + b * j

is an irreducible expression in the same manner as discussed for complex numbers. So here + means associate the values on the left and right. You could regress/progress to an ordered tuple notation (a, b) if you like instead. Now if I have some u and v such that :

u = a * i + b * j

v = c * i + d * j

then we define addition, subtraction and multiples in the obvious ways :

u + v = (a + c) * i + (b + d) * j

u - v = (a - c) * i + (b - d) * j

h * u = (h * a ) * i + (h * b ) * j

where the orange colored operators [ + and - and * ] refer to operations on real numbers. Sorry to painfully point such detail out, but it remains a fact that we base the properties of vector operations ultimately upon those of real numbers, with the added twist of orthogonality. As before I won't necessarily BBCode/colorise in future to hint the correct use as per context*.

With the extension from one dimension to two we now allow another operation upon two vectors. The result of this operation is a single real number, thus closure doesn't apply. I'll call it the inner product but it is known by other names like 'dot product'. Plus I'll use a generally uncommon notation for this purpose, but it's OK :

<u, v> = |u| * |v| * cos(PHI)

where PHI is the angle b/w the two vectors when they are placed with their tails at a common point such that u can be rotated in an anti-clockwise sense into the direction of v, |u| and |v| being the lengths of each vector :

and thus <u, v> is some real number because it is the product of three other real numbers - two magnitudes and a cosine**. Some general properties :

<u, v> = <v, u>

because cos(PHI) = cos(2*PI - PHI).

<u, u> = |u|^2

so 'the inner product of a vector with itself is the square of the length of the vector' because PHI = 0 ( the angle of a vector with itself ) implies cos(0) = 1

so specifically for a unit vector, say our basis vectors, <i, i> = <j, j> = 1

Now if two vectors are at right angles then PHI = PI/2 and cos(PI/2) = 0 hence

<i, j> = <j, i> = 0

These last two properties of a basis set when combined - lengths are one, and the vectors are mutually orthogonal - are called orthonormality.
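In component form ( see the second footnote below ) these properties are a one-liner to verify - a small Python sketch of mine :

```python
import math

# Inner product of 2D vectors via components, checking orthonormality
# of the basis {i, j} and that <u, u> is the squared length of u.
def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

i_hat = (1.0, 0.0)
j_hat = (0.0, 1.0)

print(inner(i_hat, i_hat), inner(j_hat, j_hat))   # 1.0 1.0 : unit lengths
print(inner(i_hat, j_hat))                        # 0.0     : orthogonal

u = (3.0, 4.0)
print(inner(u, u), math.hypot(u[0], u[1]) ** 2)   # 25.0 25.0 : <u, u> = |u|^2
```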

Now let p, q and r be any three vectors such that :

p = q + r

then for any s

<s, p> = <s, q> + <s, r>

so 'the inner product of the sum is the sum of the inner products', and the inner product scales with the vector lengths too ie.

<a * u, v> = a * <u, v>

I haven't mentioned a quite special vector, which in a sense doesn't really seem like one at all, because it has no length and no direction. We call it the null vector and in any number of dimensions it simply is the sum of all the basis vectors with the multipliers being zero :

0 = 0 * i + 0 * j

where

|0| = |0|^2 = <0, 0> = 0

in fact for any u at all :

<u, 0> = <0, u> = 0

So it sounds a bit weird but the null/zero vector is orthogonal to every vector by the above definitions, even though we struggle to find an angle to draw!! This is not inconsistent. It just means that 'orthogonality' encompasses those non-zero vectors at a mutual right angle, plus the additional case where at least one is the null vector. [ I flog such detail here due to my personal history learning this stuff ie. I wish someone had stated such properties more plainly to me long ago .... :-) ]

The requirement for linear independence within our set of basis vectors {i, j} can now be stated as follows :

a * i + b * j = 0

IF AND ONLY IF

a = b = 0

which in rough English means 'as soon as you move off the origin along the direction of one basis vector, you can't get back to the origin by using the directions of others' PLUS 'the only way to wind up at the origin using some combination of basis vectors alone, is by never leaving the origin in the first place'. So you can't write :

i = -(b/a) * j

OR

j = -(a/b) * i

because we would either be dividing by zero ( a or b in the denominator ) or implying that one of our unit vectors was the null vector ( a or b in the numerator ). Hence that encapsulates the idea that any of the basis vectors cannot be a multiple of others ie. it does not lie to any degree along the same directions, indeed is at right angles! Whew! :-)

Now let's take one of our 2D vectors :

u = a * i + b * j

and then form the inner product of both sides with i :

<u, i> = <a * i + b * j, i>

and evaluate :

= <a * i, i> + <b * j, i>

[ ...... because of linearity of inner product addition ]

= a * <i, i> + b * <j, i>

[ ...... because of scaling of inner product with vector length ]

= a * 1 + b * 0

[ ...... because of orthonormality of our basis set ]

= a

and likewise :

<u, j> = b

The moral being : if you want the component of a vector along some basis vector's direction then form the inner product between said vector and that basis vector. This is also known as taking the projection along that basis vector because it tells you how many multiples/fractions of the basis vector form the 'shadow' of our vector along that basis line. The shadow of one basis vector along another is of zero length and is a single point - like an absolutely vertical stick with the Sun precisely overhead.
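The projection recipe in Python ( my own sketch ) - build u from known components, then get them back with inner products :

```python
# Recover the components of u = a * i + b * j by projection :
# the inner product with each basis vector picks off that component.
def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

i_hat, j_hat = (1.0, 0.0), (0.0, 1.0)

a, b = 2.5, -7.0
u = (a * i_hat[0] + b * j_hat[0], a * i_hat[1] + b * j_hat[1])

print(inner(u, i_hat))   # 2.5  -> a
print(inner(u, j_hat))   # -7.0 -> b
```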

Vectors in 3D and more dimensions? Same patterns and ideas but just with more basis vectors. Mutatis mutandis. You lose the ability to visualise the totality above 3D of course. Which is why we define such geometric concepts by using algebra, as above, so that we may extend our lower dimensional scenarios sensibly and reliably.

So what's the gig? What the heck does this have to do with Fourier? Well, we will use the following great analogy :

- a function in the time domain, our signal f(t), will be a vector

- we will form a set of basis vectors from certain complex exponential functions

- we will use a single variable, frequency, to distinguish these basis functions amongst themselves

- there will be potentially an infinite number of these basis functions ( one per frequency choice )

- that basis will be orthonormal

- we will Fourier synthesise by expressing f(t) as a linear sum of basis vectors

- the Fourier co-efficients will be the projections/components along said basis 'directions'

- we will Fourier analyse ( ie. discover the co-efficients ) by using the inner products of our function with the respective basis functions

It is because of this analogy, which still exists with other representations but is harder to express as cleanly, that I have chosen to go via the route of complex numbers. There's only one key thing missing - I have yet to define the inner product as applied to functions, in order that they may collectively behave*** like a space of vectors!! Which is next time ..... :-) :-)
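To preview where this is heading, here is a Python sketch of mine. The inner product for functions isn't defined until next time, so take the form used here - (1/T) times the integral of f(t) against the conjugated basis function - as an assumed placeholder :

```python
import cmath
import math

# Treat f(t) = 3 * exp[i * 2 * PI * 2 * t] as a 'vector' and project it
# onto the basis functions e_k(t) = exp[i * 2 * PI * k * t] using the
# ( assumed ) inner product (1/T) * INT[0,T] f(t) * conjugate(e_k(t)) dt,
# approximated with a midpoint sum. Only the k = 2 projection survives.
def coefficient(f, k, T=1.0, n=2000):
    dt = T / n
    total = 0j
    for m in range(n):
        t = (m + 0.5) * dt
        total += f(t) * cmath.exp(-1j * 2 * math.pi * k * t / T) * dt
    return total / T

f = lambda t: 3.0 * cmath.exp(1j * 2 * math.pi * 2 * t)
for k in range(4):
    print(k, abs(coefficient(f, k)))   # the k = 2 projection is 3, the rest ~0
```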

Cheers, Mike.

( edit ) The reason I defined the inner product while specifying the direction of the definition of the included angle PHI, even though it made no difference to the value, was that the 'other' vector product ( cross product ) does have such a directional dependence. It's a good habit ....

* For those of you who, like myself, think that C++ is a keen & cool language to use then : effectively we've played the role of a compiler trying to disambiguate ( choose the correct function call from analysis of context ) the syntactic sugar that user defined operators represent.

** Because one can define an inner product in terms of vector components along axes, say with :

u = a * i + b * j

v = c * i + d * j

then

<u, v> = a * c + b * d

which some of you may be more familiar with. One can then start with that and use the above relation to define the angle :

PHI = arccos[<u, v> / (|u| * |v|)]

which reads 'PHI is that angle which has the cosine given by ....'. Which I think is a neat way of looking at the concept of 'angle'. Cleverly, if u and/or v is the null vector then its length is zero, and because that length appears in the denominator of our angle expression the whole expression is mathematically undefined! Which makes terrific sense, as how can you form an angle between a point and a vector, or between two points? Like some prior derivations here, it's all consistent and the same in the end! :-)
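That arccos definition of angle is trivial to check numerically; a quick sketch, which also shows the null-vector case genuinely is undefined (Python raises on the zero denominator):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def angle(u, v):
    # PHI = arccos[ <u,v> / (|u| * |v|) ]
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

phi = angle((1.0, 0.0), (1.0, 1.0))
print(math.degrees(phi))           # 45 degrees, as geometry says

# the null vector : denominator is zero, so no angle exists
try:
    angle((0.0, 0.0), (1.0, 0.0))
except ZeroDivisionError:
    print("angle with the null vector is undefined")
```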

*** One warning though : don't try to visualise outside of the analogy. Meaning it is doubtful that one can 'see' what it means for two functions to be mutually orthogonal, say. Or that it would be fruitful to do so. Even if it is explicable on algebraic grounds. Don't bother. You don't need to ...

I have made this letter longer than usual because I lack the time to make it shorter. Blaise Pascal

tullio
Joined: 22 Jan 05
Posts: 1,994
Credit: 32,256,599
RAC: 642

### In the LOGO language the

In the LOGO language the turtle is a vector which can exist in both 2D and 3D. See "Turtle geometry" by Harold Abelson and Andrea DiSessa, MIT Press (1980). It includes an approach to general relativity.
Tullio

ML1
Joined: 20 Feb 05
Posts: 336
Credit: 55,127,935
RAC: 1,174

### RE: In the LOGO language

Quote:
In the LOGO language the turtle is a vector which can exist in both 2D and 3D. See "Turtle geometry" by Harold Abelson and Andrea DiSessa, MIT Press (1980). It includes an approach to general relativity.
Tullio

That is some fast Turtle!

Does Terry Pratchett know about it?

:-)

Sorry, just couldn't resist...

This is all shaping up to be rather a good lecture... Hope you're clear of the Australian storm down-unda.

Keep searchin',
Martin

See & try out your OS Freedom! Linux Voice
The Future is what We all make IT (GPLv3)

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,057
Credit: 112,576,090
RAC: 70,963

### RE: This is all shaping up

Quote:
This is all shaping up to be rather a good lecture...

Thanks. I think I have to present a little bit more complex number stuff, discuss frequency choices for base functions, quickly scan over the Fourier method(s) landscape and then derive some very important/core/key integral results! Then I'll run through some of the text-book-ish classic examples : square wave, sawtooth, step or Heaviside function, and the Gaussian. Probably then move on to convolution and like operations ...... :-)

Quote:
Hope you're clear of the Australian storm down-unda.

We're fine and thank you for asking Martin. We've received a lot of 'run-off' to date from the more northern weather activity but no more than a nuisance by comparison.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter. Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,057
Credit: 112,576,090
RAC: 70,963

### Complex Numbers Redux The

Complex Numbers Redux The product of a complex number and its complex conjugate is :

z * conj(z) = r * exp[i * THETA] * r * exp[-i * THETA]

= r^2 * exp[i * (THETA - THETA)]

= r^2 * exp[0] = r^2 * 1 = r^2

which is the square of the magnitude of either. Earlier I had mentioned the 'cycle' :

i , -1, -i , 1, i , -1, -i , 1, i , -1, -i , 1 ...

formed by starting at i and repeatedly multiplying by i on the Argand plane. Note : -i = -i * 1 = -i * (i/i) = (-i^2)/i = 1/i

Multiplying by i is thus a rotation of PI/2 anticlockwise, as also :

i = 1 * exp[i * PI/2]

[ ie. cos(PI/2) + i * sin(PI/2) = 0 + i * 1 ]

and you know that multiplying in this (r, THETA) exponential form of complex numbers means that you multiply the radii and add the angles/phases. So it's not such a silly cycle after all, you just need another dimension ( albeit with imaginary lengths ) to understand! I'd also mentioned that whatever notation/formulation of Fourier Stuff you use : you always need two (co-)sinusoids, one shifted by PI/2 with regard to the other, to represent all signal types - because when sine is zero then cosine is (plus or minus) one and vice-versa, and no amount of finite multiplication/division will turn a zero into a non-zero value. So complex numbers, in the manner as outlined, neatly encapsulate this need! Well, that's the beauty that I see in it anyway ... :-) :-)
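Both facts above (conj-product giving r^2, and the quarter-turn cycle of i) can be verified in a couple of lines; a minimal sketch with arbitrary example values:

```python
import cmath

z = 2 * cmath.exp(0.7j)            # r = 2, THETA = 0.7 : arbitrary example
p = z * z.conjugate()
print(p)                           # ~ (4+0j), ie. r^2

# repeatedly multiplying by i : each step rotates PI/2 anticlockwise
w = 1j
cycle = []
for _ in range(4):
    cycle.append(w)
    w *= 1j
print(cycle)                       # the i, -1, -i, 1 cycle
```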

Periodic Functions A function is deemed periodic if the following holds over the entire domain of the function, with T a fixed real number called the period :

f(t + T) = f(t)

which means that you can slide the function ( or the axis ) back/forth a distance T and get the same shape/profile of the function. As I can do that as many times as I like, this also implies :

f(t + nT) = f(t)

where 'n' is any integer. OK. So our choice of basis functions for a given f(t) with its particular period T is now rather obvious. The basis functions should have period T. But there's more - the 'one period, many frequencies' rule.

Say I have a chap come and knock on my front door every day at midday. The period of this behaviour is 24 hours and the frequency is once-per-day. However I could have a lady who comes and knocks every twelve hours, at midday and midnite. She will also be at my front door with the daily fellow at midday, both knocking. Now round up a well trained llama who knows all about time, specifically eight hourly intervals. We could set this beast up to come and knock at midday, 8pm and 4am. The llama hence will also be there at every midday with the previous two visitors. Next add in a monkey who has a six hour habit of midday-6pm-midnite-6am ...... etc :-) :-)

I make this dopey analogy to emphasise that (a) each is actually a distinct beast but (b) they come at some same specific instant and (c) they re-arrive after a common interval. That common interval ( whence they all come back to the front door to knock at the same moment ) is the period of the slowest varying cycle. If I took an interest in my front door at times other than middays, I'd not be seeing or hearing at least one of my set of knockers [ yes, I know ..... but this is a kid-friendly forum, right? :-) :-) ]. I couldn't add in a 48 hour habit as that must miss every second day and thus misses out on the mutual re-union - if I'm after a consistent 24 hour behaviour overall.

Frequency is the reciprocal of the period, hence our fundamental frequency is :

1 / T

for instance if the period is ten seconds then the frequency is 0.1, meaning a tenth of a cycle goes past every second and thus it takes 10 whole seconds to go around a full circuit. If the frequency is five then each cycle takes only 1/5 = 0.2 seconds. Etc. Now, as we have discovered :

k / T

( where k is an integer ) will also recur every T seconds. For k > 1 these are generically termed 'harmonics'. Even negative 'k' values are OK too, as outlined previously. They'll just knock on the door every midday, but they are coming from the future and heading to the past! *
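The 'knock on the door' story is easy to check numerically: every integer harmonic k/T really does return to the same value T seconds later. A quick sketch (arbitrary T and t chosen for illustration):

```python
import cmath
import math

T = 10.0
t = 3.7                            # any instant will do

results = {}
for k in (1, 2, 3, -1):
    a = cmath.exp(2j * math.pi * (k / T) * t)
    b = cmath.exp(2j * math.pi * (k / T) * (t + T))
    results[k] = abs(a - b)        # zero : the component has repeated

print(results)                     # every harmonic is back at the door
```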

Thus the set of complex exponential functions that we will use to sum together in Fourier fashion to produce our periodic signal f(t), with period T, is :

exp[i * 2 * PI * k * (1/T) * t ] = exp[i * 2 * PI * (k/T) * t ]

and we will keep the magnitude and the phase of a given k'th contribution in a complex number multiplier :

C(k) = A(k) * exp[i * RHO(k)]

so that the entire k'th frequency dependent term in our Fourier series is :

C(k) * exp[i * 2 * PI * (k/T) * t] = A(k) * exp[i * RHO(k)] * exp[i * 2 * PI * (k/T) * t]
= A(k) * exp[i * (2 * PI * (k/T) * t + RHO(k))]

and thus the entire series is :

SUM[-K,+K](C(k) * exp[i * 2 * PI * (k/T) * t])

or in more formal notation :

f(t) = SUM[k = -K, +K]( C(k) * exp[i * 2 * PI * (k/T) * t] )

where we have one base function per frequency value**, and 'K' is the limiting value of the number of basis functions to include. This could be some finite number or infinite in our earlier defined sense***. For a vector space the dimensionality is the cardinality of any basis set, and for a given vector space all basis sets have the same number of elements.
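As a tiny consistency check of that series form, here is a sketch that synthesises a pure cosine from just the k = +1 and k = -1 terms, with hand-picked coefficients C(1) = C(-1) = 1/2 (this is the trivial sinusoid case touched on in the footnotes):

```python
import cmath
import math

T = 2.0
C = {1: 0.5, -1: 0.5}              # hand-picked example coefficients

def f(t):
    # the Fourier sum : SUM over k of C(k) * exp[i * 2 * PI * (k/T) * t]
    total = sum(c * cmath.exp(2j * math.pi * (k / T) * t)
                for k, c in C.items())
    return total.real              # imaginary parts cancel pairwise

print(f(0.3), math.cos(2 * math.pi * 0.3 / T))   # the two agree
```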

Now a curious but nonetheless important point arises : what of k = 0 ? That is the term ( noting that 0 = -0 of course ! ) :

C(0) * exp[i * 2 * PI * 0 * (1/T) * t]

= C(0) = A(0) * exp[i * RHO(0) ]

Firstly, for real valued signals C(-k) is the complex conjugate of C(k), so we need C(0) = conj[C(0)], or

A(0) * exp[i * RHO(0) ] = A(0) * exp[-i * RHO(0) ]

remembering the A(k)'s are all defined as real numbers, thus

exp[i * RHO(0) ] = exp[-i * RHO(0) ]

which can only be true if RHO(0) = 0 ( or that plus/minus some 2 * PI multiple, but that's all the same again ). So C(0) is a purely real number, there is no phase offset****, and it is independent of any period/frequency of your signal. It simply represents a constant adjustment of the signal value above or below which all other fluctuations take place.

Depending upon your area of expertise this may be given a special name, like in electrical engineering it's called the 'DC component' as opposed to the 'AC component'. But beware that while we are using time based signals with frequency components in our discussion, Fourier Stuff is applicable well beyond that. You could Fourier decompose a photograph's pixel properties into spatial periods and frequencies, and along two directions for that matter. You could look at electron probability distribution functions in 3D crystals. Even all the various sources of 'noise' at the LIGO IFO detectors can be studied as above, like the harmonics of the wall AC power supply at 60 Hz. Etc. If it's periodic you can use Fourier, and often if it's not periodic too.
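For the curious, the DC claim is easy to check numerically. The sketch below uses the standard analysis formula C(0) = (1/T) * INTEGRAL[0, T]( f(t) dt ) - a result the thread hasn't derived yet, so take it on trust here - approximated by a simple Riemann sum:

```python
import math

T = 1.0
def f(t):
    return 0.5 + math.sin(2 * math.pi * t / T)   # DC level 0.5 plus a wiggle

# Riemann sum for C(0) = (1/T) * integral of f over one period
N = 100000
C0 = sum(f(n * T / N) for n in range(N)) / N

print(C0)      # ~ 0.5 : the sinusoid averages away, the DC level remains
```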

Cheers, Mike.

* When you borrow your best friend's time machine, simply say that you'll have it back to them in no time ...... :-) :-)

** To be a devil's advocate, suppose we attempted to insert into our series a term with a non-integral multiple of the fundamental frequency. Say with frequency s = 0.9 * (1/T) or s = SQRT(2) * (1/T). What then? Well, for any choice whatsoever of phase offset then T seconds later that component would not return to the same functional value thus, unless otherwise adjusted, f(t + T) = f(t) no longer holds. So our representation on the RHS is not actually equal to the function being modeled on the LHS. Could an adjustment occur? Well, yes. Make C(s) = 0 ( or C(-s) = -C(s) which says the same thing ) thus that frequency component will have no effect on the sum. That is : why bother including s in the first place .... :-)
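The devil's advocate case is also easy to test numerically: with s = SQRT(2) * (1/T), the component plainly fails to repeat after T seconds, confirming it has no place in the series. A minimal sketch:

```python
import cmath
import math

T = 1.0
s = math.sqrt(2) / T               # a non-integral multiple of 1/T

t = 0.25                           # any instant will do
a = cmath.exp(2j * math.pi * s * t)
b = cmath.exp(2j * math.pi * s * (t + T))

print(abs(a - b))                  # clearly non-zero : periodicity broken
```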

*** If you didn't know in advance how many terms you needed, one could assume an infinite series and then subsequent analysis/calculation would show the C(k) terms above some k value would be zero thereafter. After all, the number of terms to include in the case of modeling a sinusoid as our signal would be ...... one! [ Or two, if you go to negative frequencies while using the C(-k) = conj[C(k)] aspect.] The very same sinusoid would be the 'base' function and the 'complex' multiplier would be 1 ...... a trivial case for sure, but consistent! :-)

**** You can view this as the phase from which you measure all other phase offsets.

I have made this letter longer than usual because I lack the time to make it shorter. Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,057
Credit: 112,576,090
RAC: 70,963

### For functions we need to

For functions we need to define the inner product to analogise with vector spaces. Don't panic if you don't follow the detail here, think in broad strokes.

Try the following imagery to help your understanding. Take some periodic signal as a function of time :

and via Fourier Analysis find the C(k)'s which are thus plottable as follows. Think in 3D ( this is not an actual exact example ) :

........ the red axis is the frequency in units of 1/T, +ve coming towards and -ve going away.

........ adding in the Argand plane orthogonal to the frequency axis and at frequency value of zero.

........ put in the C(k)'s that were calculated, which have a magnitude and a phase remember, as arrows : each with their backside glued to the frequency axis where appropriate ( ie. at k/T ) and the tip pointing perpendicularly away from the frequency axis. As f(t) is real, C(0) will be somewhere along the line of the x-axis ( meaning it has no imaginary component and maybe C(0) = 0 ). Also please note the general conjugacy of C(k) with C(-k). If you like, the projection of each such k'th arrow onto the Argand plane at k = 0 will give the magnitude and phase for that C(k) :

See if you can visualise such a 'double-ended hairbrush' where there is an infinite number of positive and negative frequencies !! :-) :-)

Of course one can graph the goings-on in a planar fashion, like |C(k)| vs frequency, or |C(k)|^2 vs frequency or whatever. Even though you may be only interested in the power spectrum ( per frequency ), don't forget Mister Phase is in the mix ! Mr Phase is especially important for systems where an 'output' signal somehow influences an 'input' signal ( or feed-forward for that matter ), or where the signal in one interferometer arm is going to be compared to that within another.
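For a concrete feel for magnitude-plus-phase spectra, here is a hand-rolled discrete analogue - a tiny DFT over 8 samples, the finite cousin of the continuous series above, not the series itself - applied to a cosine carrying a PI/4 phase offset:

```python
import cmath
import math

N = 8
# one cycle of a cosine with a PI/4 phase offset, sampled N times
x = [math.cos(2 * math.pi * n / N + math.pi / 4) for n in range(N)]

def bin_coeff(k):
    # discrete analysis : inner product with the k'th sampled exponential
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
               for n in range(N)) / N

for k in range(N):
    Ck = bin_coeff(k)
    if abs(Ck) > 1e-9:
        # only bins 1 and 7 ( ie. k = +1 and k = -1 ) survive,
        # each with magnitude 0.5 and phase +/- PI/4 : conjugate partners
        print(k, abs(Ck), cmath.phase(Ck))
```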

OK, if anyone has stayed up to speed ( even vaguely ) so far .... well done!! :-) :-)

Time to ( finally ) bite into some specific examples I think. I'll generally just plonk the functions into the Fourier Machine from now on - so that I'll tend to avoid integrals and sums and notation-itis - and thus visualise the results of the transforms and discuss that.

Cheers, Mike.

( edit ) Actually the formula I've given for the anti-derivative of the exponential is generally too bold, but it'll do for us here ...

I have made this letter longer than usual because I lack the time to make it shorter. Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,057
Credit: 112,576,090
RAC: 70,963

### OK let's get dirty with an

OK let's get dirty with an actual example. I've chosen the square wave or 'box' function which looks like this :

and this function, after one thing and another, can be seen to have the Fourier co-efficients of :

- the constant, no-frequency, component C(0) is equal to one half. Which is a relief as : given that the function spends one-half-of-a-cycle at one and the other-half-of-a-cycle at zero, then 0.5 is the average of those two levels. So this confirms our prior expectation that the 'DC component' is the average value above/below which all other variations take place. Nice cross-check against the calculations too .... :-)

- while ( non-zero but ) even values of k give C(k) = zero, the odd values of k give C(k)'s as a diminishing sequence of sign-alternating values.

Remember that each C(k) is the multiplier of its respective time varying complex exponential. Perhaps we can visualise this better by looking at the accumulating effects of the first handful of partial series sums ( ie. a finite number of terms added together ) :

- which if I've animated correctly should show the progressive addition of terms from k = 0 to k = 21, then back down to k = 0, and forever cycling. Note that as the box function has sharp pointy bits then we can't use a finite series to exactly represent it ( discontinuity in derivatives ... ), but only successively approximate. An infinite series will do, while remembering that we don't actually add up an infinite number of terms - but use the limiting process ( epsilon et al ) to find what is the boundary that the finite series gradually approach. You can see that the box outline is pretty well formed after only relatively few finite approximations, and the difficult bits that will take the 'longest' to converge are the top and the bottom of the 'cliff' profiles. Those corners need ( infinitely ) high frequencies. I mentioned earlier about finite energy per cycle, which is still achieved here as : while arbitrarily high frequencies are included they are progressively ( and savagely! ) penalised by the magnitude of the C(k)'s - either zero if even k, or like 1/k for odd k's AND WITH ALTERNATING SIGNS so the next item partly nobbles the last one. Neat magic huh? :-)
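Anyone wanting to replicate the animation's arithmetic can do so with a short script. This sketch computes the box function's C(k) numerically (by Riemann sum, so no closed form is assumed) and evaluates a partial sum mid-plateau, where the series should settle near 1:

```python
import cmath
import math

T = 1.0
def box(t):
    # 50% duty cycle : 1 on [0, T/2), 0 on [T/2, T)
    return 1.0 if (t % T) < T / 2 else 0.0

N = 4000
def C(k):
    # Riemann-sum version of the analysis integral for coefficient k
    return sum(box(n * T / N) * cmath.exp(-2j * math.pi * k * n / N)
               for n in range(N)) / N

def partial(t, K):
    # finite Fourier sum from k = -K to +K
    return sum(C(k) * cmath.exp(2j * math.pi * (k / T) * t)
               for k in range(-K, K + 1)).real

print(abs(C(0)))           # ~ 0.5 : the DC level, the box's average
print(abs(C(2)))           # ~ 0   : even k vanish
print(partial(0.25, 21))   # close to 1, the box's value mid-plateau
```

Printing |C(k)| for odd k shows the 1/k fall-off the post describes: the high frequencies needed for the cliff edges are there, but savagely penalised.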

So if you can imagine/hypothesise an otherwise quiet gravitational universe but with some bloody great grumpy giant perpetually opening and shutting some massive 'gravitational door', thus producing propagating 'box' waves then : the Fourier decomposition into frequencies would have the above features/patterns. Of course we don't have such a scenario, however we reckon we have 'thumper' events like that. Say two enormous ( multi-solar mass ) objects swirling around each other in close proximity. One day we may hear them .........

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter. Blaise Pascal