Fourier Stuff - Part 1

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,578
Credit: 306,914,202
RAC: 178,279
Topic 195552

Fourier Stuff ( FS ) - as E@H is essentially a part of the LIGO and pulsar signal processing pipelines, I thought I'd explain some of the deeper detail about the mathematical basis of such things ( a sequence of mini-tutes, in fact ). Please feel free to enquire, correct etc as you please ( but keep in mind the thrust is an entry-level casual discussion, for which we will avoid the rigour police ). I've been reading and thinking about this for a while, so we'll kick this off with the New Year! :-) :-)

Now about one half of the computation in an E@H work unit goes into producing a 'fast Fourier transform' of some data set, with the remainder doing something with it ( comparing with candidate signal templates and whatnot ). I'm not going to describe the algorithm specifics ( I'm not smart enough or embedded in the right loop for that ), so this is a 'backgrounder dealing with the principles'. Also the topic is a fantastically rich network of ideas, so I'll try to keep to some core thread and thus will be ignoring ( or only giving lip service to, or bare mention of ) some pretty clever sub-topics. Like most of mathematics, the connections mean you have to limit some discussion or suffer never finishing. But the lessons are definitely applied in some way, shape or form at E@H.

In mathematics it is often the case that one creates a function from other functions. A function ( for our purposes ) generally being a list of values with some label or index, like a series of stock prices for some company as the trading days go by. The 'index' for the data is the date, say, and the 'value' is the price at each date. The 'function' is the totality of such index-value associations ( for those computer minded this is a bit like an array, except that strictly there can be an infinite number of array elements, not necessarily indexed by integers ). You can define a function by a list/table outlining such associations, or if there is some rule ( an algebraic formula to compute a given value from a given index ) then that may do nicely also. The set from which indices are drawn is the domain of the function, and the entire set of values taken on is the range ( or technically the image ) of the domain as 'mapped' by the function.
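For the computer-minded, here's a minimal sketch in C ( my own illustration, nothing to do with E@H code ) of a function as a finite table of index-value associations :

#include <stdio.h>

/* A 'function' as a finite table : the array index plays the
   role of the domain, the stored prices are the values taken on. */
double price_on_day[5] = { 10.2, 10.5, 10.1, 10.8, 11.0 };

int main(void)
{
    for (int day = 0; day < 5; day++)
        printf("day %d -> price %.1f\n", day, price_on_day[day]);
    return 0;
}

A mathematical function simply drops the restrictions that the table be finite and the indices be integers.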

Now one may look at a function and ask 'what other functions could that be the composition of?' Hence there are two procedures that are inverses of each other - synthesis ( add pieces together to make a whole ) and analysis ( break down into pieces ). In Fourier work this is a linear procedure so that it uses sums ( series or integrals ) with properties like 'the integral of the sum is the sum of the integrals', or if you like one can swap the order of operations ( summation/integration ) and get the same result ( so these operations commute ). This fairly naturally follows a lot of real world behaviour where, roughly speaking, the sum of several voices can be distinguished as contributions from separate voice boxes even though heard at the same time in a crowded room, and not as some weird single voice speaking in separate but simultaneous tones ( although there are some birds whose syrinx does exactly that ! ). So to the extent that phenomena can be modelled along such linear lines FS is very, very useful. I use 'Fourier Stuff' as a term for the whole shebang, whereas 'Fourier Analysis' is the typical phrase even though 'analysis' is only one part of the deal.

The original problem Fourier dealt with was heat flow around a circular ring. But before we get to that let's consider a related matter which might be an easier introduction to the synthesis/analysis paradigm ( 'cos Fourier has trigonometry which can be daunting for some ie. deserves some preparation ). This is a closer look at what it means to combine functions or break one down into parts.

Imagine working in a shop or a bank and giving change for some transaction. You have a set of notes and coins which have to add to a certain value. And while the way to do that may not be unique, you'd be thinking along the lines of 'a two dollar note/coin with 20 cents plus 50 cents plus 10 cents and a five, which is $2.85 altogether'. This is about the right idea we want, but not quite. We want what is called a basis set which has the following properties :

- I can make up any amount that I please ( or any legal tender at least, however defined )

- each element in the basis set cannot be expressed in terms of any combination of the others according to the particular operation we will use to combine them ( so this is where the currency example falls over, as say 10 cents is equal to the sum of two fives )

- having expressed any value as the sum of some combination of basis amounts ( which can mean several of each single basis amount ) then the way to do this is unique ( ignoring the order in which I list the particular sub-amounts ).

Note that these properties are with respect to some choice of basis set, if you change the set then you have to re-examine and re-think. For those in the know the subject of 'vector spaces' will come to mind here.

Let's look at a beast called a power series. No it's not some sort of arrangement of electrical batteries, but a sum of terms each of which is a power of some number. Call that number 'x' : our generic variable used to represent some element within the domain of a function. So

x^[0] = 1

x^[1] = x

x^[2] = x * x

x^[3] = x * x * x

x^[4] = x * x * x * x

....

x^[k] = x * x * x * x ..... x * x * x * x [ with a total of k x's listed ]

etc ....

'^' means 'to the power of' or 'how many times do I multiply x by itself?' - with the answer being written between the square brackets '[' and ']'. The '*' is our ordinary operation of arithmetic multiplication, and I'm sure you know what '=' means ( the thing on the left side is equal to the thing on the right ). A good half of understanding maths is coping with the notation ( how you write things down ) and all I've done is introduce a SHORTHAND way of writing 'multiply something by itself some specified number of times'. It goes by the glorious name of exponentiation, with that 'k' number being the exponent. Obviously. ( rolls eyes ... ) !! :-)
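In code that shorthand is just a loop - a sketch of x^[k] by repeated multiplication ( non-negative whole number exponents only ) :

/* x^[k] : multiply x by itself a total of k times ( x^[0] = 1 ). */
double power(double x, unsigned int k)
{
    double result = 1.0;
    while (k-- > 0)
        result *= x;    /* one more factor of x */
    return result;
}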

Here's a crucial point. Yes I can take one of these power functions say x^[5] and express it in terms of the others - that would be by using multiplication, say

x^[5] = x^[3] * x^[2]

{ think about that, whether I multiply five times in the one go is the same as doing it three times and then another two }

But when we attempt to add any one or more of these basis functions to try and get another of them, in fact that will fail. Can't be done. In more complicated language this is termed linear independence, but remember that phrase refers to a particular set of functions and a given operation used to combine them.

A power series is some combination of these guys to make up some function, in general annotated thus :

f(x) = a_0 * x^[0] + a_1 * x^[1] + a_2 * x^[2] + ... + a_k * x^[k] + .....

What have we got? On the left is 'f(x)' which is generally read out as 'a function of x', meaning this is one of those functions where we have an overall 'rule' to give us a 'value' ( the f(x) ) for each 'index' ( the x ). What is the rule? Answer : everything on the right side of the '='. It's a recipe. I have the ingredients, that whole ( infinite ) set of power functions. I have the specific rule for this function - the precise combination of each, that is all those 'a_1' and what-nots. Each is a number telling me how much of each x^[k] to take before adding it to the others in the mix. So I have 'a_0' lots of x^[0], 'a_1' lots of x^[1], 'a_2' lots of x^[2] etc right up to 'a_k' lots of x^[k] - and beyond. We use this 'k' based notation to illustrate the general pattern of the terms. AND I haven't said whether the sum is of a finite or infinite number of terms in the recipe. [ Relax about infinity ].

My key point is : if we agree in advance to use these power functions, and if we agree that the operation to combine them ( into such a series ) is addition, then all I need to do to uniquely specify a particular function represented thus is to list all the 'a_k' values. Whew!
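As a sketch of how such a recipe gets used, here's a little C routine ( my own, truncated to finitely many terms of course ) that evaluates f(x) given the list of 'a_k' values :

/* Evaluate the truncated power series
   f(x) = a[0] + a[1]*x + a[2]*x^2 + ... + a[n]*x^n.
   The coefficient list a[] is the whole 'recipe'. */
double series(const double a[], int n, double x)
{
    double sum = 0.0;
    double xk  = 1.0;          /* x^[0] */
    for (int k = 0; k <= n; k++) {
        sum += a[k] * xk;      /* 'a_k' lots of x^[k] */
        xk  *= x;              /* step up to x^[k+1] */
    }
    return sum;
}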

So this is what is generally meant by : the action of combining them is 'synthesis' or construction, whereas the act of breaking up some function into sub-parts is 'analysis'.

Only : FS doesn't use these power functions ( although, as we'll see later, sine and cosine can themselves be written as power series ) but the sine and cosine functions as the basis set, or a very clever amalgam of those two called the complex natural exponential function. Of which more later ..... :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,578
Credit: 306,914,202
RAC: 178,279

Fourier Stuff - Part 1

Why aren't power functions 'good' for our purposes? We want to study repeating signals. Pulsars - either electromagnetically or with gravitational waves - have rhythm, hence their name. So if you sit and listen to one on your radio receiver ( or via a gravitational wave aerial ! ) you will see the 'voltage' go up and down, up and down, up and down .... in an apparently endless cycle. So do power functions behave like this? Can we use some summation of power functions to model or mimic a repeating signal? Err, no .... [ you can tell that by the rhetorical construct of the question :-) ]

I'm thinking of a number, non-negative to be definite, we'll call it 'x'. I multiply it by itself and the result is smaller than x. I do that again and the result is smaller again ... rinse/repeat and it keeps getting smaller ... what can one say about x ?

x < 1

Second case. I have a positive number called 'y' and I keep multiplying it by itself and the result gets bigger. You'd be right if you thought of :

y > 1

OK, third case. A number 'z' ( again non-negative ). Now we keep getting the same answer, the original z value, no matter how many times we multiply it by itself. Thus :

z = 1

OR

z = 0

[ did you expect the zero case ? All of those power functions ( for k of at least 1, that is ) send 0 to 0 and 1 to 1 .. :-) ]

Now try the case of -1. What do you get with successive multiplications ?

-1, 1, -1, 1, -1, 1, -1, 1, -1 ........

So we've got (a) smaller (b) bigger (c) the same and (d) stuffed if I know. The upshot is that if I do choose to attempt to represent some function f(x) as some power series expression, that is only valid with certain restrictions ( meaning : to make sure that stuff on the left side of the '=' sign is truly equal to stuff on the right side ). By the process of adding up infinite sums ( which I am avoiding an explanation of ) I have to specify what is variously called an 'interval of convergence' or 'radius of convergence' or similar phrase. It doesn't necessarily work for any arbitrary 'x' value, when trying to represent ( analyse, decompose ) any 'reasonable' but arbitrary function.
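You can watch all four behaviours with a few lines of C ( a throwaway sketch ) :

#include <stdio.h>

int main(void)
{
    /* Repeatedly multiply each starting value by itself :
       0.5 shrinks, 2.0 grows, 1.0 sits still, -1.0 flip-flops. */
    double start[4] = { 0.5, 2.0, 1.0, -1.0 };
    for (int i = 0; i < 4; i++) {
        double v = start[i];
        printf("%+.1f :", start[i]);
        for (int n = 0; n < 6; n++) {
            printf(" %+9.4f", v);
            v *= start[i];
        }
        printf("\n");
    }
    return 0;
}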

[ as a hint/snifter : Fourier doesn't work all the time either, but is easier to cope with. Basically if a function will not flip off to infinite values ( especially as the 'x' value or index gets large ), nor will vary too wildly ( rate of change of the function in some interval ) then you have good chance of obtaining a valid Fourier representation. ]

So having discussed power functions in order to introduce the synthesis/analysis scenery, let's put them to the side. An evident thought comes to mind ( drum roll .... ) : if we want to represent repeating functions using some combination of basis functions, THEN why don't we choose a set of basis functions that have a repeating character in the first place ?? Ta da!! :-0

OK, stop stalling Mike, let's do trig. Trig is triangles, but much more besides. Spend some time staring at this diagram :

[ It's a copy of the original PacMan design template. A circle with mouth .... :-) ]

which defines the size of angles in a way called the radian measure, an alternate scale to degrees ( not temperature ) for instance, as follows :

THETA = ARC_LENGTH / RADIUS = b / r

A ratio of two lengths. Now you know that the full circumference of a circle is 2 * PI * r, so since an arc length is some fraction of a full circumference then we have :

THETA = ( 2* PI * r ) / r = 2 * PI ( full circle, otherwise called 360 degrees )

THETA = ( PI * r ) / r = PI ( half circle, otherwise called 180 degrees )

THETA = ( 0.5 * PI * r ) / r = PI / 2 ( half of one half of a circle = quarter of a circle, otherwise called 90 degrees )

[ PI / 4 = 45 degrees, PI / 3 = 60 degrees, PI / 6 = 30 degrees ...... ]

or indeed any other intermediate value. Think about it. Open up the THETA 'jaws' to some suitable part of a circle and divide the arc length by the radius. BTW if the radian measure of an angle is 1.0 then that corresponds to ~ 57.3 degrees. From now on I want you to think in radian measure, please. Note that a bigger or smaller circle with the same size gape/angle of the jaws will have the radius ( whatever that is ) divide out of numerator and denominator.
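If it helps, the degree/radian conversions are one-liners in C ( assuming your compiler's math.h exposes M_PI, which most do ) :

#include <math.h>   /* for M_PI - not strictly standard C, but common */

double deg_to_rad(double deg) { return deg * M_PI / 180.0; }
double rad_to_deg(double rad) { return rad * 180.0 / M_PI; }
/* e.g. rad_to_deg(1.0) ~ 57.2958 and deg_to_rad(90.0) = PI / 2 */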

If you go for a walk around a circle, what will you eventually come back to? The same spot! This is the source of all the regularity that follows. Adding or subtracting ( walking in the opposite sense around the circle ) 2 * PI worth of angle gives the same answer for any calculation or expression that relies on angle. By tradition going around anti-clockwise is said to increase the angle and going around clockwise will decrease. It's one of those historical accident things, it could have been designated in the converse sense. But it wasn't, so there! You see this a lot in maths, like drawings of number lines which go more positive to the right, choice of right-handed sense for 3D Cartesian co-ordinate systems etc. Anyway, if I say minus 8 * PI worth of angle that means 'go around 4 full times clockwise'.

Next time : the sine and cosine functions.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,578
Credit: 306,914,202
RAC: 178,279

Are readers following the play here? Any questions? I'll make today's entry and then leave it for a week or so to (a) prepare for the next level of explanation and (b) give you a chance to ponder/absorb. :-)

Now I do hope that you are getting onto the idea that 'signal analysis' at E@H is trying to find some regular repeating signal in amongst a lot of other stuff recorded at the receivers. So you could phrase it as either 'garbage elimination' or 'signal extraction' because of the mixture of signal with noise. If you knew precisely in advance exactly what the signal was then life would be a lot simpler ( in fact 'templates' are educated guesses at precisely that ). So to clean up an old Led Zeppelin studio track one has the big advantage of knowing what Led Zeppelin is supposed to sound like. Why? Because it's an artificial noise that we construct ( sorry to put it that way LZ ). So we need an efficient protocol to step through many, many natural ( non-man-made ) signal possibilities embedded within some track of data. Hence Fourier.

Sines and cosines. I sense trepidation and furrowing of brows. Relax. If you can cope with radian angle measure then you'll be OK for these guys. More ratios of two lengths, the idea being to represent the shorter sides of a triangle as a fraction of the longest side. Check out this diagram :

A circle is defined as that set of points being some fixed distance from a single designated 'center' point. I'm going to deal with a circle that has a non-zero radius ( r > 0.0, otherwise we're just talking of a single point ! ), so the ratios x/r and y/r are always going to be defined, as the denominator is never zero. Note that r is the length of the longest side, so both of those ratios will always be at most 1 in absolute size. Look at this triangle with a very small angle near the origin :

thus x is about the same as r, so x/r ~ 1.0 ( but slightly less because r > x ). While y/r is close to 0.0 . If THETA is zero then x/r = 1.0 and y/r = 0.0 exactly ( the triangle 'flattens' to two horizontal lines overlying each other and the x-axis ). Look at this triangle with nearly a right angle ( THETA that is ) closest to the origin :

This time x/r is about 0.0, and y/r is just less than 1. So if THETA is PI/2 ( = 90 degrees, remember ) then x/r = 0.0 and y/r = 1.0 exactly ( another 'flat' triangle of two vertical lines on the y-axis ). Try a triangle with THETA = PI/4 ( = 45 degrees, but I'm going to stop reminding you of these degree conversions soon ) and sorry about the white-out :

as x and y are equal here, x/r = y/r ~ 0.7071 ( = 1/SQRT[2] actually ). We can continue this assessment not just for angles between 0.0 and PI/2, but right around the circle. By convention we designate the positive x-axis as the zero direction for THETA, more positive THETA anti-clockwise and more negative THETA clockwise as discussed earlier. For any arbitrary THETA, outside of the 'first quadrant' you just look at the x and y co-ordinates to get the magnitude of sine and cosine :

However we remember to keep track of whether x and y are positive or negative. Thus as the cosine function is x/r the 'sign of the cosine' is whatever the sign of x is. Likewise as the sine function is y/r then the 'sign of the sine' is whatever the sign of y is. But r is always considered as positive as it's the distance from the origin [ with co-ordinates listed as (0,0), that is where the horizontal x-axis crosses over the vertical y-axis ]. The result is this :

and you will notice that I've been sneaking in the label 'cosine' or 'cos' to represent the ratio x/r, and the label 'sine' or 'sin' for the ratio y/r. They are 'pure' numbers without any units. Much like the PacMan diagram having a bigger circle, with the same given jaw angle then x, y and r all increase in proportion - by design really - so these ratios are constant.

Hence if you quote an angle, I can read back ( or calculate ) what the sine and cosine of the angle is. So that means sine and cosine are 'functions of the angle'. Clever punters will note that sine and cosine have the same common shape but sine is shifted ( PI/2 ) to the right with respect to the cosine. Indeed :

sin( THETA ) = cos( THETA - PI/2 )

which you could read as 'the sine of angle is whatever the cosine was 90 degrees ago' .... :-)
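You can convince yourself of that identity numerically - a quick C check at a handful of angles :

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* sin(THETA) should equal cos(THETA - PI/2) everywhere. */
    for (int i = 0; i <= 8; i++) {
        double theta = i * M_PI / 4.0;   /* 0, PI/4, ... , 2*PI */
        printf("theta = %5.3f  sin = %+8.5f  shifted cos = %+8.5f\n",
               theta, sin(theta), cos(theta - M_PI / 2.0));
    }
    return 0;
}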

Shift either function by PI and you will get the negative of said function. However you can't make a sine from a cosine without the shift. Where sine is zero then cosine is ( plus or minus ) one, and vice-versa. No amount of multiplying by a non-zero & non-infinite number will get you one from zero, or zero from one. For Fourier Stuff : however you express it, or hide it, mathematically you still effectively need both to cover all possible signal candidates. Well the good news is that we now have cyclic/rhythmic functions that neither fade away nor grow out of bounds - as far as the eye can see, because everything repeats when you go any multiple of 2 * PI ( complete laps around ) either side - to the left or right ! :-)

Often the angle is called the phase, and from the point of view of signal processing you could work with sines alone or cosines alone - remembering to shift appropriately, effectively masquerading one for the other by putting a +/- PI/2 as required inside the function brackets. Or you could use sines and cosines together, knowing that what one covers the other will catch. This would be perfectly fine. However there is a notation/number-system that helps to keep them together in the one 'package' very conveniently.

Which is the subject of the next installment : complex numbers.

[ they're not nearly as bad as the name suggests :-) ]

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Matt Giwer
Joined: 12 Dec 05
Posts: 144
Credit: 6,891,649
RAC: 0

Are you saying there are no usable FFT algorithms applicable to this problem? If so that would explain the RAM usage.

ML1
Joined: 20 Feb 05
Posts: 347
Credit: 86,471,245
RAC: 2,697

Quote:
Are you saying there are no usable FFT algorithms applicable to this problem? If so that would explain the RAM usage.

Hey, please don't spoil a good educational story. I rather like the hand drawings! ;-)

Jumping way-way ahead as a bit of a teaser... The FFTs computed are a known fixed size depending on the number of points taken and the resolution. Hence, the best compromise has been made to break the search down into reasonable units.

It's more a question of how hard and how finely you want to search vs the possibility of finding anything more in the ever finer detail...

Keep searchin',
Martin

See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,578
Credit: 306,914,202
RAC: 178,279

@Matt : I'm doing a general backgrounder, 'Fourier for the common (wo)man' so to speak. So I start the elevator at the ground floor, for those that maybe did math only up to mid high school say. AFAIK E@H abounds with terrific FFT algorithms. One can express the sine and cosine as power series, where convergence is aided by alternating signs in the terms, but the real mathematical trick/beauty here is with i ( square root of minus one & next week ) that converts rising/decaying exponents to oscillating ones.

@Martin : the FFTs here are discrete because of the data type/acquisition, which I was getting to ( several kilometers ) later. That does have compromises. Teaser ! :-) As for the hand drawings : did you know Wikipedia copyrights ( or lefts ) a drawing of a triangle ???? Naturally I could laboriously attribute each drawing ...... so the big/little circles are the top of the office bikkie tin and my pencil holder respectively. You'll deduce those aren't transparent as I never quite got the centre over the axes origin - until I decided to draw them in another order. Circle first, axes second. And it took me 20 minutes, that I'll never get back, to learn how to use the office scanner to do color [ thanks Canon, it's the un-annotated button with a light-green diamond-shaped glyph next to the 'B&W' labeled one. Obviously ]. YES to RTFM, but NO for user interface design !! Low tech just rocks ..... :-) :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,578
Credit: 306,914,202
RAC: 178,279

Complex numbers. They're an extension of the number types that you already know : counting or natural numbers = {1, 2, 3, 4, 5 ... } -> integers positive and negative { ..., -3, -2, -1, 0, 1, 2, 3, ... } -> fractions { p/q where : p and q are integers, q non-zero } -> 'real' numbers. Think of real numbers as those expressible in decimal form, like 123.456, but where the digits on the right of the decimal point may go on forever.* There's an incredible history to this progression of number sets, each containing the previous set but adding some extra ones. Mainly to help solve newer problems.

What is the square root of minus one? That is, what number when multiplied by itself yields minus one ? None of the above sets contain that number, but for one reason or another we'd like to know. So we have invented a new single number which works for this problem : i

i^2 = i * i = -1

i is like no other number you have met. It is not a combo of any before it, like p/q being built from p's and q's. Zero was like that when it was first invented. Engineers tend to call it j. All that matters is that it solves this equation :

x^2 + 1 = 0

What must 'x' be here ? => x is i . No more and no less. ( Strictly, -i solves it too - we'll meet that one in a moment. )

But once you allow that, then great stuff happens. We crack open an entire class of problems. This is good, this is progress. :-)

Let's do the keep-multiplying-by-itself trick again, this time using i :

i , -1, -i , 1, i , -1, -i , 1, i , -1, -i , 1 ...

[ NB. -i * i = (-1) * i * i = (-1) * (i^2) = (-1) * (-1) = 1 ]

I'll return to this 'cycle' later, with a much better answer than 'stuffed if I know' too. As it's quite pivotal, please try to understand why it happens that way. Also I have snuck in a few other 'obvious' behaviours like :

1 * i = i

(- 1) * i = -i

or generally if x is any real number :

x * i = xi = ix

which gives me the answer to : what is the square root of minus 16? Answer :

4i * 4i = 4 * 4 * i * i = 16 * (i ^2) = -16

so 4i is the go for that one. Now some of you are probably thinking : this is a bit lame, you're just playing with symbols, Mike's slipped a cog or five, where does this lead us? ....etc. You'd not be alone there, in fact when first presented i was called ( fanfare ) The Pure Imaginary Number. Of itself, or plain multiples thereof, it doesn't really fly very far at all.
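Before moving on : if you want to see that multiply-by-i cycle for yourself, C99 has complex arithmetic built in ( the imaginary unit is spelled I in complex.h ). A quick sketch :

#include <stdio.h>
#include <complex.h>

int main(void)
{
    /* Start at i and keep multiplying by i : the i, -1, -i, 1 cycle. */
    double complex z = I;
    for (int n = 1; n <= 8; n++) {
        printf("i^%d = %+.0f %+.0fi\n", n, creal(z), cimag(z));
        z *= I;
    }
    return 0;
}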

Hence the set of complex numbers. Now this is the real deal, and even includes all of the above number types as subset(s). But a complex number is not a single number. Each specific complex number is a pair of numbers, if you change any one of the pair then you have a new complex number. An ordered pair in fact.

x + y*i

here both x and y are real numbers. The 'x' by itself is the real part of the complex number, and the y ( the y by itself not the entire product with i ) is called the imaginary part. [ So, annoyingly, the imaginary part is actually a real number ... ie. the multiplier of the only imaginary number - which is i. It's not terribly important, but may depend upon the terminology of whom-so-ever teaches you .... ]

Now here arises a potentially very deceptive problem with the above notation. When I first learnt about complex numbers, it took me a good month or so to work this out on my own ( and wow, was I annoyed that it had not been pointed out to me ! ). The + symbol used here DOES NOT MEAN AN OPERATION OF ADDITION. In fact the + does not mean any operation in the sense that you are used to. It means, no more and no less, 'link the real part on the left with the imaginary part on the right' or 'I now marry this real guy to this imaginary girl'.

So if I have two bananas plus three apples what have I got? The best you can sensibly say is 'five fruit', but we won't do that. We'll simply accept that it is an irreducible phrase. So 'two bananas plus three apples' is just 'two bananas plus three apples'. I don't talk of either 'five banapples' or 'five appananas'. Again, it's an ordered pairing.

Now you could write a complex number this way :

y*i + x

Some do. It's not a hanging offense, as it is still clear which are the real and imaginary parts. Hint : the one with the i is the imaginary part.

The set of real numbers is the subset of complex ones by setting y to zero, the set of (pure) imaginary numbers likewise by setting x to zero. You can annotate and work with complex numbers entirely in a more formal ordered tuple format :

(x, y)

but it has to be clear from context that it is a complex number being represented, and who is who ( real first, imaginary second ). My personal preference is to use 'x + yi ' with the above important understandings of the meaning of the symbols. I suppose because I was taught that way. As we will be using i a lot from now on, I'll try to always use magenta/boldface so we don't get confused, and to avoid possible problems with font rendering in different browsers et al. To spot the difference between the '+' used as a 'linker' of complex number sub-parts versus ordinary/prior use, you need to look either side and deduce from there. I won't be BBCoding that. Is it simply two real numbers on right and left, or do I have a real on one side and an imaginary on the other? As I mentioned before, a lot of trouble one can have with maths is surmounting the notations.

Next time : are there operations to combine complex numbers together, and can you graph them? Don't you just love rhetorical questions? :-) :-) :-)

Cheers, Mike.

( edit ) * Well, they can do that on the left side too. However if you use decimals as a definition/representation then one has to be a tad careful and do things like : agree that 1.0 and 0.9999.... ( the 9's repeat infinitely ) are the same number ( one ). Side note : I say decimal, but generally what is/is-not a recurrent/repeating pattern in the representation does depend on the number base used. So 'one-third' in base ten is 0.333333..... ( recurring ) but 0.1 ( non recurring ) in base-3/ternary ! I mention this especially as one ought to distinguish the concept of a given number - 'one-third-ness' or 'divide-into-three-equal-parts-please' - from how you write it down. :-)

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,578
Credit: 306,914,202
RAC: 178,279

This is an aside, you can safely ignore it as you please. I couldn't bear to pass it by though.

There is a power series representation for exp[x] ie. 'e' to the power of x. As follows

exp[x] = 1 + x + x^2/2 + x^3/6 + x^4/24 + .....

the general form of the terms is :

x^n / n!

where n! is the product of all integers from n down to 1 ( otherwise known as 'factorial of n' ). The sum starts with n = 0 and 0! = 1 by definition.

Cosine and sine have power series representations too :

cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! - ....

sin(x) = x - x^3/3! + x^5/5! - x^7/7! + .....

see what happens if I put ix instead of x :

exp[ix] = 1 + ix + (ix)^2/2! + (ix)^3/3! + (ix)^4/4! + .....

= 1 + (ix)^2/2! + (ix)^4/4! + .....

+ (ix)^1/1! + (ix)^3/3! + (ix)^5/5! + ....

( re-ordering the terms listing all the even power terms, then all the odd power terms )

= 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! - ....

+ i[x - x^3/3! + x^5/5! - x^7/7! + ..... ]

( evaluating what the powers of i become )

= cos(x) + i sin(x)

Neat huh! :-)
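If you'd like to see it numerically, here's a small C sketch that sums twenty terms of the exp[ix] series and compares against cos(x) and sin(x) directly :

#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    double x = 1.0;
    double complex term = 1.0;   /* the n = 0 term, (ix)^0 / 0! */
    double complex sum  = 0.0;
    for (int n = 0; n < 20; n++) {
        sum  += term;
        term *= I * x / (n + 1); /* turn (ix)^n/n! into (ix)^(n+1)/(n+1)! */
    }
    printf("series : %+.12f %+.12fi\n", creal(sum), cimag(sum));
    printf("direct : %+.12f %+.12fi\n", cos(x), sin(x));
    return 0;
}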

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,578
Credit: 306,914,202
RAC: 178,279

Operations on complex numbers. Look at these complex numbers :

z_1 = x_1 + iy_1

z_2 = x_2 + iy_2

Typically the letter 'z' is used in complex algebra. So 'z_1' means 'this symbol represents a complex number with the real and imaginary parts as per the right side of the equals'. Addition and subtraction are easy, just add/subtract the corresponding real and imaginary parts :

z_3 = z_1 + z_2 = (x_1 + x_2) + i(y_1 + y_2)

z_4 = z_1 - z_2 = (x_1 - x_2) + i(y_1 - y_2)

where I'll ( laboriously ) repeat the warning about the meaning of the operator symbols : reliance upon context to properly understand. The + here means 'add two complex numbers together' :

z_1 + z_2

The + here means 'add two real numbers together' :

x_1 + x_2

y_1 + y_2

The + here means 'link the real part on the left with the imaginary part on the right' :

(x_1 + x_2) + i(y_1 + y_2)

and thus for the '-' symbol. Which I can also use in the linking sense :

x_1 - iy_1 = x_1 + i(-y_1)
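In code these addition/subtraction rules are about as plain as they look - a bare-bones C sketch ( C99's complex.h does all this for you, but spelling it out shows the two parts kept separate ) :

/* A complex number as an ordered pair of real numbers. */
typedef struct { double re, im; } cplx;

/* Add/subtract the corresponding real and imaginary parts. */
cplx cplx_add(cplx a, cplx b) { return (cplx){ a.re + b.re, a.im + b.im }; }
cplx cplx_sub(cplx a, cplx b) { return (cplx){ a.re - b.re, a.im - b.im }; }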

Multiplication and division are defined for complex numbers ie. z_1 * z_2 and z_1 / z_2, but with the x + iy notation the results (a) look horrible (b) require a side track/loop of explanation and (c) we won't need them for our purposes anyway. I will define these operations for you but only after we have a look at an equivalent notation for representing complex numbers. But for that we need to look at graphing complex numbers ( not functions of complex numbers, but the numbers themselves ).

Here I've shown what looks to be the standard two-dimensional real Cartesian plane. It isn't. The vertical axis is imaginary numbers. A point in this Argand plane is a single complex number with a real horizontal co-ordinate and an imaginary vertical co-ordinate. You still get Pythagoras and sine/cosine though. Now the point P, representing a complex number z, can be indicated in two equivalent ways :

z = x + iy

OR

z = r * exp[i * THETA]

Whoa! What's this? Some chaps came up with the following identity, meaning 'we define the following to be so'* :

exp[i * THETA] = cos(THETA) + i * sin(THETA)

where exp[something] is 'e', the natural logarithm base ( about 2.718281828 ... ), raised to the power of 'something'. And because it is a 'sum', in the linking sense, of a real number and an imaginary one then 'exp[i * THETA]' is a complex number. Now there's a deep rabbit hole on this one, which I will avoid explaining. It's a rabbit warren in fact. For this thread of explanation please take it on trust that several things apply, such as the following ( non-exhaustive ) list:

exp[0] = 1

exp[-p] = 1 / exp[p]

exp[p] * exp[q] = exp[p+q]

exp[p] / exp[q] = exp[p-q]

this is true regardless of whether p and q are real, imaginary or complex ( remembering to think carefully about what the '+' operator means in specific cases ). It's a way of saying what I said earlier about the x^[5] power function ie. you can break down a product into additive factors or exponents**.
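Again you needn't take my word for it - a spot-check in C of exp[p] * exp[q] = exp[p + q] with complex p and q :

#include <stdio.h>
#include <complex.h>

int main(void)
{
    /* Two arbitrary complex exponents ( values picked at random ). */
    double complex p =  0.3 + 1.2 * I;
    double complex q = -0.7 + 0.4 * I;
    double complex lhs = cexp(p) * cexp(q);
    double complex rhs = cexp(p + q);
    printf("exp[p]*exp[q] : %+.12f %+.12fi\n", creal(lhs), cimag(lhs));
    printf("exp[p+q]      : %+.12f %+.12fi\n", creal(rhs), cimag(rhs));
    return 0;
}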

d/dx(exp[x]) = exp[x]

this means the derivative of the exponential function is the function itself. Indeed it is essentially the only function for which this is true ( up to a constant multiple ). Actually you can use this quality to derive all others about exp[x], say :

d/dx(exp[kx]) = k* exp[kx]

[ k being a constant with respect to the variable we are differentiating against -> x ]

As integration is the 'reverse' of differentiation then 'the integral of the natural exponential is the natural exponential' as well ( ignoring limits of integration, some constants .. ). Looking back at the graph you will see that :

Re(z) = r * cos(THETA)

Im(z) = r * sin(THETA)

where the notation 'Re(something)' means the 'real part of something', and 'Im(something)' means the 'imaginary part of something'. Now you could have guessed that from the graph as I am really re-stating what it means to be a sine or cosine function :

cos(THETA) = x/r

sin(THETA) = y/r

PLUS Pythagoras :

r^2 = x^2 + y^2

but in the context of the vertical axis measuring 'imaginary' lengths. So let

z_1 = r_1 * exp[i * THETA_1]

z_2 = r_2 * exp[i * THETA_2]

then :

z_1 * z_2 = r_1 * r_2 * exp[i * (THETA_1 + THETA_2)]

z_1 / z_2 = (r_1 / r_2) * exp[i * (THETA_1 - THETA_2)]

Generically r is called the magnitude of the complex number and THETA the phase. Multiplying two complex numbers means multiplying the magnitudes while adding phases. Dividing complex numbers means dividing one magnitude by the other, and subtracting phases. Note that I still need two real numbers to specify a single complex number, for either 'x + iy' or 'r * exp[i * THETA]', and I can swap from one to the other using trigonometry.
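Here's that multiply-magnitudes/add-phases rule checked in C ( cabs and carg being the standard library's names for magnitude and phase ) :

#include <stdio.h>
#include <complex.h>

int main(void)
{
    double complex z1 = 3.0 * cexp(I * 0.5);  /* r = 3, THETA = 0.5 rad */
    double complex z2 = 2.0 * cexp(I * 1.0);  /* r = 2, THETA = 1.0 rad */
    double complex z3 = z1 * z2;
    printf("magnitude : %.6f ( expect 3 * 2 = 6 )\n", cabs(z3));
    printf("phase     : %.6f ( expect 0.5 + 1.0 = 1.5 )\n", carg(z3));
    return 0;
}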

Next time : expressing rhythms using (co-)sinusoids, and a first look at Fourier

Cheers, Mike.

*Well, you could start with that and derive other stuff, or begin with the other stuff and come back to this. It's all consistent ... :-)

** Indeed the logarithm function is the answer to the question 'what is the exponent that I am using to represent a certain number as a power of a given base?' So

p = base_b^[q]

and

q = logarithm_to_base_b(p)

are really the same thing. It's a question of emphasis - which variable do you isolate on the left side with everything else on the right. Thus 'eight is two cubed' is an intellectually identical statement to 'three is the logarithm of eight to the base two'. Though to do that unambiguously you need a couple of pre-conditions ( or restrictive definitions ) on the functions - one-to-one and onto. You get inverse functions that way, returning you to the same number via jumping from domain to range and then back again - along the same 'link of association' ( one link of the totality that defines the function ) between a given pair of numbers, each within the respective domain/range set. It's a reverse gear ....

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,578
Credit: 306,914,202
RAC: 178,279

(Co-)sinusoidal rhythm. We've been looking at THETA - the phase or angular offset around the circle. Let it be a function of time in the following sense :

THETA(t) = 2 * PI * t + RHO

where RHO ( pronounced 'row' ) is some constant value for a given rhythm. If t = 0 then

THETA(0) = RHO

so this tells us where THETA lies when we started our clock. Of course the moment when we start said clock is arbitrary too, so of itself such a constant phase offset isn't terribly important generally. But if we have many rhythms simultaneously on the go then the relative offsets between them assume great significance. If you like we could choose one rhythm to have a RHO value of zero and all the others have theirs relative to that.

The factor of 2 * PI scales our time values, measured in seconds say, to angle as measured in radians. We could leave it out but then we'd have to remember that as a sine/cosine repeats in 2 * PI radians our pattern ( whatever that is ) goes likewise. By putting the 2 * PI in to multiply the time as above, then a frequency of ONE ( cycle per second or Hertz ) brings us back to the same THETA value with every elapsed second. All I've done is re-assert that one cycle around a circle means 2 * PI radians.

So given that we think of time as a linear variable then having

sin[THETA] = sin[2 * PI * t + RHO]

or the equivalent for cosine ( PI/2 phase shifted ) gives us the mathematical model of a signal that oscillates. Which is really the whole point of including the sine/cosine into the mix. Now we want to allow some choice of possible frequencies using this representation ie. some going around the circle faster and others slower. Let's introduce a variable, which I'll call 'k', to indicate the frequency in time :

sin[2 * PI * k * t + RHO]

If I increase k, keeping all else constant, then the entire expression within the square brackets increases too. Meaning that a higher frequency has more cycles per second or a greater change in phase for a given increment in time. That's obviously what we intend by the use of a frequency variable.

There is one more aspect - the amplitude or magnitude of the oscillation :

A(k) * sin[2 * PI * k * t + RHO(k)]

where I've expressed that magnitude as a function of k to emphasise that we'll be mixing signals on a per-frequency basis. AND I've snuck in the possibility that RHO might depend on the frequency too. So we might want to have so much of one frequency, more of another, and less of some other etc. And each of those with a specific time offset for the moment when the sinusoid is up-going through the zero along the signal ( vertical ) axis. Indeed without A(k) and RHO(k) - the ability to vary the recipe according to frequency - there would be little point in mixing signals at all.
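So one 'ingredient' of such a mix looks like this in code ( hypothetical amplitude/frequency/phase values, purely for illustration ) :

#include <stdio.h>
#include <math.h>

int main(void)
{
    double A = 2.0, k = 5.0, rho = M_PI / 6.0;  /* amplitude, Hz, offset */
    for (int i = 0; i < 10; i++) {
        double t = i * 0.01;                    /* time in seconds */
        printf("t = %.2f  signal = %+.5f\n",
               t, A * sin(2.0 * M_PI * k * t + rho));
    }
    return 0;
}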

Let's dip our toes into the Fourier pool with an example :

a so called 'square wave', call it f(t). This is like an ideal electrical switch, say, going on and off with the current or voltage being the vertical axis. The transitions being instantaneous. The general attempt at formulating this as a sum of sinusoidal functions looks ( initially at least ) something like this :

f(t) = SUM_OVER_ALL_FREQUENCY_CHOICES[A(k) * sin[2 * PI * k * t + RHO(k)]]

where I've been deliberately vague about what is the exact range of frequencies, or indeed how they are indexed ( ie. what are the k values? ) and summed. There is a number of options here related to the problem space you wish to model. Basically it's a matter of integers/discrete vs. smooth/continuous numbers on either side of the equals sign. More on that later. What we need for Fourier work is to find the actual A(k)/RHO(k) when we are presented with some particular f(t). A(k)/RHO(k) is the generic concept of 'transform', a representation in frequency space ( ie. 'k' indexed ) of a function in the time domain ( ie. 't' indexed ). We'll deal with a better way of expressing A(k)/RHO(k) later.
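As a taste of where this is heading : the standard textbook recipe for a unit square wave of period one uses only the odd frequencies, with A(k) = 4 / (PI * k) and RHO(k) = 0. A sketch of the partial sums :

#include <stdio.h>
#include <math.h>

/* Sum the odd harmonics k = 1, 3, 5, ... up to max_k. */
double square_approx(double t, int max_k)
{
    double sum = 0.0;
    for (int k = 1; k <= max_k; k += 2)
        sum += (4.0 / (M_PI * k)) * sin(2.0 * M_PI * k * t);
    return sum;
}

int main(void)
{
    /* More harmonics -> closer to the +1/-1 plateaus. */
    printf("t = 0.25 : up to k =   3 gives %.4f\n", square_approx(0.25, 3));
    printf("t = 0.25 : up to k = 101 gives %.4f\n", square_approx(0.25, 101));
    return 0;
}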

Now sine and cosine, or the complex exponential amalgam, are continuous and differentiable functions. This means that at all points on their curves there are no breaks and they have a well defined ( and specifically unique ) slope. There is a fundamental theorem or three that says the finite sum of such continuous/differentiable functions is also a continuous/differentiable function. Now the square wave is discontinuous - at the 'cliff' it can only take on one value ( say, the top or the bottom of the cliff ) - and it is also not differentiable at such a point of discontinuity. Continuity is a pre-condition for ( necessity ) but not a guarantee of ( sufficiency for ) differentiability. The short answer is that a finite sum of sinusoids, however many I put on the right side of the equals, cannot truly equal any discontinuous function on the left side of the equals.

[ long aside - a smooth function is one that has continuous derivatives to some degree. Meaning that I can take the derivative some N times and have that Nth derivative be a continuous function. An infinitely smooth function has no limit on N. Sine, cosine and our complex exponential are infinitely smooth. So our requirement on f(t) and its representation is actually stricter than mentioned above. If LHS = RHS for some definition of LHS and RHS then D_n(LHS) = D_n(RHS) - where D_n means nth derivative - must also be true for all valid n in the problem. You can't have the situation where the nth derivative on one side is defined ( if it is ) but not equal to the valid nth derivative of the other side. So if our f(t) is discontinuous to any degree of differentiation we have a potential problem when we use sines/cosines in the Fourier representation because they differentiate forever. This is one of those side tracks .... for the extra curious see 'uniform continuity', 'mean square convergence', 'convergence in the mean' and 'Lebesgue integration' ]

Look at this triangular/sawtooth function :

[ I think I'm gonna have to bite the bullet and get my Matlab out to produce the pictures from now on .... :-) ]

It's always continuous, but it is not differentiable at every point. The exception/problem is the pointy tips which have no unique slope. The value of the function converges to the same value when approached from either side of the tip. But when approaching the tip from one side the slope is some definite positive value, whereas approaching from the other it is some definite negative value. ( Actually the derivative of the sawtooth looks like some square wave, where the cliffs of the square wave correspond to the ambiguity of the slope at the triangle's tips ).

To solve this issue reasonably we have to get our hands soiled a bit, with some messy detail in the topic of limits. That is, we try to define what it sensibly means when we speak of 'infinite sums' or 'infinite series'. We will wind up saying that progressively finer detailed series approximations more closely approach the shape of a discontinuous or non-differentiable function - that is : the more detail expressed within the approximation, the closer the fit. Which is next time. :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,578
Credit: 306,914,202
RAC: 178,279

I could spend a decade on the topic of limits. This posting has a plethora of useful asides that I'll only be indicating the 'stubs' of, and there is imprecision in the natural language description that mathematical symbology avoids. So I'm leaving out slabs of important but arcane criteria.

Now, I'm thinking of the following numbers :

1, 1/2, 1/3, 1/4, 1/5, 1/6 .....

the general term in this sequence is

1/n for n = 1, 2, 3, 4, 5, 6 ........

Generally a sequence is a set that I have ordered, not necessarily by magnitude or any other arithmetic criterion ( it just happens to be decreasing in this example ), but by associating each element with one of the natural numbers. It doesn't even have to have a formula to prescribe each term. But there is a first element, a second element, a third element etc. I could put this in functional notation :

f(n) = 1/n

where n E N

'E' meaning 'element of' and 'N' is the set of natural numbers. The domain of 'f' is thus the natural numbers and we'll take the range to be the real numbers ( although I could give a narrower specification ). Now can you give me a ( real ) number which is definitely less than every one of those in that sequence? Sure you can. Say -1 or -3 or -0.5 or 0.0 to name but a few. Each of these chosen numbers is called a lower bound of the sequence, and I could form another set, which I'll call L, being the set of all lower bounds of the given sequence. So for L :

(a) each element within is less than each and every f(n), and

(b) I haven't missed any real numbers that could satisfy (a)

Does L have a greatest member ie. one that is more positive ( or less negative ) than all the other elements of L? Yep - a little thought shows this has to be 0.0, which now gets called the greatest lower bound or infimum. Furthermore, in this instance, 0.0 is stated as 'the limit of the sequence as n goes to infinity'.

We could flip/invert the sense of the problem by using this sequence :

g(n) = 1 - 1/n

or

0, 1/2, 2/3, 3/4, 4/5, 5/6 .....

Now here we have an increasing sequence of numbers, which has a set of upper bounds I'll call U

(a) each element within is greater than each and every g(n), and

(b) I haven't missed any real numbers that could satisfy (a)

with the least member of U being called the least upper bound or supremum. In fact it is only when we use real numbers, and not the rationals ( fractions made up from integers ), that we can always have either an infimum or supremum ( depending upon the sense of the bounds ) for any given bounded set*. We can say 'every set of real numbers bounded below has an infimum' and/or 'every set of real numbers bounded above has a supremum'. Now it is possible for some sets that the infimum and/or supremum is a member of the set ( take all the real numbers less than or equal to 2 ). The key point is that there is a number somewhere on the real line that has the infimum/supremum property for a given bounded set. If a set is bounded above and below then it will have both an infimum and a supremum, either/both/none of which may lie in the set itself.

We generally think of limits in the sense of some sequence of numbers that gets closer and closer to some fixed number like in the above examples, i.e. the limit of the sequence, but without ever reaching it in the manner that some finite n E N could be quoted for which the sequence value equals the limit.

You could, say, have the sequence {3, 3, 3, 3, 3, 3, 3 ....} where the limit is obviously 3. There are also sequences that oscillate up and down but progressively less so as n increases, the limit ( if any ) then being the value that the terms progressively 'cuddle up' to. Fourier series can be like that due to the intrinsic oscillation character of the base sine/cosine functions. The f(n) and g(n) cases above are specific examples of a general theorem that a monotonically increasing/decreasing sequence with an upper/lower bound must approach a limit. A sequence may diverge - or not converge - because of oscillation alone and not because the terms are unbounded. Like {1, -1, 1, -1, 1, -1, 1 ... } where there is no single number that all terms eventually stay arbitrarily close to ( 1 and -1 are accumulation points because there are sub-sequences that converge to them ).

So as above, 0.0 is the limit for f(n) and 1.0 is the limit for g(n). You could probably think of some real life examples of this sort of thing, say the speed of light for a body with mass in a particle accelerator, or an aeroplane ( normally powered ) attempting to reach some maximum altitude, a vacuum pump evacuating a chamber and so on. Actually the ebb/flow of luck at the casino table has an oscillating character that tends to a limit in the long run -> you lose that is. If you want to make money from a casino then be either an investor in one or a government that taxes one ..... :-) :-)

In Fourier work we'll be looking at numbers like that - but where each number in the sequence is a sum of other numbers ( careful, this is where confusion sneaks in ). The sum itself has a number of terms as discussed ( which may even form a sequence themselves ..... ) and we will want to know the tendency of these sums as we add more terms into them. If you like, think of a series as the progressive total at the supermarket check-out : as you add extra items through the beeping scanner, the total updates to reflect the current number of items. Here however we can have an effectively bottomless/infinite trolley to draw from! Try

S(N) = SUM[1,N](1/n) = 1/1 + 1/2 + 1/3 + 1/4 + ..... + 1/n + 1/(n+1) + .... + 1/N

I explain the notation here : S(N) means 'the sum to N terms', SUM[1,N] means 'start at 1 and go as far as N', (1/n) is the pattern of each term within the sum, and the far RHS shows this expanded. You could bung this in a code loop :

// Sum to N terms
sum = 0;
for(n = 1; n <= N; n++)
    sum += 1.0/n;

( As it happens this particular example - the harmonic series - grows without bound as N increases, so it has no finite limit. But it shows the machinery, and for a series that does converge the formal game runs like this. ) [ You nominate any tolerance level EPS you please, and I must then be able to find some number N such that every sum to n > N terms will lie closer to the limit than your choice of EPS. Because your choice of EPS can be anything non-negative/non-zero, then it's a way of saying that a sum to any n > N terms can always be made arbitrarily close to the supposed limit simply by evaluating enough terms and summing them. As the tolerance level - 'how close am I to the limit?' - gets smaller then one has to go 'further along' in summing the series to make sure you lie closer to the limit than said tolerance level. Whew!! It works and is precise and unambiguous although an utter pain to perform on each and every occasion that might be applicable. What has happened is that some general theorems/results have been proved with the EPS method and you simply note that if a certain such theorem can be applied for a given scenario then you can confidently use the theorem's conclusions without the agony of winding/travelling along a detailed EPS assessment pathway**. Which is great! :-) ]
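And here's the EPS game played out in C on a series that does converge - using Euler's famous result that SUM(1/n^2) tends to PI^2/6 :

#include <stdio.h>
#include <math.h>

int main(void)
{
    double limit = M_PI * M_PI / 6.0;  /* the known limit of SUM(1/n^2) */
    double eps   = 1.0e-3;             /* your choice of tolerance */
    double sum   = 0.0;
    long   n     = 0;
    while (fabs(limit - sum) >= eps) {
        n += 1;
        sum += 1.0 / ((double)n * n);  /* add the next term in */
    }
    printf("within EPS = %g of the limit after N = %ld terms\n", eps, n);
    return 0;
}

Make EPS smaller and N grows, just as the definition demands.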

Next up in this whirlwind tour : a chat about what it really means to integrate a function OR how to sum a series using the real numbers as indices instead of the natural numbers OR why you hate calculus! :-) :-)

Cheers, Mike.

* This doesn't work for the rationals. Take all possible rational numbers which have the property that the result when each is squared is less than 2. There is no rational number which is the least upper bound of this set. That is : the square root of two is not rational - it is irrational, and the union of the rationals plus the irrationals gives the real numbers. The set of real numbers having the supremum/infimum property is their defining quality, without which we'd still be at Middle Ages Math. A chap by the name of Dedekind used an idea now called 'Dedekind cuts' to construct the real numbers from what is effectively the supremum/infimum property. I've got a great book by another famous mathematical chap called G.H. Hardy which outlines this approach.

** See! I can't help myself with pointing out the by-ways of this topic. There are other equivalent criteria for convergence like the one used by Cauchy : if the difference between terms in a sequence has a certain pattern as n -> infinity, then a sequence will converge. Or if one examines a thing called the limes superior - the limit of successive suprema of certain sub-sequences roughly meaning 'an eventual upper bound' - plus the limes inferior ( defined mutatis mutandis ) then convergence is guaranteed if the limes superior equals the limes inferior ( thus both equal to the sequence limit ). Another way of thinking about limits with an EPS tolerance is ( with A being the limit ) that the set/interval (A - EPS, A + EPS) has an infinite number of members of the sequence within it, for any non-zero & positive EPS. Or better : no matter how close you go to A with an EPS choice, you will only ever have a finite number of terms of the sequence outside of (A - EPS, A + EPS) - strictly it's this second version that nails a limit, as the first alone only makes A an accumulation point. The main reason for such alternate constructs is that it may be easier ( or well nigh impossible otherwise ) to show them as valid for certain examples.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
