# What kind of calculation does E@H do? What kind of data is analyzed?

Ben Owen
Joined: 21 Dec 04
Posts: 117
Credit: 51,659,409
RAC: 35,163

### RE: i didn't know those

Message 7668 in response to message 7664

Quote:

i didn't know those experimental results were compared to predicted one, seems a bit strange for an experiment, like we were pretty sure of the theory...

Eoerl,

Anything involving matched filtering (like E@H) does indeed make assumptions about the signal. But here the assumptions are not very much: Just that the Earth rotates like this, orbits like that, and the detectors are at latitude and longitude such and such. And that the Doppler shifts are given by special relativity, which has been tested daily in particle accelerators for decades. So if there is a signal out there that is periodic when you subtract out the Earth's motion, E@H will find it with a certain probability as a function of signal strength.
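Ben's description of matched filtering can be illustrated with a toy sketch: correlate the data against a bank of sinusoidal templates and pick the best match. Everything here (the template bank, signal strength, noise level) is an illustrative assumption, not E@H's actual pipeline.

```python
import math, random

def correlate(data, freq, n):
    """Correlate data against a unit sinusoid template at the given frequency."""
    c = sum(d * math.cos(2 * math.pi * freq * i / n) for i, d in enumerate(data))
    s = sum(d * math.sin(2 * math.pi * freq * i / n) for i, d in enumerate(data))
    return math.hypot(c, s)  # phase-independent match strength

random.seed(0)
n = 1024
true_freq = 37  # cycles per data stretch, unknown to the search
data = [math.sin(2 * math.pi * true_freq * i / n) + random.gauss(0, 0.5)
        for i in range(n)]

# Search a bank of templates; the strongest correlation marks the candidate.
best = max(range(1, 100), key=lambda f: correlate(data, f, n))
print(best)
```

The real search also has to undo the Doppler modulation from the Earth's motion before this periodic-template comparison makes sense, which is where the assumptions Ben lists come in.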

The existence of periodic signals doesn't depend on general relativity or some particular theory of nuclear matter in neutron stars. The predicted signal strengths do vary a lot in different theories of nuclear matter, but the experimenter's attitude is "Let's just look and see if there's a signal out there, then when we find it the theorists can worry about what sort of object made it."

Barkster,

There is indeed plenty of indirect evidence for gravitational waves. In 1993 Hulse and Taylor got the Nobel Prize for a pair of neutron stars they discovered in 1974 that are spiraling closer to each other like the white dwarfs you mentioned, but faster. The rate of inspiral has been tracked for 30 years and is precisely what general relativity predicts for the recoil from gravitational waves being emitted.

Hope this helps,
Ben

Eddy_Kestemont
Joined: 1 Jul 05
Posts: 1
Credit: 2,432
RAC: 0

### RE: OK, to compute an FT

Message 7669 in response to message 7665

Quote:

OK, to compute an FT the naive way, try calculating each value according to the formula, producing one result at a time. If you had N values you end up doing 2*N*N floating point multiplications.

Some bright spark noticed that you end up doing many of the same calculations many times over. If you are willing to do the calculations in a blooming awkward order, and re-use intermediate values appropriately, you can cut the number of floating point multiplications to 2*N*logN (taking log to base 2).

This means you only do (logN)/N of the work. The rest of the work you did calculating the FT was redundant.
...
The cost is an increased overhead in the data fetches as you are processing the data in a funny order and keeping back part products for re-use later. But on balance you save a lot of computer time.
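The quoted point (same coefficients, calculated in an awkward order with intermediate values reused) can be checked with a small sketch. This is a textbook naive DFT next to a recursive radix-2 FFT, not code from any project mentioned here:

```python
import cmath

def dft_naive(x):
    """One output at a time: roughly 2*N*N real multiplications."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft_radix2(x):
    """Split into half-size transforms and reuse them: order N*log2(N) work."""
    n = len(x)
    if n == 1:
        return x[:]
    even = fft_radix2(x[0::2])  # intermediate results, reused for two outputs each
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

x = [complex(i % 3, 0) for i in range(16)]
a, b = dft_naive(x), fft_radix2(x)
print(max(abs(u - v) for u, v in zip(a, b)))  # tiny: both give the same transform
```

For N = 1024 the ratio of the two multiplication counts is log2(N)/N = 10/1024, i.e. about 1% of the naive work, matching the saving described above.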

Some years ago, we had to compute time autocorrelation functions of the velocities of Np (Np ~ 10^4) particles in molecular dynamics experiments. This leads to computing FFTs of the velocity time series for each particle; the results are averaged over all particles.
A big saving in computer time was obtained by:
1. computing all the needed trigonometric functions (sine and cosine) once, since all samples spanned the same number of points (N ~ 10^3);
2. reordering the coefficients from the first stage of the Fourier transform only once, after performing the average over all particles.
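The first saving above can be sketched as follows: when every series has the same length, the trigonometric factors exp(-2*pi*i*j*k/N) can be tabulated once and reused for every particle's transform. The names and sizes here are illustrative, not from the original molecular dynamics code:

```python
import cmath

def make_twiddles(n):
    """Tabulate the N distinct trig factors once for all transforms of length n."""
    return [cmath.exp(-2j * cmath.pi * k / n) for k in range(n)]

def dft_with_table(x, tw):
    """Transform one series; tw[(j*k) % n] replaces a fresh sin/cos evaluation."""
    n = len(x)
    return [sum(x[k] * tw[(j * k) % n] for k in range(n)) for j in range(n)]

n = 8
tw = make_twiddles(n)  # computed once...
series = [[complex(p + t, 0) for t in range(n)] for p in range(4)]
spectra = [dft_with_table(s, tw) for s in series]  # ...reused for every particle
print(abs(spectra[0][0]))  # DC term of the first series
```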

Quote:

The other constraint is that the FFT only works with a sample width that is an exact power of 2, like the examples I cited.

I seem to remember there are generalized FFT algorithms built on splitting the N data points into a product of prime factors, which makes it possible to apply the same basic principles of the "classic" FFT to arbitrary numbers of data points.

Edouard Kestemont
Universite libre de Bruxelles CP 224
1050 Brussels - Belgium
ekestemo-[at]-ulb.ac.be

gravywavy
Joined: 22 Jan 05
Posts: 392
Credit: 68,962
RAC: 0

### RE: RE: The other

Message 7670 in response to message 7669

Quote:
Quote:

The other constraint is that the FFT only works with a sample width that is an exact power of 2, like the examples I cited.

I seem to remember there are generalized FFT algorithms build on the possibility to split the N time data in a product of prime factors which make it possible to use the same basic principles of the "classic" FFT for arbitrary numbers of data points.

OK, yes I fudged it slightly.

You can indeed work out an FFT scheme for any prime base, so you can have an FFT based on base 3, base 5, etc.

None of these saves as much work as a base-2 FFT, as the smaller the base, the more the same calculations can be reused.

In principle yes, if you had 15 data points you could combine the base 3 and base 5 FFT to get the result you wanted. The coding becomes much more complex if you use mixed bases, and I don't know of this being used in practice.

The general rule is that the more prime factors a number has, the better the saving. So 1024 has 10 prime factors (1024 is 2^10), whereas 1000 has only 6 prime factors (2^3 * 5^3). Therefore you save a lot more by adding an extra 24 data points.

Two is nice here as it happens to be the smallest prime number, so by going for powers of two you maximise the number of factors. The fact that on a deep level machines also like binary is a bonus.

I should have said that for the fastest FFT you need a power of 2.
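The factor counts quoted above (counting prime factors with multiplicity) are easy to verify with a short sketch; the helper name here is just illustrative:

```python
def prime_factors(n):
    """Return the prime factors of n with multiplicity, e.g. 12 -> [2, 2, 3]."""
    out, d = [], 2
    while n > 1:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    return out

# 1024 = 2^10 has ten prime factors; 1000 = 2^3 * 5^3 has only six.
print(len(prime_factors(1024)), len(prime_factors(1000)))
```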

~~gravywavy

john fisher
Joined: 3 Sep 05
Posts: 2
Credit: 1,066
RAC: 0

### As an aside,what type of

As an aside, what type of photodetector does LIGO use? Is it a photomultiplier tube? I'm curious because I'm trying to learn the nature of the raw signal before the computations. I've done some work with scintillators and PMTs, so this would be interesting. Thanks ahead. Jack

MarkF
Joined: 12 Apr 05
Posts: 393
Credit: 1,516,715
RAC: 0

### Here is a pdf I

Here is a pdf I found.
photodetector

john fisher
Joined: 3 Sep 05
Posts: 2
Credit: 1,066
RAC: 0

### RE: Here is a pdf I

Message 7673 in response to message 7672

Quote:
Here is a pdf I found.
photodetector

Excellent, thanks much!