What kind of calculus do E@H? What kind of data is analizated?
> What kind of calculus do E@H? What kind of data is analizated?
>
E@H uses fast Fourier transforms (FFTs) to look for gravitational waves. FFTs have a wide variety of applications, from digital signal processing to solving partial differential equations to algorithms for quickly multiplying large integers. If FFTs sound familiar, SETI@Home also uses FFTs in analysing radio telescope data.
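As an aside, here is a minimal sketch (in Python with NumPy, purely my own illustration and not anything E@H itself runs) of one of those other applications: multiplying two polynomials by turning coefficient convolution into pointwise multiplication with an FFT.

import numpy as np

# Coefficients of p(x) = 1 + 2x + 3x^2 and q(x) = 5 + 4x, lowest order first.
p = [1, 2, 3]
q = [5, 4]

# Pad both transforms to the length of the product so the circular convolution
# done by the FFT matches ordinary polynomial multiplication.
n = len(p) + len(q) - 1
product = np.fft.ifft(np.fft.fft(p, n) * np.fft.fft(q, n))
print(np.rint(product.real).astype(int))   # [ 5 14 23 12] -> 5 + 14x + 23x^2 + 12x^3

The same idea, with digits as coefficients plus carry handling, is what makes FFT-based multiplication of very large integers fast.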
do E@H -> does E@H do
analizated -> analyzed
"My other computer is a virus farm."
> do E@H -> does E@H do
> analizated -> analyzed
Cut the guy some slack, English is probably not their native language. I think everyone here knew what was being asked, and since this is a science board, not a learn-English-grammar board, why don't you find somewhere else to give your unsought English lessons?
> > do E@H -> does E@H do
> > analizated -> analyzed
>
> Cut the guy some slack, English is probably not their native language. I
> think everyone here knew what was being asked, and since this is a science
> board, not a learn-English-grammar board, why don't you find somewhere else to
> give your unsought English lessons?
>
Yes, if you check, he is from Peru and speaks Spanish.
I welcome everyone. Please know that most people here welcome everyone, no matter where they are from or what language they use. After all, Americans have elected a president who has almost as much trouble with English as I do.
I hope we all learn to listen better.
Dennis
+1 for the American president's troubles... (I'm not really any better, but I'm French... :-) )
In fact, I'm not sure I've understood what sort of data is processed on my computer: is it the data from the "interferometers" (the English translation looks a bit strange to me...) in the USA, Germany and Italy? I thought they weren't operational yet (?) Is the comparison between their respective signals already done when the data are downloaded? And how can an FFT help to detect a clear signal among the ambient noise?
(I think I'm getting better at this, but again, someone correct me if I'm wrong.)
An FFT is a way of converting the representation of a signal in the time domain to that of the same signal in the frequency domain.
A signal of constant (or near constant) frequency will look like a sine wave in the time domain (like on an oscilloscope), but will look like a "spike" in the frequency domain (like on a spectrum analyzer). I assume that the frequency of a gravity wave is fairly discrete (i.e. it's a narrowband signal), so as long as the amplitude of the gravity wave signal is large or "loud" enough, the frequency "spike" of the gravity wave will rise up and be visible above the ambient noise at the same frequency.
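A toy illustration of that "spike above the noise" picture (my own sketch in Python with NumPy, nothing to do with the real E@H code): bury a weak sine wave in much louder noise, take the FFT, and the peak still stands out in the spectrum.

import numpy as np

fs = 1024                                     # sample rate; also one second of data
t = np.arange(fs) / fs
rng = np.random.default_rng(0)

# A weak 97 Hz tone buried in noise - essentially invisible in the time series.
x = 0.5 * np.sin(2 * np.pi * 97 * t) + rng.normal(0.0, 1.0, fs)

spectrum = np.abs(np.fft.rfft(x))             # frequency-domain view of the same data
freqs = np.fft.rfftfreq(fs, d=1.0 / fs)       # frequency of each bin, in Hz
peak = np.argmax(spectrum[1:]) + 1            # skip the DC bin
print(f"loudest bin: {freqs[peak]:.0f} Hz")   # the 97 Hz spike rises well above the noise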
LIGO searches for gravity waves by performing "matched filtering"... where it takes the real sensor data it receives, accounts for Earth/Sun Doppler for various points in the sky that it is "looking" at, and then matches the real results of the computations done by our computers to a predicted result (if it were known that gravity waves were in fact coming from any of those points in the sky).
A match would potentially mean that we've discovered a gravity wave from a particular area of the sky... the area for which the specific Doppler value was computed and used in our crunching.
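To make the matched-filtering idea concrete, here is a deliberately simplified sketch (my own, in Python with NumPy; the real pipeline also handles Doppler corrections, sky positions and much more): slide a known template along noisy data and look for a correlation peak.

import numpy as np

rng = np.random.default_rng(1)

# A "predicted" waveform (the template): a short chirp sampled at 1000 Hz for 0.2 s.
t = np.arange(200) / 1000.0
template = np.sin(2 * np.pi * (20.0 * t + 200.0 * t**2))

# Noisy data that secretly contains a scaled copy of the template at sample 700.
data = rng.normal(0.0, 1.0, 2000)
data[700:900] += 0.8 * template

# Matched filtering boils down to cross-correlating the data with the template;
# "valid" mode gives one score for each possible start position of the template.
scores = np.correlate(data, template, mode="valid")
print("best match starts near sample", np.argmax(scores))   # ~700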
"No, I'm not a scientist... but I did stay at a Holiday Inn Express."
OK thanks, I think I understood a bit of the underlying science.
I didn't know those experimental results were compared to predicted ones; it seems a bit strange for an experiment, as if we were already pretty sure of the theory...
Another question, maybe a bit more maths-oriented: I know the FT, but what is the "fast"? Is there another method to get the FT? (a faster one, I presume...)
> Another question, maybe a bit more maths-oriented: I know the FT, but what is the "fast"? Is there another method to get the FT? (a faster one, I presume...)
For those who don't know the FT, the Fourier Transform, it converts between time and frequency. If you mapped the electric field in the antenna of a radio set, it would be all over the place; this is the E-field signal against time. Apply an FT to the data and it converts the data to frequency: you get a spike for every radio station.
We are hoping to find spike(s) for gravity wave transmitters using an efficient form of the FT.
> a faster one, I presume...
Yes, the FFT is the Fast Fourier Transform. So, just how did they turbocharge the FT?
OK, to compute an FT the naive way, try calculating each value according to the formula, producing one result at a time. If you had N values you end up doing 2*N*N floating point multiplications.
Some bright spark noticed that you end up doing many of the same calculations many times over. If you are willing to do the calculations in a blooming awkward order, and re-use intermediate values appropriately, you can cut the number of floating point multiplications to 2*N*logN (taking log to base 2).
This means you only do (logN)/N of the work. The rest of the work you did calculating the FT was redundant.
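For anyone curious what that "blooming awkward order" looks like, here is a minimal sketch (my own, in Python with NumPy; the function names are just for illustration) of both approaches: the naive transform that does work proportional to N*N, and the radix-2 split that reuses two half-size transforms.

import numpy as np

def dft_naive(x):
    # Direct use of the formula: N output values, N terms summed for each one.
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

def fft_radix2(x):
    # Cooley-Tukey: split into even/odd samples, transform each half once,
    # then combine. Reusing the two half-size results is where the saving comes from.
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.rand(1024)   # the length must be a power of 2 for this simple version
print(np.allclose(dft_naive(x), fft_radix2(x)))   # True, up to rounding error

The last line also shows the point made further down: the two methods agree to within floating-point rounding.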
For example, with a 16-sample FT you'd do 512 multiplies; with the FFT you'd do 128, which is only 4/16ths of the work, 25% of the FT.
With a 128-wide FT you'd do 32,768 multiplies; with the FFT you'd do 1,792. 7/128ths of the work, about 6% of the FT.
With a 1024-wide FT you'd do over 2 million multiplies; with the FFT you'd do 20,480. 10/1024ths of the work, about 1% of the FT.
As you can see the advantage increases rapidly with the size of the sample.
The cost is an increased overhead in the data fetches, as you are processing the data in a funny order and keeping back partial products for re-use later. But on balance you save a lot of computer time.
The other constraint is that this (radix-2) FFT only works with a sample width that is an exact power of 2, like the examples I cited. The advantages are such that scientists tend to collect data in the appropriate sample size if the data is of the sort that invites Fourier analysis.
Alternatively, you can pad the data with trailing zeroes, but then you lose some of the performance gains by processing the padding, and introduce small errors into the results.
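A quick sketch of that padding trick (again just my own illustration in Python with NumPy):

import numpy as np

x = np.random.rand(1000)                 # an awkward length for a radix-2 FFT
n = 1 << (len(x) - 1).bit_length()       # next power of two: here 1024
padded = np.pad(x, (0, n - len(x)))      # append 24 trailing zeroes

X = np.fft.fft(padded)                   # now a power-of-two-sized transform
print(len(x), "samples padded to", len(X), "frequency bins")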
If you had infinite accuracy in the arithmetic it would produce the same result as the FT. In practice the results of the FT and FFT vary slightly due to the rounding errors in floating point arithmetic, but neither result is "better" than the other; both are approximations to the true value.
~~gravywavy
> it seems a bit strange for an experiment, as if we were already pretty sure of the theory...
Yes, well I guess we are in fact either going to prove or disprove the theory. It's a cold fact of experimentation that the underlying theory may be wrong. That would suck.
But I'm betting that there has been enough indirect evidence as to the existence and nature of gravity waves that the LIGOs were designed with the correct theories in mind to directly detect them. Again, however, the ground-based measurements have limitations that narrow the chances of the star(s) being within LIGO's detectable limits. With LIGO, I think we'll be very fortunate to find a star or pair of stars that are close enough to each other to create gravity waves that are high enough in frequency and strong enough for us to detect. Even if LIGO detects nothing, that doesn't mean the underlying theories behind LIGO's design are wrong.
LISA is supposed to extend the frequency range much lower, allowing us (hopefully) to detect star pairs that are much farther apart from each other (like the white dwarf pair in J0806), and it should also be much more sensitive to smaller-amplitude gravity waves.
"No, I'm not a scientist... but I did stay at a Holiday Inn Express."
Thanks, the FFT is now clear in my mind :-) It seems somewhat similar to the method of fast exponentiation, if I understood correctly... To reply to the end of my own post: I don't think the underlying theory is bound to be proved false, but I found it strange to compare experimental data with predicted data to judge their validity (?). Thanks a lot for the explanations.
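The fast-exponentiation comparison is a fair one: both tricks avoid repeating work by reusing a half-size result. A tiny sketch (my own, in Python; the function name is made up) of square-and-multiply:

def fast_pow(base, exp):
    # Solve the half-size problem once and reuse it by squaring, instead of
    # multiplying by the base exp times - the same "reuse intermediate results"
    # spirit as the FFT.
    if exp == 0:
        return 1
    half = fast_pow(base, exp // 2)
    squared = half * half
    return squared * base if exp % 2 else squared

print(fast_pow(3, 13))   # 1594323, using a handful of multiplies instead of twelve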