8 Aug 2011 14:38:06 UTC

Topic 195896

## X:enius: GEO 600 & search for gravity waves

X:enius is a German/French TV series about science. In this documentary they explain, among other things, gravity waves and GEO600:

I found this news item on the GEO600 site in German and worked back to a paper in English:

Network of gravitational wave detectors

Tullio

## An update from CERN


An update from CERN Courier:

European GW detectors

Tullio

## The network aspect of GW


The network aspect of GW detection is so crucial, and has quite a different 'flavor' of importance compared with astronomy generally.

The first problem is that we have never ( well, not to everyone's satisfaction ) heard a gravitational wave. So for that first instance, the more detectors that hear it the merrier. How do you describe the call of a bird you've never heard before, when you need to convince both yourself and others that it really exists? :-)

Secondly, we don't 'focus' gravity waves like we do for electromagnetic stuff. We can't deliberately deflect them or bounce them about - short of having a manipulable black hole handy hereabouts. Spacetime is so 'stiff' that it requires humungous mass/energy concentrations to generate and/or affect waves in it. Short answer is: we'd be dead if anywhere near that type of neighborhood. So we are way out in the 'first-order/linear-perturbation' region.

The nature of the waves ( transverse, with 'spin 2' ) and thus detector design yields a more-or-less omnidirectional antenna response. That's like hearing a noise but not knowing from which direction it came ( we have two ears for that reason ). Indeed you have a better chance of defining the source direction from a single detector via the 'nulls' that occur in its response pattern. That sounds awkward, and it is, but the null lines are better defined than the rest - though one can't claim a source direction merely from not having heard it! So localisation in the sky relies upon the relative timing - or 'coherence' - of separated data streams.
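To make the timing point concrete, here's a toy Python sketch ( not real detector code - the baseline figure and angles are purely illustrative ): the arrival-time difference of a wavefront at two separated detectors only pins the source down to a cone about the baseline, which is why more than two detectors are needed for a proper sky position.

```python
# Toy sketch: plane-wave arrival-time difference between two detectors.
# A measured delay constrains the source to a cone about the baseline.
import math

C = 299_792_458.0  # speed of light, m/s

def time_delay(baseline_m, cos_angle):
    """Delay (s) between detectors separated by baseline_m, for a plane
    wave arriving at angle theta to the baseline (cos_angle = cos(theta))."""
    return baseline_m * cos_angle / C

def cone_angle(baseline_m, delay_s):
    """Invert: a measured delay fixes only the angle to the baseline,
    i.e. a cone on the sky - hence the need for more detectors."""
    return math.degrees(math.acos(delay_s * C / baseline_m))

# Illustrative 3000 km baseline (roughly intercontinental-detector scale)
baseline = 3.0e6
d = time_delay(baseline, math.cos(math.radians(60.0)))
print(f"delay = {d * 1e3:.2f} ms")                       # ~5 ms
print(f"recovered angle = {cone_angle(baseline, d):.1f} deg")  # 60.0
```

Note the inversion hands back only one angle, not a sky position: every direction on that 60-degree cone gives the same delay.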

Thirdly, maintaining science-credible interferometer lock is all too frequently a chancy thing, due to the high rate of external ( non-gravitational-wave ) disturbances showing up in the machine's output. Hence redundancy as well as separation is very helpful.

Fourthly, even if you hop over all of the above hurdles, the finer the detail one can get, the better the characterisation, i.e. more parameters to bind the modelling of the source system. It's worth (re-)mentioning that E@H has already helped the theoretical aspect of the field by placing upper bounds upon what is possible.

Side notes :

(a) When reading the literature, keep in mind whether continuous waves ( essentially long-lived signals with no large intrinsic frequency variation ) or bursts are being discussed. We at E@H are doing continuous GWs ...

(b) Prior to the Hannover event this year I really didn't understand the true impact of coherent vs non-coherent analysis ( of course, I still could be getting it wrong now .... ). Short answer: coherent is better by far, but ever so much more expensive computationally. See Fourier transforms et al. So one invents search strategies ( eg. hierarchical ) to try to up one's detection chances without also getting snowed under with never-ending or un-doable calculations.
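As a back-of-envelope illustration of that trade-off ( a sketch using the standard white-noise scaling rules for stacked searches, not any actual E@H pipeline ): splitting the data into m segments and adding their Fourier powers incoherently costs you roughly a factor of m^(1/4) in SNR relative to one fully coherent stretch, but each segment is vastly cheaper to analyse.

```python
# Standard rule-of-thumb scalings for coherent vs semi-coherent searches
# in white noise; n and m below are arbitrary toy figures.
import math

def coherent_snr_scale(n):
    """Fully coherent matched filter over n samples: SNR grows like sqrt(n)."""
    return math.sqrt(n)

def semicoherent_snr_scale(n, m):
    """Split n samples into m segments, sum segment powers incoherently:
    SNR ~ sqrt(n/m) * m**0.25, i.e. down by a factor m**0.25."""
    return math.sqrt(n / m) * m ** 0.25

n = 1 << 22  # ~4 million samples, toy figure
for m in (1, 16, 256):
    factor = semicoherent_snr_scale(n, m) / coherent_snr_scale(n)
    print(f"{m:4d} segment(s): relative sensitivity {factor:.2f}")
# 1 segment -> 1.00, 16 segments -> 0.50, 256 segments -> 0.25
```

The sensitivity loss grows slowly ( fourth root ), while the coherent cost grows steeply with stretch length - which is why hierarchical schemes stack many short coherent segments.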

Cheers, Mike.

( edit ) Roughly: for coherent integration to some chosen level of significance ( SNR, sensitivity, detection chance, whatever ) the required time goes like the square of that level. So if you can 'confidently' detect to some degree after a certain time, then to double your yield you have to go to quadruple the time. Even with the Fast Fourier Transform, which goes like N*log(N), that is more than four times the work!! One saving grace, perhaps, is that doubling the sensitivity gives 2^3 = 8 times the volume of space that one can 'hear' from.
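That arithmetic in a few lines of Python ( toy numbers; the n*log2(n) figure is just the textbook FFT operation-count model ):

```python
# Coherent-integration scaling: SNR grows like sqrt(T), FFT cost like n*log2(n).
import math

def expected_snr(amp, n):
    """Matched-filter SNR of a sinusoid of amplitude amp in unit-variance
    white noise over n samples: amp * sqrt(n/2) - grows like sqrt(T)."""
    return amp * math.sqrt(n / 2)

def fft_cost(n):
    """Textbook FFT operation count, ~ n * log2(n)."""
    return n * math.log2(n)

n = 1 << 20  # about a million samples, toy figure
print(expected_snr(0.01, 4 * n) / expected_snr(0.01, n))  # 2.0: 4x the data doubles the SNR
print(fft_cost(4 * n) / fft_cost(n))                      # 4.4: a bit over 4x the work
```

So quadrupling the observation time buys a factor of two in SNR while the transform cost more than quadruples - and that is before multiplying by the number of templates searched.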

"I have made this letter longer than usual because I lack the time to make it shorter." - Blaise Pascal