Did LIGO detect gravitational waves from rotating neutron stars?

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6542
Credit: 287189272
RAC: 97031

Pretty much right. We can leverage the precisely known ( GPS established ) positions of each GW detector ( Hanford, Livingston, Virgo, GEO etc ). Rough algorithm, with a toy numerical sketch for one detector pair after the list :

- draw a straight line, through the Earth, b/w some pair of detectors.

- the length of the line divided by the speed of light gives the maximum possible time delay ( this becomes an absolute constraint on the measured delay b/w receptions, light speed being a constant ).

- align/locate the waveform features as accurately as possible in time to establish when the wave hit each detector, thus a second time delay.

- that in turn yields an angle to the source about that line ( or in other words : a cone with that apex/opening angle going out to infinity ).

- iterate for each distinct pair of detectors.

- intersect the cones ( or put another way : intersect their projections - circles - onto the celestial sphere ).

- account for uncertainties that arise in the real world of measurement ( messy business ).

- on the sky map produce ( a probability distribution for ) the sky area within which the source ought to lie.

- hope there is a coincident counterpart measurement from non-GW modes, as per the recent NS-NS merger ( GW170817 ).
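
Something like this toy calculation makes the two-detector step concrete ( Python ; the coordinates are only rough stand-ins for the real surveyed detector positions, and the measured delay is invented ) :

import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Rough stand-in Earth-fixed coordinates ( metres ) ; a real analysis
# uses the precisely surveyed detector positions.
hanford    = np.array([-2.16e6, -3.83e6, 4.60e6])
livingston = np.array([-0.07e6, -5.50e6, 3.22e6])

baseline  = livingston - hanford
d         = np.linalg.norm(baseline)   # baseline length, m
max_delay = d / C                      # absolute bound on any measured delay

dt = 6.0e-3  # invented measured arrival-time difference, s
assert abs(dt) <= max_delay, "a delay cannot exceed the light travel time"

# The source direction n satisfies c*dt = d*cos(theta), where theta is the
# angle between n and the baseline : a cone about that line.
theta = np.degrees(np.arccos(C * dt / d))
print(f"baseline {d/1e3:.0f} km, max delay {max_delay*1e3:.1f} ms, "
      f"cone half-angle {theta:.1f} deg")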

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Otubak
Joined: 24 Dec 06
Posts: 19
Credit: 3527428
RAC: 0

When determining the position of a long-term source (a spinning pulsar), as opposed to a short-lived inspiral, there's another source of positional information: the detector moving in "circles" through space as the Earth spins and orbits the Sun. Assume a neutron star rotating at 50 revolutions per second, generating gravitational waves of 100 Hz. The Earth's spin alone will modulate that signal's phase by roughly 4*360 degrees, with the strength and phase of the modulation depending directly on the sky location. The Earth's orbit around the Sun causes a far larger modulation.
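
A quick back-of-envelope check of those figures (standard constants, the 100 Hz being the example frequency above):

f_gw = 100.0           # GW frequency from the example, Hz
R_e  = 6.371e6         # Earth radius, m
au   = 1.496e11        # Earth-Sun distance, m
c    = 299_792_458.0   # m/s

# The detector's light-travel time to the solar system barycentre wobbles
# by up to ~2*R_e/c daily and ~2*au/c yearly ; multiply by f for cycles.
cycles_spin  = f_gw * 2 * R_e / c   # ~4.2 cycles, i.e. the "4*360 degrees"
cycles_orbit = f_gw * 2 * au / c    # ~100000 cycles over a year

print(f"spin : ~{cycles_spin:.1f} cycles of phase modulation, peak to peak")
print(f"orbit: ~{cycles_orbit:.0f} cycles, peak to peak")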

IIRC, this modulation must be taken into account when searching for continuous GWs - each computer searching for signals does so only for a small patch of the sky, i.e. for the modulation one would expect of a signal coming from that patch. Once a signal has been picked up unambiguously, it should be possible to trace it in all recordings made during the season, and thus narrow down the amount of modulation and hence its location on the sky, but I'm not sure to what extent.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

The problem is that when you mush them all together (as you must, to average out the noise and leave only the signal), you lose the fine points about time delay. It is not a problem when you are looking at a single event, such as the merger of two black holes, but when you are dealing with months of data, I don't know that you can do it.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6542
Credit: 287189272
RAC: 97031

Within a work slice given to a particular computer one can keep coherence in time, but only for the ( original real-time ) duration of the signal segment represented in that calculation task. Annoyingly the sensitivity, measured as signal-to-noise ratio ( SNR ), grows less than linearly with that coherent integration time ( like the square root ). So to double sensitivity, say, one has to quadruple the coherent integration time and thus the computational load. The compromise is to pick the best coherent computational load that reasonably suits how the work is distributed within E@H amongst all potential hosts. One can still compare the SNRs of adjacent ( in the search parameter space ) signal segments, yielding a combined statistic of significance ( incoherent ). You get a lower SNR that way compared with a pure coherent approach, of course. The hope is to catch something ( merely candidate signals at that point ) then do a fuller coherent search on, say, ATLAS, to confirm/deny. In this approach ( hierarchical ) E@H performs its role somewhat like a hunting dog that picks up a scent trail which in turn focusses the following pack.
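
To illustrate that square-root scaling with a made-up reference stretch ( a sketch, not the actual search statistic ) :

import math

def relative_snr(T_coh, T_ref=3600.0):
    """SNR gain over a 1 hour coherent stretch, for a fixed-strength
    continuous signal ( idealised model : SNR grows like sqrt(T_coh) )."""
    return math.sqrt(T_coh / T_ref)

for hours in (1, 4, 16):
    print(f"{hours:>2} h coherent : SNR x {relative_snr(hours * 3600.0):.1f}")
# Doubling the SNR ( 1 h -> 4 h ) quadruples the coherent compute time,
# hence the hierarchical coherent-then-incoherent compromise.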

This has been applied quite successfully to our EM-based pulsar searches. World class results, actually. Our holy grail remains a continuous-GW detection.

Note that E@H has evolved with increasing personal computing options, especially the power of graphics cards, i.e. one can load up the newer hosts with ever more work. At some point in that progression older hosts are left behind and/or not supported. This has a bearing on 'reasonable' quorum returns/turnaround, an issue mitigated by matching similar hosts to form said quorums. Quora ? :-)

@Gary : I don't think there are any currently available WU types that we publish for early/mid 2000's hosts ? What's our legacy limit at present, do you reckon ? Personally I seem to have constructed a new E@H preferred machine about every two/three years, i.e. five hardware 'leaps' since I joined E@H. I'm due for another about now.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

DanNeely
Joined: 4 Sep 05
Posts: 1364
Credit: 3562358667
RAC: 89

Jim1348 wrote:
The problem is that when you mush them all together (as you must, to average out the noise and leave only the signal), you lose the fine points about time delay. It is not a problem when you are looking at a single event, such as the merger of two black holes, but when you are dealing with months of data, I don't know that you can do it.

 

IIRC, what the E@H search does to avoid the problem is that they don't integrate the data just once; they do it hundreds of times for different sky positions, applying the proper correction to cancel out the Earth's rotation and solar orbit for each one, to get the notional pure signal from that location if it's anything but noise.
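
A heavily simplified schematic of that idea ( the real pipeline's F-statistic machinery is far more involved ; the phase model, grid and data below are all invented stand-ins ) :

import numpy as np

rng = np.random.default_rng(0)
fs, T = 1024.0, 64.0                  # sample rate ( Hz ) and duration ( s )
t  = np.arange(0.0, T, 1.0 / fs)
f0 = 100.0                            # source frequency, Hz

def delay(t, sky):
    """Stand-in for the sky-position dependent detector-to-barycentre
    delay ; the real correction comes from the ephemeris."""
    amp, phase = sky
    return amp * np.sin(2.0 * np.pi * t / T + phase)

true_sky = (0.02, 1.0)                # invented source parameters
data = np.cos(2.0 * np.pi * f0 * (t + delay(t, true_sky)))
data = data + 2.0 * rng.standard_normal(t.size)   # add detector noise

# Demodulate once per candidate sky position and keep the one with the
# most coherent power - only the correct phase model adds up in phase.
grid = [(a, p) for a in np.linspace(0.0, 0.04, 9)
               for p in np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)]

def power(sky):
    return abs(np.sum(data * np.exp(-2j * np.pi * f0 * (t + delay(t, sky)))))

best = max(grid, key=power)
print("recovered ( amp, phase ) :", best)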

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6542
Credit: 287189272
RAC: 97031

DanNeely wrote:
Jim1348 wrote:
The problem is that when you mush them all together (as you must, to average out the noise and leave only the signal), you lose the fine points about time delay. It is not a problem when you are looking at a single event, such as the merger of two black holes, but when you are dealing with months of data, I don't know that you can do it.

 

IIRC, what the E@H search does to avoid the problem is that they don't integrate the data just once; they do it hundreds of times for different sky positions, applying the proper correction to cancel out the Earth's rotation and solar orbit for each one, to get the notional pure signal from that location if it's anything but noise.

That's right. One or more of the downloaded files is an 'ephemeris' : time-related positional data for celestial bodies. I think it comes from JPL, who also keep track of the solar system's barycentre : any result will ultimately be quoted as 'viewed' from that point* in space. Thus for a given real-time interval at the detectors, the parameters are two sky-position numbers, source frequency and its derivative, a signal type/template to match, plus others. Now if you have an ephemeris and an accurate Earth clock then you know the ( sidereal ) orientation of the detectors ( fixed on the ground ) at any given time, or during a certain interval, with respect to a putative sky source. Or to invert that logic : if a detector(s) has received a signal and we ( some lucky E@H hosts ) validate that, then one can work back towards sky positions as previously discussed.
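
The geometric core of that correction, schematically ( the vectors are placeholders, and the real ephemeris-based correction also includes relativistic terms - Einstein and Shapiro delays - omitted here ) :

import numpy as np

C = 299_792_458.0  # m/s

def roemer_delay(r_det_ssb, n_source):
    """Light-travel-time ( Roemer ) correction from the detector to the
    solar system barycentre, along the unit vector to the source."""
    return np.dot(r_det_ssb, n_source) / C

# Placeholder numbers : detector position relative to the barycentre ( from
# the ephemeris plus an Earth rotation model ) and a source direction.
r_det = np.array([1.48e11, 1.2e10, 5.2e9])  # m, roughly 1 au out
n_src = np.array([0.6, 0.64, 0.48])         # happens to be unit length

t_detector = 1234567890.0                   # GPS-style timestamp, s
t_ssb = t_detector + roemer_delay(r_det, n_src)
print(f"barycentre correction : {t_ssb - t_detector:+.1f} s ( bounded by ~500 s )")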

{ A 'long' integration time would thus encompass some change in orientation of a detector w.r.t. the sky source, effectively smearing the deduced position. For example 20 minutes is 1/3rd of one hour, and hence the Earth has rotated about its own axis by 1/3rd of 15 = 5 degrees in that time ( though the linear movement of a detector also depends on its latitude ). I'm not sure if, or how, that is accounted for. I don't know the relative magnitudes of all the possible error mechanisms, but I reckon twenty minutes worth of movement of the Earth along its orbital path is a lesser worry. }
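
For what it's worth, the size of that movement at a detector, with Hanford's approximate latitude as the ( assumed ) example :

import math

T_coh        = 20.0 * 60.0              # coherent stretch, s
sidereal_day = 86164.1                  # s
rot_deg = 360.0 * T_coh / sidereal_day  # ~5 degrees, as above

# Linear displacement of a detector during that rotation ( latitude dependent ).
R_e = 6.371e6                           # Earth radius, m
lat = math.radians(46.5)                # roughly Hanford's latitude
arc = R_e * math.cos(lat) * math.radians(rot_deg)
print(f"{rot_deg:.1f} deg of rotation moves the detector ~{arc/1e3:.0f} km")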

As mentioned, a particular actual pulsar may appear to wobble in sky position, with daily and yearly rhythms. This is parallax by another name, the fact that the Earth has those movements around itself and the Sun being a boon rather than a nuisance. An appropriate wobble reassures us that it is a deep state sky object. :-))

Cheers, Mike.

* Very roughly : not far inside the Sun from its visible surface, on the side where Jupiter is. Not the Sun's centre. The barycentre is, to a first approximation, one of the foci of each ellipse-ish planetary orbit.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

Those corrections may be well and good for cancelling out errors due to the Earth's rotation, etc., but we need the very precise time differences between the arrivals at two detectors (e.g., Washington and Louisiana). Are they good enough to preserve that? It is beyond me, but I think your idea of coarse/fine detection could do it, with ours being the coarse part.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5850
Credit: 110074552320
RAC: 23448806

Mike Hewson wrote:
@Gary : I don't think there are any currently available WU types that we publish for early/mid 2000's hosts ? What's our legacy limit at present, do you reckon ? Personally I seem to have constructed a new E@H preferred machine about every two/three years, i.e. five hardware 'leaps' since I joined E@H. I'm due for another about now.

If by "early/mid 2000's" you are referring to, say, 2003 to 2006, then the answer is 'don't try from Einstein' :-).  I think there might be some people still trying to run later-generation P4s but I'm pretty certain (if they could actually get enough memory into the beast for it to run) the performance would be absolutely woeful.

I think the 'legacy limit' really depends on the pool you want to swim in.

For the current GW pool (or for any CPU based app for that matter), you probably want no earlier than Ryzen for AMD hardware and for Intel, probably something like Sandy Bridge/Ivy Bridge or later.

It's a bit different for the FGRPB1G pool where the CPU is relatively unimportant, especially for modern AMD GPUs.  I always thought (and I'm still hopefully waiting) that there would be a GPU app for GW.  At the time of the open day in 2011, I remember Bruce commenting about how big GPUs would figure in the future upgrade plans for their in-house number crunching.  After that event, with GPU apps being developed for radio and gamma-ray pulsar searches, I thought that a GW GPU app would just about be a certainty by the time advanced LIGO data became available.  I always thought the GPU pulsar searches would just be a transition phase.  Hopefully it's just an extended transition phase with an eventual (if somewhat delayed) GW GPU app :-).

The oldest gear I currently run dates from the 2007/2008 period.  At that time, I purchased the parts to build 6 Q6600 core2 quad systems which still run today.  The CPUs have always been fine.  I think I've replaced caps and a particular choke on most of the motherboards.  All six of them have AMD Polaris (RX 460 to RX 580) GPUs.  They perform pretty much identically to the same GPU in a much more modern system.  I was a bit reluctant to even try the upgrade path for those old machines at first.  Two of the boards were really cooked locally from overheating chokes flanked by swollen caps.  However, I cleaned up the damage and replaced the components and the boards are still running fine a year later.

I try to repair anything that fails from an obvious cause such as swollen caps.  I thought I'd be wasting my time with the Q6600s, not because the repairs wouldn't work but because the old 1.x version PCIe slots would adversely impact GPU performance.  My fleet is pretty much 100% AMD GPUs and the PCIe generation seems to make very little difference to crunch times.

 

Cheers,
Gary.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6542
Credit: 287189272
RAC: 97031

Hopefully helpful diagram :-) 

[ image : gw_geometry4.jpg ]

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
