S5 early results paper

Ver Greeneyes
Joined: 26 Mar 09
Posts: 140
Credit: 9562235
RAC: 0
Topic 194379

Hi everyone, I'd like to have a look at the paper, but arXiv is blocking my IP address again. This has happened in the past and I think my ISP is using a bad range of addresses, but I've tried to contact arXiv and my e-mails were never answered.

So I was wondering, is there a chance you could point me to a mirror for the paper, or otherwise a proxy I could use to access it?

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 323655001
RAC: 164025

S5 early results paper

Quote:

Hi everyone, I'd like to have a look at the paper, but arXiv is blocking my IP address again. This has happened in the past and I think my ISP is using a bad range of addresses, but I've tried to contact arXiv and my e-mails were never answered.

So I was wondering, is there a chance you could point me to a mirror for the paper, or otherwise a proxy I could use to access it?


No problemo, try here.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Ver Greeneyes
Joined: 26 Mar 09
Posts: 140
Credit: 9562235
RAC: 0

RE: No problemo, try

Message 93288 in response to message 93287

Quote:

No problemo, try here.

Cheers, Mike.


Thanks, downloading now.

(wow, that was a fast reply - I should've checked back sooner)

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 323655001
RAC: 164025

I've had a brief read,

I've had a brief read, thought I'd share my impressions :

Quote:
"No statistically significant signals were found. In the 125 Hz to 225 Hz band, more than 90% of sources with dimensionless gravitational-wave strain tensor amplitude greater than 3 x 10(-24) would have been detected."


This is the frequency band which contains the 'sweet spot' of the detector, i.e. greatest design sensitivity. So I think ( given assumptions in the analysis ):

- there are many sources above that strain threshold. We should have heard some. We have a problem. [ e.g. there are 10 sources of which 9 should have been heard. ]

OR

- there are few sources above that strain threshold. We were unlucky/unfortunate. [ e.g. there is 1 source which we should probably have heard. ]

OR

- there are no sources above that level. The detectors have correctly reported that. [ e.g. there are 0 sources so 0 were heard. ]

Quote:
“An additional design goal is to minimize the download burden on the Einstein@Home volunteers' internet connections and also on the Einstein@Home data servers.”


Crunchers, please note this aspect; it explains why some desires can't/won't be met. :-)

Quote:
“..... ranges were guided by the assumption that a nearby very young neutron star would correspond to a historical supernova, supernova remnant, known pulsar, or pulsar wind nebula. The search also covers a small ‘spin-up’ range ...... ”


So a detection ( yeah! ) may also give a correlation with other known data on some object.

Quote:
“The post-processing has the goal of finding candidate events that appear in many of the 28 different data segments with consistent parameters.”


Thus many crunchers will likely contribute to any notable spike in the data.

Quote:
“... the background level of false alarm candidates is expected at 10 coincidences (out of 28 possible). As a pragmatic choice, the threshold of confident detection is set at 20 coincidences, which is highly improbable to arise from random noise only” and “the false alarm probability of reaching the detection threshold of 20 or more coincidences per 0.5 Hz averaged over all frequency bands is about 10^(-21).”


This is a brutal condition! :-)
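
To get a feel for just how brutal, here's a toy binomial calculation ( my illustration, not the paper's actual analysis ). I'm assuming the 28 segments are independent and that each produces a false coincidence with probability 10/28, so the expected background comes out at the quoted 10. The paper's 10^(-21) figure also folds in trials factors and the averaging over frequency bands, which this sketch ignores.

Code:

# Toy model of the 20-of-28 coincidence threshold.
# Assumption: 28 independent segments, each yielding a false
# coincidence with probability 10/28 (so the expected background
# is 10 coincidences, matching the quote above).
from scipy.stats import binom

n_segments = 28
p_false = 10.0 / n_segments  # illustrative per-segment false-alarm probability

# Chance of noise alone producing 20 or more coincidences:
p_chance = binom.sf(19, n_segments, p_false)  # sf(19) = P(X > 19) = P(X >= 20)
print(f"P(>= 20 of 28 coincidences from noise alone): {p_chance:.1e}")

( This prints something around 10^(-4) with these assumed numbers, nowhere near the paper's 10^(-21), but it already shows how steeply the tail falls off past the expected 10. )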

Quote:
“hardware injections are not expected to be detected in this search, simply because they were inactive during a large fraction of the data analyzed.”


A shame.

Quote:
“Apart from the larger parameter space searched, the present analysis is achieving roughly comparable sensitivity to [15] in spite of searching 8.5 times less data. Much of this effectiveness is due to the increased coherent integration time (30 hours vs 30 minutes), which is only possible due to the great amount of computing power donated by the tens of thousands of Einstein@Home volunteers” and “The authors thank the tens of thousands of volunteers who have supported the Einstein@Home project by donating their computer time and expertise for this analysis. Without their contributions, this work would not have been possible.”


Yeah team!! :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Ver Greeneyes
Joined: 26 Mar 09
Posts: 140
Credit: 9562235
RAC: 0

It says on the main page that

It says on the main page that we're currently in the process of analyzing 5280 hours of data from the 'later' part of S5 - am I right in thinking this is separate from the 660 + 180 hours of 'early' data discussed in this paper? In other words, is it fair to say we've analyzed (660 + 180) / (5280 + 660 + 180) = 13.7% of design sensitivity data with the rest in progress?

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 323655001
RAC: 164025

RE: It says on the main

Message 93291 in response to message 93290

Quote:
It says on the main page that we're currently in the process of analyzing 5280 hours of data from the 'later' part of S5 - am I right in thinking this is separate from the 660 + 180 hours of 'early' data discussed in this paper?


Correct.

Quote:
In other words, is it fair to say we've analyzed (660 + 180) / (5280 + 660 + 180) = 13.7% of design sensitivity data with the rest in progress?


Well yes. Of the data on the table at present. What this paper and others discuss, or hint at, is that analysis is only really limited by resources.

When we say 'signal analysis', this technically involves 'convolution'. This is a mathematical way ( an integral ) of stepping along the data with a given waveform shape and seeing how well they match. The better the match, the more 'area under the curve' common to the data and the given template will count in the result. So our computers take some given 30-hour stretch of data from the interferometers, try to align some template ( an assumed waveform shape, based on astrophysical thinking about rotating neutron stars ) which repeats along the time axis ( i.e. at a certain frequency ), and yield a number which assesses the degree of overlap of the two.

So it is pattern matching, and there are more than a few assumptions here. The full gory detail is spread over many published papers.
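
For the curious, here's a toy sketch of that sort of template correlation in Python. It just slides an assumed sinusoid along noisy data and scores the overlap at each offset. The real E@H search is far fancier ( Doppler modulation, spin-down, coherent statistics over 30 hours ), so the frequency, duration and noise level below are purely illustrative choices of mine.

Code:

# Toy matched-filter sketch: slide a sinusoidal template along noisy
# data and record how well the two overlap at each time offset.
# All numbers here are made-up illustration values.
import numpy as np

rate = 1000.0                        # samples per second
t = np.arange(0, 10.0, 1.0 / rate)   # 10 seconds of "data"

f_signal = 150.0                     # weak 150 Hz "signal" buried in noise
data = 0.1 * np.sin(2 * np.pi * f_signal * t) + np.random.normal(0, 1, t.size)

# Template: the waveform shape we *assume* the source produces.
template = np.sin(2 * np.pi * f_signal * t[: int(rate)])  # 1 s of template

# Correlate: bigger numbers mean better data/template overlap.
overlap = np.correlate(data, template, mode="valid")
print("best overlap:", overlap.max(), "at sample offset", overlap.argmax())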

Anyhow, for us: if we remain available to E@H, then even if no more data appeared from the LIGO ( or other ) detectors we could still go over the same data set with different search parameters.

This approach will hopefully reward clever/reasoned/calculated guessing about what the golden needle looks like in a humungous haystack. I hope/feel one day we will go 'ouch'. :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

debugas
Joined: 11 Nov 04
Posts: 170
Credit: 77331
RAC: 0

RE: No statistically

Message 93292 in response to message 93291

Quote:
No statistically significant signals were found. In the 125 Hz to 225 Hz band, more than 90% of sources with dimensionless gravitational-wave strain tensor amplitude greater than 3 x 10^(-24) would have been detected.

This is VERY discouraging news :(
Does it mean our theory is wrong, or are we simply looking for the wrong patterns due to some mistakes in the implementation?

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 323655001
RAC: 164025

I've been looking for papers

I've been looking for papers that predict the expected values for the type of objects E@H is involved with searching for. There's no shortage of ideas out there! :-)

It's model dependent. For the continuous waves we seek, the base idea is a neutron star rotating. As far as I can tell :

If it were a perfect spherical shape then it wouldn't radiate any waves. There is a concept of being "nonaxisymmetric", which implies some mass feature on or within the star is not evenly distributed as you look along the ( "North - South" ) axis. So there's a bump or a pimple or somesuch in one area, and when the star rotates it is 'unbalanced' - like a car wheel can be if the tire is not fitted right. So the first assumption is how out of balance, or nonaxisymmetric, the star is.

Then there is how much of the rotational energy goes out in gravitational waves versus other modes of loss ( say, the traditional pulsar radio signal ). This affects the neutron star's rate of spin-down. Our search has several choices for that.

Thirdly, there is where the star sits with respect to Earth: the further away it is, the smaller the signal. So the talk is of the presumed population of these stars in space.

From my brief browsing it seems the expected strain is about 10^(-24) and below for 'reasonable' models, with the high end of the range being the more nearby and/or more 'wobbly' stars, and the signal decreasing with distance and degree of symmetry.
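
For a rough feel of the numbers, the textbook estimate for the strain from a rotating nonaxisymmetric neutron star is h0 = 4 pi^2 G I epsilon f^2 / ( c^4 d ), with I the moment of inertia, epsilon the ellipticity ( the 'wobbliness' ), f the gravitational-wave frequency and d the distance. A quick sketch - the particular parameter values below are my own illustrative picks, not the paper's :

Code:

# Rough strain estimate for a rotating nonaxisymmetric neutron star:
#   h0 = 4 * pi^2 * G * I * epsilon * f^2 / (c^4 * d)
# The parameter values are illustrative textbook-style choices.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
I = 1e38             # canonical neutron-star moment of inertia, kg m^2
epsilon = 1e-6       # assumed ellipticity ('mountain' size)
f_gw = 200.0         # gravitational-wave frequency, Hz (2 x spin frequency)
d = 3.086e19         # 1 kiloparsec in metres

h0 = 4 * math.pi**2 * G * I * epsilon * f_gw**2 / (c**4 * d)
print(f"h0 ~ {h0:.1e}")   # about 4e-26 for these choices

So a star like that at 1 kpc sits well below the 3 x 10^(-24) threshold quoted above, which fits with the third option.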

So I reckon that means my third option : "- there are no sources above that level. The detectors have correctly reported that."

No need to be discouraged! The LIGO planners weren't expecting firm detections until Advanced LIGO. It was always understood that the design and implementation would be incremental, that it would take progressive refinement toward the fancier engineering features. Each time one of these reports is published it shows ever more experience with the processes of the project. Practice makes perfect! :-)

It would have been better if the hardware injections ( deliberate 'bumping' of the interferometers ) were more timely. That's a neat check of the implementation. Simulate a wave arrival and see if we pick it up.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

debugas
Joined: 11 Nov 04
Posts: 170
Credit: 77331
RAC: 0

Thank You for the

Message 93294 in response to message 93293

Thank you for the explanation. So if I understood you right, we have two options here to move forward:
1) think of other patterns to look for in the same data
2) fine-tune instrumentation to look for the same but weaker patterns

And if I've understood you right, we should still be optimistic about the 2nd option.

Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

RE: 2) fine-tune

Message 93295 in response to message 93294

Quote:
2) fine-tune instrumentation to look for the same but weaker patterns

Describing this as 'fine-tuning the instruments' is really not very accurate.

The construction of the LIGO detectors was completed in 1999, when the serious commissioning work began. Since that time, the strain sensitivity has increased by more than two orders of magnitude (see the noise evolution graph).

The road-map for the LIGO detectors includes two more significant upgrades, so that by 2014 the instruments will be one order of magnitude more sensitive than during the S5 run. This means that we can observe a spatial volume that is 1000 times larger than what was visible during S5 (the visible volume grows like the cube of the sensitivity).
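
That cube is just the strain falling off as one over distance: the maximum visible distance scales linearly with sensitivity, so the surveyed volume scales as its cube. A one-line check of the arithmetic:

Code:

# Strain h falls off as 1/d, so the farthest visible distance scales
# linearly with detector sensitivity, and the surveyed volume as its cube.
sensitivity_gain = 10.0
range_gain = sensitivity_gain      # d_max is proportional to 1/h_min
volume_gain = range_gain ** 3
print(volume_gain)                 # prints 1000.0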

Describing this evolution as 'fine-tuning' of the instrument is really not accurate! It's like saying that a Porsche 911 is just a 'fine-tuned' version of a Ford Model T.

In addition, we are continuing to improve our analysis methods. For example see this paper on improved analysis methods.

As the detectors and the data analysis methods improve, our chances of making a CW source detection go up. But in absolute terms we can't say how probable this is, because we do not know how big neutron star 'mountains' are. See Figure 5 of this paper for some reasonably solid UPPER LIMITS on the expected maximum strain, as a function of the (fractional) mountain-height epsilon.

Director, Einstein@Home

Ver Greeneyes
Joined: 26 Mar 09
Posts: 140
Credit: 9562235
RAC: 0

It was mentioned in the paper

It was mentioned in the paper that some more sources of instrumentation noise are now understood, and were removed from the data after crunching. Has this noise been pre-removed from the data we're now crunching, or is our understanding of these sources too recent for it to affect the rest of S5? (in terms of pre-processing)
