S4 Search nearing completion - Einstein in Hibernation?
This is the latest that I am aware of, i.e. the same type of search, probably with longer WUs.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Well, I read that, but it
Well, I read that, but it doesn't say much about what to expect during the transition.
RE: Well, I read that, but
I agree it's not explicit, but as I read it, processing will go straight on into the S5 data they already have.
Failing that, maybe they could put up some old or test WUs and let us test an official optimised client?
Dead men don't get the baby washed. HTH
RE: Well, I read that, but
from Ben Owen:
We're not on S5 yet. Right now it looks like mid-summer.
RE: RE: Well, I read
It's not clear from that if he's saying mid-summer for the S4-S5 transition, or that S5 won't be ready until mid-summer regardless of what happens with S4. My first read was the latter, but I'll freely admit it's ambiguous.
To my knowledge the S5 run
To my knowledge the S5 run has not been launched yet, in the sense that the interferometer hardware has not yet been upgraded to the desired level. This is expected to be done in the middle of summer. Hence, if the S4 analysis finishes too soon, there will be no S5 data to analyze.
RE: To my knowledge the S5
The interferometers have been collecting data since November '05, as they are now, admittedly with some problems. The issue is how much 'decent' data there is, or will be, for signal analysis for us in the near future. The best type of data is 'triple lock', where all three LIGOs ( Hanford 2km, Hanford 4km, and Livingston 4km ) are in optimal listening mode simultaneously. That coincidence was about half of the desired/designed goal in the period up to mid March. There is a poorly understood noise source at low frequencies which seems to be related to time of day, 'traffic' if you like. They have on occasion had to reduce laser power to maintain lock, but that reduces sensitivity. My guess is that it is not clear to the project scientists right now which subset of the collected data is best to present to us for crunching.
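The 'triple lock' bookkeeping above amounts to intersecting each detector's lock segments and totalling the overlap. Here is a toy sketch of that calculation; the intervals are made up for illustration and are nothing like real LIGO segment lists:

```python
# Toy 'triple lock' coincidence: intersect per-detector lock segments
# (start, end) in seconds, then total the triple-coincident live time.
# All intervals below are hypothetical, not real LIGO data.

def intersect(a, b):
    """Intersect two sorted, non-overlapping segment lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo < hi:
            out.append((lo, hi))
        # Advance whichever segment ends first.
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

h1 = [(0, 600), (900, 2000)]    # Hanford 4 km (invented)
h2 = [(100, 1500)]              # Hanford 2 km (invented)
l1 = [(0, 1200), (1400, 1900)]  # Livingston 4 km (invented)

triple = intersect(intersect(h1, h2), l1)
live_time = sum(hi - lo for lo, hi in triple)
print(triple, live_time)  # triple-coincident segments and their total seconds
```

The same two-pointer intersection, applied pairwise, is how one would tally any coincidence requirement from per-detector duty cycles.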
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Thanx for the fresh
Thanx for the fresh information, Mike, I look forward to crunching the S5 data.
RE: Thanx for the fresh
I forgot to say that to a certain extent the S4 data analysis can be extended anyway. The central idea is that collected data is compared against expected signal profiles ( binary inspirals, say ). This is done over certain time periods to sift the data and attempt to uncover patterns. The longer the period of analysis, the better the chance of detecting the fainter disturbances. So you choose a data set and a signal template of interest, then crunch - perhaps producing a probabilistic statement about a 'discovery'. The same soil can be tilled in many ways for results.
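The point that longer stretches dig out fainter signals can be sketched numerically: correlating noisy data against a known template, the recovered signal-to-noise ratio grows roughly with the square root of the observation length. This uses a toy sinusoid and made-up parameters, not a real inspiral template:

```python
import math
import random

random.seed(0)
fs = 100.0   # sample rate in Hz (toy value)
f0 = 7.0     # hypothetical signal frequency
amp = 0.2    # weak signal buried in unit-variance noise

def snr(duration):
    """Correlate simulated data against the known template; return the SNR."""
    n = int(duration * fs)
    template = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
    data = [amp * s + random.gauss(0, 1) for s in template]
    corr = sum(d * s for d, s in zip(data, template))
    norm = math.sqrt(sum(s * s for s in template))
    return corr / norm  # grows like amp * sqrt(n/2) on average

# A hundred-fold longer stretch of data yields a far stronger detection.
print(snr(10), snr(1000))
```

That square-root growth is why re-crunching the same S4 data over longer analysis periods can still yield new upper limits.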
The current analysis is obviously looking for expected signals, but it can also put upper bounds on various astronomical phenomena. In addition they have been testing the whole 'data pipeline' with hardware injections ( actually 'jiggling' the detectors, if you like ) and software injections ( sending out work units with 'constructed' data ) to see whether the whole analysis system is accurate and robust. A calibration of the method, really, to gain further understanding, refinement, and confidence in this new adventure!
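The software-injection idea - planting a constructed signal in the data and checking that the analysis recovers it - can be mimicked in a few lines. Everything here (frequencies, the tiny template bank, the noise model) is invented for illustration and bears no resemblance to the real Einstein@Home pipeline:

```python
import math
import random

random.seed(1)
fs, n = 100.0, 4000
inj_f = 13.0  # frequency of the injected 'constructed' signal (toy value)

# 'Constructed' data: a weak sinusoid buried in Gaussian noise.
data = [0.3 * math.sin(2 * math.pi * inj_f * t / fs) + random.gauss(0, 1)
        for t in range(n)]

def power(f):
    """Correlation power of the data against a template at frequency f."""
    c = sum(d * math.cos(2 * math.pi * f * t / fs) for t, d in enumerate(data))
    s = sum(d * math.sin(2 * math.pi * f * t / fs) for t, d in enumerate(data))
    return c * c + s * s

# Scan a small template bank; the 'pipeline' passes the test
# if the loudest template matches the injected frequency.
bank = [5.0, 9.0, 13.0, 17.0, 21.0]
recovered = max(bank, key=power)
print(recovered)
```

A real injection campaign does the same thing at scale: known signals go in, and the end-to-end recovery rate calibrates how much to trust a claimed detection.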
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Thanks a lot Mike! Great
Thanks a lot Mike! Great, insightful posts. I keep my fingers crossed for the project team to come up with a workable solution. Maybe it would help if everyone on planet Earth stopped using their cars for 24 hours. Maybe it would help if I admitted myself to mental care, too. :-)
Cheers
Soenke
:
your thoughts - the ways :: the knowledge - your space
: