From this I conclude that this has already been tried for a limited range of parameters, without success. So, will it be repeated here with a much wider field of parameters? Furthermore, don't these glitches - abrupt changes in rotational speed, likely due to material reorientation inside the pulsar - emit GW power?
Kind regards and happy crunching
Martin
S6CasA is actually part of a larger effort that we internally call "S6Directed", i.e. an analysis of "S6" data targeting single positions (hopefully GW sources) in the sky. We started with the most promising "Cassiopeia A", and will then simply move on to the next target (there are about a dozen on our list). We are also working on improving the analysis code for this, trying to make it more efficient and sensitive.
At this point I have to caution that I'm not a physicist, but hey, I can point you to what physicists have done:
GW searches for the Crab pulsar:
A presentation with a bird's-eye view of observational results:
http://gr20-amaldi10.edu.pl/userfiles/12-01_Marie Anne Bizouard - Observational Results from Ground-based___.pdf
A pointer to results for the Crab pulsar and the Vela pulsar (another well-studied young pulsar and a real champion when it comes to glitching) is on page 44:
Vela: Astrophys. J. 737 (2011) 93 : http://iopscience.iop.org/0004-637X/737/2/93/
Crab and others: Astrophys. J. 713 (2010) 671 : http://iopscience.iop.org/0004-637X/713/1/671/
There might be newer studies but you will get an idea from those papers.
You will also easily find papers that discuss how/if pulsar glitches (and whatever happens before or after the glitch) might cause detectable GWs:
Phys. Rev. D 84, 023007, also at http://arxiv.org/pdf/1104.1704.pdf
http://www.phys.ufl.edu/ireu/IREU2013/pdf_reports/Corey_Bathurst_Final_Report.pdf
I'm not aware of any plans to do narrow-band or even targeted searches for Einstein@Home. That's simply not the kind of search we specialize in; E@H is better at all-sky searches, or at directed searches if you are looking for continuous gravitational waves.
Cheers
HB
Well, I'm game ! :-0
My reading of Prix et al. seems to indicate that glitches can have detectable GW signatures, under the curious title of 'transient continuous waves'. These are further subdivided into 'repeating' and 'non-repeating' transient continuous waves. :-)
This appears to mean that processes either causative to, or consequent upon, the glitching events generate GW emission regularities that decay in amplitude over far shorter time periods ( e.g. weeks ) than the typical winding-down behaviour of the spinning neutron star. That in turn depends upon the physics, or model thereof, applied to the problem.
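As a toy sketch of what such a signal might look like (my own illustration with made-up numbers, not anything from the actual search code): a sinusoid at the GW frequency whose amplitude decays exponentially after the glitch, on a timescale of days to weeks:

```python
import math

def transient_cw(t_days, t_glitch_days, h0, tau_days, f_gw_hz):
    """Toy post-glitch transient signal: zero before the glitch,
    then a sinusoid whose amplitude decays with timescale tau_days."""
    if t_days < t_glitch_days:
        return 0.0
    envelope = h0 * math.exp(-(t_days - t_glitch_days) / tau_days)
    return envelope * math.sin(2.0 * math.pi * f_gw_hz * t_days * 86400.0)

# With tau = 14 days, only ~12% of the initial amplitude is left
# after 30 days -- far faster than the star's secular spin-down.
remaining = math.exp(-30.0 / 14.0)
```

( h0, tau and f_gw here are free parameters of the toy model; a real search would have to cover a grid of them. )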
As for all the Bayesian stuff : aarggh .... I've never really grasped Bayes .... :-(
However for us simple souls there is ( my emphasis ) :
Quote:
In addition to the fully coherent search method, we have derived the necessary formalism for a semi-coherent transient search, which could be used to perform an all-sky, all-frequency wide parameter-space transient search, for example running on Einstein@Home. More work is required to fully develop and implement this approach.
Cheers, Mike.
( edit ) As far as I can tell : one takes the 'standard' continuous wave model and looks at detection within shorter 'windows'. This necessarily affects the detection statistics, or if you like, the confidence that one can place upon a positive result from a given template actually being evidence for a real phenomenon in the sky. That is : what likelihood may be assigned to us being fooled by random/uncorrelated noise.
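A toy calculation of that last point (the numbers are entirely made up by me): the more independent windows/templates you try, the more chances pure noise has to cross your threshold somewhere.

```python
def false_alarm_anywhere(p_single, n_trials):
    """Probability that at least one of n independent trials crosses
    the detection threshold when only noise is present."""
    return 1.0 - (1.0 - p_single) ** n_trials

p1 = 1e-6  # assumed per-template false-alarm probability

false_alarm_anywhere(p1, 1)          # ~ 1e-6
false_alarm_anywhere(p1, 1_000_000)  # ~ 0.63: a false alarm becomes likely
```

So the per-template threshold has to be raised as the number of windows grows, which is one way shorter 'windows' eat into the confidence of any single candidate.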
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Quote:
I'm not aware of any plans to do narrow-band or even targeted searches for Einstein@Home. That's simply not the kind of search we specialize in; E@H is better at all-sky searches, or at directed searches if you are looking for continuous gravitational waves.
The main argument here is that these searches have a computing-time to data-volume ratio that makes them infeasible for volunteer computing. Your computers would spend more time downloading data for a task than computing it.
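A back-of-the-envelope illustration of why that ratio matters (all figures here are my own assumptions, not real Einstein@Home numbers):

```python
def hours_to_download(data_mb, link_mbit_per_s):
    """Time in hours to fetch a task's input data over a given link."""
    return (data_mb * 8.0 / link_mbit_per_s) / 3600.0

# Hypothetical narrow-band task: a lot of input data, little computation.
dl_hours = hours_to_download(data_mb=4000, link_mbit_per_s=16)  # ~ 0.56 h
compute_hours = 0.25  # assumed crunch time for that task

# Downloading takes over twice as long as computing -- a poor fit for
# volunteer computing, where the volunteers' bandwidth is the scarce resource.
ratio = dl_hours / compute_hours
```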
BM
Quote:
S6CasA is actually part of a larger effort that we internally call "S6Directed", i.e. an analysis of "S6" data targeting single positions (hopefully GW sources) in the sky.
Update on that: At present it looks like there will be numerous and substantial changes to the analysis application code, whose effects, e.g. on the runtime behavior, are difficult to foresee. We will probably have a series of short "engineering runs" to get a better understanding of the new code in the heterogeneous environment of Einstein@Home before we continue with actual searches.
BM
Quote:
We will probably have a series of short "engineering runs" to get a better understanding of the new code in the heterogeneous environment of Einstein@Home before we continue with actual searches.
Yes. Experience has shown the ability of the E@H milieu to step upon the mines in the field. :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Another update: still under heavy discussion and thus quite preliminary, but it currently looks like after the current S6CasA "directed" run we will squeeze in a short (1-2 month) run in which we will follow up a few (million) candidates from the S6Bucket runs. The remaining ~45 days of S6CasA look a bit tight for setting one up, but essentially we would just need a new workunit generator - data, application, validator etc. are still there from the original run. In any case, a follow-up run would fit in much better now than it would by interrupting the next "science run" or waiting for its end.
BM
Hello,
What's planned after the FGRP4 run?
Hello Majo!
Please have a look at this thread.
Merry Christmas and a happy New Year!
Kind regards and happy crunching
Martin
We will get more data for FGRP4, so the run will last longer than currently shown on the server status page.
However, all current searches are chewing through old data, and neither Arecibo nor Fermi produces enough new data to keep Einstein@Home continuously fed from these.
Work on the advanced LIGO detectors is well underway, and they should start taking "scientific" data soon. Over the course of 2015 the focus of Einstein@Home will shift back to its original purpose, the analysis of data from gravitational-wave observatories.
BM
What is planned after the current BRP5 run, which uses data from the Parkes radio telescope in Australia?