Einstein@home 'albert' parameters and file format questions

Roberto Virga
Joined: 22 Jan 05
Posts: 7
Credit: 3746
RAC: 0
Topic 190767

Dear Prof. Allen and E@h staff,

I'm the author of a little BOINC add-on for Linux which tries to provide some insight about the workunits being computed. I already got some info from you about the old 'einstein' client's parameters (thanks!).
However, now that you've released a new 'albert' client, I'm back with a couple of additional questions.

Question 1. Among parameters passed to the new client there are "f1dot" and "f1dotBand", e.g.
[pre] --f1dot=1.26506e-10 --f1dotBand=-1.39157e-09 [/pre]Does f1dot stand for dF/dt, the first derivative of the frequency w.r.t. time, so that in the above example we restrict the search to cases where dF/dt lies in the interval:
[pre](1.26506e-10, 1.26506e-10 - 1.39157e-09)[/pre]Or am I completely off-base? If this is dF/dt, are these numbers in Hz/sec?

Question 2. Each line of the result file contains 5 numbers, like:
[pre]398.9391310942 0.925531 -1.37814 -9.8901e-11 19.3705[/pre]Do they stand for the following: frequency, RA, decl., dF/dt, score, respectively? Or, if I'm guessing wrong, could you kindly provide an explanation of what they are?

Thanks in advance for your help,

- Roberto

Ben Owen
Joined: 21 Dec 04
Posts: 117
Credit: 65696253
RAC: 3076

Einstein@home 'albert' parameters and file format questions

Roberto,

Sounds like a nice guide, which brings to mind ... we never did explain the scientific novelty of the albert application, did we?

Oops. Here it is.

The main thing is that albert deals with 30 hours of data at a time rather than 10. It's rare for any of the interferometers to stay in science mode for that long - when I was running shifts just now for S5, which is supposed to be better than S4, H2 was the most reliable instrument and its record I think was 26 hours - so that data may span up to 40 hours of real time.

The reason we want to search longer data stretches is that we can get greater sensitivity. The reason we don't search longer and longer ones is computing power. We really, really need all those CPUs. The computational cost grows as a pretty high power of the length of the data segments.

One reason for that cost is related to the command-line arguments you're seeing. f1dot is indeed the frequency derivative (its rate of change with respect to time), and the interval is as you guessed. You guessed right about the output, too.
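To make that concrete, here is a minimal Python sketch (an illustration only, not Einstein@Home code; the variable names and the radians assumption for the sky coordinates are mine, while the field order and the example values come from the posts above):

[pre]
# Minimal illustration, not Einstein@Home code: interpret the albert
# command-line parameters and one result line as described above.
# All names are mine; only the field order and the example values
# come from this thread.

def f1dot_interval(f1dot, f1dot_band):
    """Searched dF/dt interval in Hz/s; f1dotBand may be negative
    (as in the example above), so order the endpoints explicitly."""
    lo, hi = sorted((f1dot, f1dot + f1dot_band))
    return lo, hi

def parse_result_line(line):
    """Split a result line into its five fields: frequency (Hz),
    right ascension and declination (presumably radians), dF/dt
    (Hz/s), and the detection statistic."""
    freq, ra, dec, f1dot, score = (float(x) for x in line.split())
    return freq, ra, dec, f1dot, score

print(f1dot_interval(1.26506e-10, -1.39157e-09))
# -> approximately (-1.265064e-09, 1.26506e-10)
print(parse_result_line("398.9391310942 0.925531 -1.37814 -9.8901e-11 19.3705"))
[/pre]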

The 10 hour stretches of data in the old work units were short enough that even a very strong source with a very rapidly changing frequency wouldn't change enough to affect the analysis. But over 30 hours, we do have to account for possible values of the change, and the new work units search over various values of f1dot. At one point we were thinking about adding a thermometer to the graphics so you could see it going through the fdot values while it dwelled on a single sky position. But now we go through sky positions very fast, so the thermometer wouldn't be very entertaining.
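A quick back-of-the-envelope shows the difference (my own illustration, not numbers from the analysis code: I assume the search resolves frequencies to roughly 1/T, and pick |dF/dt| ~ 1e-9 Hz/s, the order of the f1dotBand value quoted above):

[pre]
# Rough sketch of why f1dot matters at 30 hours but not at 10.
# Assumptions (mine): a coherent search over time T resolves
# frequencies to about 1/T; |dF/dt| = 1e-9 Hz/s, the order of the
# f1dotBand quoted earlier in the thread.
F1DOT = 1e-9  # Hz/s

for hours in (10, 30):
    T = hours * 3600.0        # segment length in seconds
    bin_width = 1.0 / T       # rough frequency resolution, Hz
    drift = F1DOT * T         # frequency change over the segment, Hz
    print(f"{hours} h: drift {drift:.1e} Hz ~ "
          f"{drift / bin_width:.1f} frequency bins")

# 10 h: ~1.3 bins, so a single template barely notices the drift.
# 30 h: ~11.7 bins, so the search has to step through f1dot values.
[/pre]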

Since this new search is much more expensive than the old one, we are being more careful about how we choose the sky positions. You can see from the screen saver that the grid it covers on the sky is much more finely spaced for some work units than for others. Basically, at low frequency you can get away with a very coarse grid, but at high frequency you need much better resolution of the sky position. In the old work units we were overcovering with a "one size fits all" grid, but with the cheaper 10-hour data segments that wasn't the big deal it is now.

To keep the work units more or less the same length in spite of this, the frequency band each one searches is now variable.
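For the curious, the standard rule of thumb behind both effects goes like this (an illustration with my own round numbers, not figures from the actual search code): the Earth's motion Doppler-shifts a source by up to about (v/c)·f with v/c ~ 1e-4, and a sky-position error mis-models that shift, which has to stay below one frequency bin ~1/T.

[pre]
# Back-of-the-envelope sky-grid scaling (my own rule-of-thumb numbers,
# not from the post).  A sky offset d_theta mis-models the Doppler
# shift by roughly (v/c)*f*d_theta, which should stay below one
# frequency bin ~1/T.  Spacing ~ 1/(f*T), so sky templates ~ (f*T)^2.
V_OVER_C = 1e-4   # Earth's orbital speed over the speed of light
T = 30 * 3600.0   # 30-hour segment, from the post

for f in (100.0, 400.0, 1500.0):  # example frequencies in Hz (mine)
    x = V_OVER_C * f * T
    print(f"f = {f:6.0f} Hz: spacing ~ {1.0 / x:.1e} rad, "
          f"sky templates ~ {x * x:.1e}")

# To hold the cost per work unit roughly fixed, the frequency band
# per unit would then have to shrink roughly like 1/f^2.
[/pre]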

There are some more subtle changes to how the grids are made, which you might be able to see if you look very carefully. I've got to go now, but since that's one of the things I designed I'd be glad to talk about it later.

Hope this helps,
Ben

Roberto Virga
Joined: 22 Jan 05
Posts: 7
Credit: 3746
RAC: 0

Ben, thank you very much

Message 25142 in response to message 25141

Ben,

thank you very much for taking the time to write this detailed explanation. It provided so much more information than I could have hoped for!

I have one more question. The --IFO= parameter tells the client which interferometer the data comes from. Currently it's always --IFO=LHO, but other possible values are:
GEO - GEO600
LLO - LIGO Livingston Observatory
NAUTILUS - NAUTILUS at INFN-FNL
VIRGO - VIRGO at EGO
TAMA - TAMA300 in Japan
CIT - California Institute of Technology (? - I can't find info about their interferometer)
Is Einstein@Home eventually going to use data from these interferometers as well (even though some of them are less sensitive than the one at LHO)?

Thanks for your help,

- Roberto

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0

RE: Ben, thank you very

Message 25143 in response to message 25142

Quote:

Ben,

thank you very much for taking the time to write this detailed explanation. It provided so much more information than I could have hoped for!

I have one more question. The --IFO= parameter tells the client which interferometer the data comes from. Currently it's always --IFO=LHO, but other possible values are:
GEO - GEO600
LLO - LIGO Livingston Observatory
NAUTILUS - NAUTILUS at INFN-FNL
VIRGO - VIRGO at EGO
TAMA - TAMA300 in Japan
CIT - California Institute of Technology (? - I can't find info about their interferometer)
Is Einstein@Home eventually going to use data from these interferometers as well (even though some of them are less sensitive than the one at LHO)?

Thanks for your help,

- Roberto


NAUTILUS is not an interferometer but a resonant-mass antenna operating at cryogenic temperatures, similar in principle to Weber's pioneering apparatus of the 1960s.
Tullio

Jim-R
Joined: 8 Feb 06
Posts: 12
Credit: 4352
RAC: 0

CIT - California Institute of

Message 25144 in response to message 25142


Quote:

CIT - California Institute of Technology (? - I can't find info about their interferometer)

I did a little searching, not being familiar with this one either. I believe this probably refers to a 40 m prototype laser interferometer built by Caltech and used as the prototype for LIGO.

Quoted from http://www.nap.edu/openbook/0309090849/html/110.html

"In 1980, NSF provided funds to MIT to complete the 1.5-m prototype interferometer and a technical site and cost study of a large-baseline interferometer. It also funded a 40-m prototype interferometer, which Drever and Stan Whitcomb began constructing at Caltech in 1981. The Caltech interferometer began running in July 1982 and became a testbed for the future LIGO design."

Don't know if this is the one referred to as CIT but it seems possible. I haven't found much on the net about it myself. Maybe someone more familiar with it can enlighten us.

When asked a question and you are not sure of the right answer, I've found that the best answer is always "I don't know for sure, but I'll find out!"

Desti
Joined: 20 Aug 05
Posts: 117
Credit: 23762214
RAC: 0

You are the author of

You are the author of KBoincSpy? That's a great tool, thank you!

Is it possible to add logging functions for more projects, like the result logs for seti@home?

Roberto Virga
Joined: 22 Jan 05
Posts: 7
Credit: 3746
RAC: 0

RE: Is it possible to add

Message 25146 in response to message 25145

Quote:
Is it possible to add logging functions for more projects, like the result logs for seti@home?


Funny you should ask - just last night I implemented a log for Einstein@Home. It logs both workunits and results. Its format is compatible with BoincLogX (for the 'einstein' client; I'm working with the BoincLogX/SetiMapView author to make sure the log format will be compatible for the 'albert' client too).
To keep the results log from growing too quickly (every E@h workunit typically returns hundreds or even thousands of candidates), it logs only results whose F-statistic exceeds a user-selectable threshold of 2F=25, 2F=50, 2F=75, 2F=100, or 2F=125 (2F=75 is the default).
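In outline, the filter is simply this (a Python sketch, not the actual KBoincSpy code; the only facts taken from the thread are the five-column format, the threshold choices, and the default):

[pre]
# Sketch of the filter described above, not the actual KBoincSpy code.
# The detection statistic 2F is the fifth column of each result line,
# as discussed earlier in the thread.
THRESHOLDS = (25.0, 50.0, 75.0, 100.0, 125.0)  # user-selectable

def filter_results(lines, two_f_min=75.0):  # 2F=75 is the default
    """Yield only result lines whose 2F passes the chosen cut."""
    for line in lines:
        fields = line.split()
        if len(fields) == 5 and float(fields[4]) >= two_f_min:
            yield line

# First line is from the thread; the second is made up for illustration.
sample = ["398.9391310942 0.925531 -1.37814 -9.8901e-11 19.3705",
          "400.1000000000 0.925531 -1.37814 -9.8901e-11 88.2000"]
print(list(filter_results(sample)))  # only the 2F = 88.2 line survives
[/pre]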

- Roberto

Ben Owen
Joined: 21 Dec 04
Posts: 117
Credit: 65696253
RAC: 3076

Roberto, LLO will

Roberto,

LLO will definitely be used. Actually, I think you should see it now and then in this run. It's been a while since we decided this and my memory is hazy, but I think about 10% of the data in the current work units is LLO. The sensitivity isn't much worse than the 4km at LHO, and LLO had some good long stretches.

CIT is indeed the 40m prototype, but E@H won't be analyzing any data from it.

What you are seeing with all those options is basically a bunch of leftovers from code that was originally written for other purposes. E@H will be doing just LHO and LLO for a while, though when VIRGO gets up and running it should achieve comparable sensitivity and might end up on E@H in a couple of years. GEO600 and TAMA have shorter arm lengths and so can't compete on broadband sensitivity. They (at least GEO600) are working on tricks to do deep narrowband searches at high frequency, but it means they probably wouldn't be in on this all-sky search for previously unknown neutron stars.

Hope this helps,
Ben
