Interesting to note that LIGO Hanford is more sensitive than GEO600 on S3. Is GEO600 up to par on S4? What are some of the upgrades that are planned for LIGO Hanford, and how will they improve sensitivity?


## RE: Outstanding


Improvements in sensitivity get harder and harder as you get closer to the design goal. We've made two key upgrades at Hanford this summer, to the 4 km instrument, to make it more sensitive and get very near our design goal for our one-year run starting November 5, 2005. The first was to swap out a suspended test mass, one of the core optics of the long arm of the machine. This optic was too reflective, scattering some excess light out of the arm and spoiling the heating (the thermal lensing property) of the cavity. We installed a better, less reflective optic. This is described (with some nice photos) at: http://www.ligo-wa.caltech.edu/ligo_science/vent_0605.html

Furthermore, we've been boosting the laser power incident on the 4 km machine (and the others as well), in order to reduce the noise at high frequency (so-called shot noise).
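(The reason more laser power helps at high frequency is the standard shot-noise scaling: the shot-noise-limited strain amplitude falls as the square root of the circulating power. A quick sketch of that scaling, with purely illustrative power values of my own choosing, not actual LIGO numbers:)

```python
import math

def shot_noise_scaling(p_new_watts, p_old_watts):
    """Factor by which the shot-noise amplitude floor drops when the
    laser power is increased: amplitude noise scales as 1/sqrt(power)."""
    return math.sqrt(p_old_watts / p_new_watts)

# Example: doubling the input power (illustrative numbers only)
factor = shot_noise_scaling(8.0, 4.0)
print(f"noise floor multiplied by {factor:.3f}")  # ~0.707, i.e. ~29% lower
```

So every doubling of power buys roughly a factor of 1.4 in high-frequency sensitivity, which is why power increases get you "very near" design but with diminishing returns.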

## RE: Improvements in


Thanks, landry – looks like quite a chore swapping out a test mass! Surprising to learn that it takes about 5 weeks to reach operational pressure of 10^-9 torr, even with the partitioned enclosure. So if the old ITMX is no good, has anyone called dibs on it yet? :) Looking forward to S5!

## This may be the wrong place


This may be the wrong place to ask this question. If so, I apologize. We are currently scanning the entire celestial sphere for pulsar signals. This seems like a diffuse search. Why don't we, at some point, focus a large effort in the direction of a single known pulsar? We would know precisely what frequency the signal should have, and we could concentrate a lot of computing power on one spot, thus digging deeper into the data.

## Brian All known pulsars are too


Brian

All known pulsars are too far away to be detected by LIGO.

## Thank you, Mark. After I had


Thank you, Mark. After I had posted I realized that that was probably the reason. I'm very excited about LIGO and happy to be able to contribute in some way.

## RE: Brian All known pulsars


While it is true that many of the known pulsars are farther away than we'd like, LIGO and GEO do target them for analysis. This is typically done with different code and search algorithms than Einstein@Home; see, for example, http://xxx.lanl.gov/PS_cache/gr-qc/pdf/0410/0410007.pdf (this preprint is a technical paper that was published in Physical Review Letters).

## RE: RE: All known pulsars


Just to expand on what Mike Landry said: They are all too far away for the S4 search to get them. But S5 will have a shot at the Crab pulsar.

How do we know this?

The radio astronomers can tell us not only how fast a pulsar is spinning, but also how fast it's spinning down. (The frequency slowly decreases with time.)

As noted in the "Am I on the right track?" thread, a pulsar that is emitting gravitational waves will spin down as it loses kinetic energy of rotation. If you assume that all of that observed energy loss is emitted as gravitational waves, you can plug in the distance to the pulsar and get a number for the strength of the gravitational wave signal. This "spindown limit" for the Crab pulsar is high enough that S5 will be able to see it if all the spindown is from gravitational waves.
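(To make the "plug in the distance and get a number" step concrete, here is a back-of-the-envelope version of that spindown-limit calculation for the Crab. The formula is the standard spindown limit; the Crab parameters below are approximate values I've filled in for illustration, not numbers from this thread.)

```python
import math

# Spindown limit: assume ALL the observed rotational energy loss is
# emitted as gravitational waves, then
#   h0 = (1/d) * sqrt(5 G I |f_dot| / (2 c^3 f))
# where f and f_dot are the GW frequency and its derivative (twice the
# rotation values, for emission at twice the spin frequency).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
I = 1e38             # kg m^2, canonical neutron-star moment of inertia
kpc = 3.086e19       # metres per kiloparsec

f_rot = 29.7         # Hz, Crab rotation frequency (approximate)
fdot_rot = 3.7e-10   # Hz/s, magnitude of its spindown (approximate)
d = 2.0 * kpc        # distance to the Crab, roughly 2 kpc

f_gw = 2 * f_rot
fdot_gw = 2 * fdot_rot

h0 = (1 / d) * math.sqrt(5 * G * I * fdot_gw / (2 * c**3 * f_gw))
print(f"Crab spindown-limit strain: h0 ~ {h0:.1e}")  # around 1.4e-24
```

A strain of order 10^-24 is what puts the Crab within reach of a year-long S5 run, at least under the (optimistic) all-gravitational-waves assumption.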

In real life, we know that's overly optimistic. Radio pulsars are certainly emitting radio waves, after all, so only some fraction of the spindown could be due to gravitational waves. For the Crab it's probably a pretty small fraction of the spindown, because it sits in a big glowing nebula that it is constantly stirring up, as you can see from Chandra and Hubble.

Also, the results of the all-sky search for unknown pulsars that Einstein@Home is doing can be phrased in terms of limits like that. It's trickier in this case, but we're working on it.

Hope this helps,

Ben

## Excuse me please for a petty


Excuse me please for a petty question, but I don't fully understand this phrase block from chapter nine, "How does the Einstein@Home S3 search work?":

"The end result is 2901 SFT files, each of which covers an (overlapping) band of about 0.8 Hz. Each file contains 1200 Fourier Transforms (60 ten-hour segments * 20 SFTs/ten-hour segment). The frequency range covered is from 50.0 Hz to 1500.5 Hz."

If I understood right:

60 ten-hour segments / all data x 20 SFTs / ten-hour segment = 1200 x (ten-hour segments x SFTs) / (all data x ten-hour segments) = 1200 SFTs / all data…

…but not each file…

## RE: Excuse me please for a


I would interpret that as:

The SFTs, being Fourier transforms, give you power vs. frequency data, derived from amplitude vs. time.

Define Frequency Bands

The total range of frequencies is 1500.5 - 50 = 1450.5 Hz.

Chop this range into 0.5 Hz-wide blocks, giving 2 * 1450.5 = 2901 blocks.

Add the wings, hence the overlap, widening each to 0.8 Hz; still 2901 blocks, though.

Define Time Periods

Divide your best 600 hours of data into 30-minute periods: 600/(0.5) = 1200 periods.

Get Power vs. Frequency From Each Time Period

Do a Fourier transform on each period.

Look at a Given Frequency Band Across all Periods

For a given 0.8Hz frequency band, chop the data out of each of the transforms.

Repackage these by frequency band; that is, construct a file for each 0.8 Hz frequency band (2901 files), each of which contains the Fourier transform data obtained from each of the 30-minute segments of the 600 best hours. Hence 1200 data 'subsets' in each file.

Search For Signals

Hand them out for crunching.

(edit) So each 'SFT file' actually contains pieces of 1200 separate Fourier transforms. The 2901 files handed out for analysis are 'externally indexed' by frequency, but 'internally indexed' by time period, as opposed to the original transforms, which as a set were externally indexed by time period but internally could be divided by frequency. You're just cutting up the data set by 'rows' instead of 'columns'.
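To see the 'rows vs. columns' repackaging concretely, here is a toy sketch of my own (with made-up small numbers standing in for the real 1200 segments and 2901 bands; this is not the actual Einstein@Home code):

```python
# Repackaging transforms "by rows" (time segments) into "by columns"
# (frequency bands). Toy numbers, scaled way down from the real search.
n_segments = 12     # stands in for the 1200 thirty-minute segments
n_bins = 100        # frequency bins in each full transform
band = 10           # bins per band (stands in for the 0.5 Hz blocks)
wing = 3            # extra bins each side, giving the overlap (the "0.8 Hz")

# One "transform" per time segment: rows indexed by time, columns by
# frequency. Each cell records (segment, bin) so the origin stays visible.
sfts = [[(seg, b) for b in range(n_bins)] for seg in range(n_segments)]

# Repackage: one "file" per band, holding that band's bins (plus wings)
# from EVERY segment. Files are now indexed by frequency; inside each
# file, the rows are indexed by time.
band_files = []
for start in range(0, n_bins, band):
    lo, hi = max(0, start - wing), min(n_bins, start + band + wing)
    band_files.append([row[lo:hi] for row in sfts])

print(len(band_files))        # 10 band files (cf. the real 2901)
print(len(band_files[0]))     # each holds pieces of all 12 transforms
print(len(band_files[1][0]))  # interior bands are 10 + 2*3 = 16 bins wide
```

Every band file ends up with a slice of all 1200 (here, 12) transforms, which is exactly the "1200 Fourier Transforms per file, but 1200 SFTs in total" point that caused the confusion above.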

I have made this letter longer than usual because I lack the time to make it shorter. – Blaise Pascal

## RE: I would interpret that


Thank you for the explanation!