Out of Arecibo data

Mike Hewson
Joined: 1 Dec 05
Posts: 5758
Credit: 46032737
RAC: 4057

RE: RE: RE: With the

Quote:
Quote:
Quote:
With the latest changes in the resampling code, we're now doing the nearest neighbour resampling optimally.

I see ... I look forward to looking at the code at some point :) I am a little surprised that even a simple linear spline isn't worth the speed hit - something like "y[i] + (x - x[i])*(y[i+1] - y[i])/(x[i+1] - x[i])"?

That would be a low-pass filter in frequency, which is something we don't want to have; it limits our ability to see the high-frequency components.
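The low-pass effect can be seen numerically. Here is a toy sketch (my own setup, not the project code): linearly interpolating a near-Nyquist sinusoid halfway between its samples strongly attenuates it, whereas nearest-neighbour lookup preserves the amplitude (at the cost of a timing error).

```python
import numpy as np

# Toy demonstration (not the Einstein@Home code) of why linear
# interpolation acts as a low-pass filter: evaluate a sinusoid close
# to the Nyquist frequency halfway between its sample points.
n, dt = 1024, 1.0
t = np.arange(n) * dt
f = 0.45 / dt                      # Nyquist frequency is 0.5 / dt
y = np.cos(2 * np.pi * f * t)

# Linear interpolation at t + dt/2 is just the average of neighbours;
# analytically its amplitude shrinks by a factor |cos(pi * f * dt)|.
y_lin = 0.5 * (y[:-1] + y[1:])
# Nearest-neighbour at t + dt/2 simply reuses an existing sample.
y_nn = y[:-1]

print(np.max(np.abs(y_lin)))   # ~0.16: the high frequency is heavily attenuated
print(np.max(np.abs(y_nn)))    # 1.0: amplitude preserved, timing slightly off
```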


Since the linear approximation above is effectively the first order of a Taylor expansion, how many orders does the current code do/emulate? Or am I asking the wrong question? :-)
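For what it's worth, the two orders can be compared directly. A hypothetical sketch (my own code, assuming a uniformly sampled grid; not the search code) of zeroth-order (nearest-neighbour) versus first-order (linear) resampling:

```python
import numpy as np

# Hypothetical comparison (not the project code) of zeroth-order
# (nearest-neighbour) and first-order (linear) resampling on a
# uniformly sampled sinusoid.

def nearest_neighbour(x, y, xq):
    """Zeroth order: take the sample at the closest grid point."""
    dx = x[1] - x[0]
    i = np.clip(np.round((xq - x[0]) / dx).astype(int), 0, len(x) - 1)
    return y[i]

def linear(x, y, xq):
    """First order: y[i] + (xq - x[i]) * (y[i+1] - y[i]) / (x[i+1] - x[i])."""
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    t = (xq - x[i]) / (x[i + 1] - x[i])
    return y[i] + t * (y[i + 1] - y[i])

x = np.linspace(0.0, 1.0, 65)
y = np.sin(2 * np.pi * 5 * x)
xq = np.random.default_rng(0).uniform(0.0, 1.0, 1000)
truth = np.sin(2 * np.pi * 5 * xq)

rms0 = np.sqrt(np.mean((nearest_neighbour(x, y, xq) - truth) ** 2))
rms1 = np.sqrt(np.mean((linear(x, y, xq) - truth) ** 2))
# Linear wins on pointwise accuracy, but at the price of low-pass
# filtering and extra arithmetic per template evaluation.
```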

Cheers, Mike.


Benjamin Knispel
Joined: 1 Jun 06
Posts: 124
Credit: 4974142
RAC: 0

RE: Since the linear

Quote:

Since the linear approximation above is effectively the first order of a Taylor expansion, how many orders does the current code do/emulate? Or am I asking the wrong question? :-)

Cheers, Mike.

Hi Mike,

a few words of clarification: the re-sampling has to effectively move the time series received at the Earth to the barycenter of the binary pulsar system. This re-sampling depends on the assumed system parameters (projected orbital radius, orbital period, initial orbital phase) and has to be recalculated for each orbital template.

The code identifies the time sample by choosing the nearest bin in the received time series; the small error this introduces results in an average loss of about 1-2% of the signal power. This is an extremely small loss, considering the templates are spaced such that at most 30% (on average 17%) of the signal power can be lost. We could of course space them more closely, but that would increase the computational cost considerably.

More sophisticated (and more expensive) algorithms could combine the information from multiple time samples, but this would only help recover the above-mentioned 1-2% loss.
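To make the nearest-bin step concrete, here is a toy sketch (assumed parameter names and a circular orbit; not the actual search code) of shifting a received time series by an orbital delay using nearest-bin lookup:

```python
import numpy as np

# Toy sketch (not the project code) of nearest-bin re-sampling: evaluate
# the received series at delayed times t - tau(t), where tau models the
# projected orbital light-travel delay for one template.

def orbital_delay(t, a_sin_i, p_orb, phi0):
    """Light-travel delay (seconds) for an assumed circular orbit."""
    return a_sin_i * np.sin(2.0 * np.pi * t / p_orb + phi0)

def resample_nearest(series, dt, a_sin_i, p_orb, phi0):
    n = len(series)
    t = np.arange(n) * dt
    # Nearest-bin lookup: round the delayed time to the closest sample.
    bins = np.round((t - orbital_delay(t, a_sin_i, p_orb, phi0)) / dt)
    return series[np.clip(bins.astype(int), 0, n - 1)]

# Each orbital template (a_sin_i, p_orb, phi0) requires its own pass over
# the series, which is why the per-sample cost matters so much.
```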

Hope that helps to clarify this a bit.

Cheers, Ben

 

Einstein@Home Project

europa
Joined: 29 Oct 10
Posts: 49
Credit: 34029952
RAC: 0

Am I correct in thinking that

Am I correct in thinking that the ABP WUs are the only ones that use the GPUs? Are all the other types of Einstein WUs CPU-only?

Thanks,
Steve

Mike Hewson
Joined: 1 Dec 05
Posts: 5758
Credit: 46032737
RAC: 4057

RE: Hi Mike, a few words

Quote:

Hi Mike,

a few words of clarification: the re-sampling has to effectively move the time series received at the Earth to the barycenter of the binary pulsar system. This re-sampling depends on the assumed system parameters (projected orbital radius, orbital period, initial orbital phase) and has to be recalculated for each orbital template.

The code identifies the time sample by choosing the nearest bin in the received time series; the small error this introduces results in an average loss of about 1-2% of the signal power. This is an extremely small loss, considering the templates are spaced such that at most 30% (on average 17%) of the signal power can be lost. We could of course space them more closely, but that would increase the computational cost considerably.

More sophisticated (and more expensive) algorithms could combine the information from multiple time samples, but this would only help recover the above-mentioned 1-2% loss.

Hope that helps to clarify this a bit.


Thanks Ben. I keep forgetting we are dealing with digitised data ... having to tramp through a sequence of (time-based) bins, estimating to some in-between points or suchlike, would be painful. I guess you're saying that the little slice of time each bin-width represents is such that a bit of offset (to the nearest neighbour) from 'true' gives only the slightest of mismatches when convolving against the template (in any case rather less than other sins). Presumably if you found a signal you could go back and hammer the data harder. The 'risk' is that a signal out there could flick out of the significance bracket on account of that choice (sounds like vacuuming a house: there's always a bit you've probably missed, some mote of dust ...). What is the sampling frequency at the data sources?

@europa - yup, the GPU work units (marked as 'cuda') only do binary pulsar work. The GW units are done only on the CPU, but if you look at this page, some pulsar work can be done on CPUs as well.

Cheers, Mike.

