Hi! A lot of info is
Hi!
A lot of info is contained in the debug output, actually. You will see there that each result contains exactly 2^21 ~= 2*10^6 samples, which explains the length of the downloaded data. Powers of two are convenient to handle, e.g. in FFT, so this is no coincidence.
The sampling frequency is such that the sample time is 128 microseconds ... so you get approx 4.5 minutes of un-dispersed data for each result.
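As a quick sanity check of those two numbers (nothing here beyond what the post states), 2^21 samples at 128 microseconds each do indeed span about four and a half minutes:

```python
samples = 2 ** 21          # samples per result, as stated above
dt = 128e-6                # sample time in seconds
duration_s = samples * dt  # length of one result in seconds

print(samples)             # 2097152, i.e. roughly 2 * 10^6
print(duration_s)          # 268.435456 seconds
print(duration_s / 60)     # about 4.47 minutes
```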
HB
As you may have noticed, the
As you may have noticed, the promised new app versions are out, and also the first longer running ABP2 workunits have been sent out, actually almost a thousand, somewhat more than I intended. It looks like these error out after 25% on CUDA Apps. So if you recently (between 15:00 and 18:30 UTC today) got a bunch of these new tasks that show up with a time estimate of 4x what you know from ABP2 tasks and they have been assigned to CUDA Apps (ABP2cuda23), feel free to abort them - they'll probably error out.
BM
RE: Hi! A lot of info is
Hmm, interesting ... but I am only more confused.
2^21 samples occupy 2 megabytes, so 1 sample = 1 byte?
Then the data must be stored in an integer format (0 to 255)? A single byte cannot hold any floating-point format ...
Like the old 8-bit sound recordings on computers?
Are you sure about 128 microseconds? That corresponds to a sampling frequency of 7812 Hz, and at that rate you can only capture analog signals below 7812 / 2 ≈ 3906 Hz. But the pulsar signals are received by Arecibo at radio frequencies (megahertz), aren't they?
In total the archive currently holds more than 1200 hours of data. That corresponds to about 15,000 5-minute pieces of data before "dedispersion" (corrections for the possible distances to the pulsars), and 15,000 * 628 = 9,420,000 after?
And finally, how many templates (possible combinations of pulsar characteristics) are checked in each WU, and in each data set in total?
Or in other words: how many WUs need to be processed for each data set?
P.S.
I cannot see the "debug output" right now because for some reason my computers receive very few ABP2 tasks. At the moment I have no ABP2 tasks in the queue or among my finished results at all, only S5R6 GW tasks ...
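For what it's worth, the arithmetic in the question above can be checked directly. This little Python sketch only restates the poster's own numbers; the 628 dedispersion trials and the 5-minute pieces are their figures, not confirmed project parameters:

```python
# Sampling frequency implied by a 128 microsecond sample time
fs = 1 / 128e-6
print(fs)             # 7812.5 Hz
print(fs / 2)         # 3906.25 Hz, the Nyquist limit at this rate

# Archive size in 5-minute pieces, using the poster's round numbers
hours = 1200
pieces = hours * 60 // 5   # 14400, roughly the quoted "about 15,000"
print(pieces)
print(15_000 * 628)        # 9,420,000 dedispersed data sets
```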
Hi! Sampling frequency:
Hi!
Sampling frequency:
OK, but if you wanted to record on tape, say, a Morse code transmission over AM radio, what sampling frequency would you choose?
You would pick a sampling frequency sufficient to capture the acoustic signal that is *modulated* onto the carrier, not something on the order of magnitude of the carrier frequency itself (hundreds of kHz for AM).
For the pulsars, we are not interested in the exact waveform of the EM emissions of the pulsar, we just want to capture and time the pulses as such.
The fastest-spinning pulsars send a pulse every few milliseconds, so a sampling frequency of 1 / (128 microseconds), about 7.8 kHz, seems reasonable to me, because that's enough to catch the modulated "signal" we are interested in.
I know that some kind of compression is used for the sampled data, but because it's mostly noise, you cannot expect large compression ratios.
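HBE's point can be illustrated with a toy calculation. The 2 ms pulse period below is an assumed example of a fast millisecond pulsar, not a value from the thread:

```python
dt = 128e-6                 # sample time quoted in the thread
period = 2e-3               # assumed pulse period of a fast millisecond pulsar

samples_per_pulse = period / dt
print(samples_per_pulse)    # 15.625 samples across each pulse period
# Plenty to detect and time the pulses themselves, even though the radio
# carrier (hundreds of MHz) is never directly represented in the samples.
```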
CU
HBE
RE: As you may have
I just published new CUDA Apps (x.11) that should solve the problem with the new workunits. I'll keep an eye on it.
BM
RE: I know that some kind
Maybe you could use that compression algorithm, or a variant of it, to filter out the non-noise data. ;)
RE: RE: As you may have
Here is a completed quorum where one of the first two tasks sent out on Feb 8 was a 'CUDA app failure'. The resend was sent to one of my hosts on Feb 9 and it has now been crunched, returned and validated with the other initial task. It took pretty much exactly 4 times the old task crunch time on that host (9033 secs as opposed to 4 x 2240 = 8960 secs) and was awarded 4 times the old credit, so all seems fine now that the 'CUDA issue' has been solved.
My most recent ABP2 downloads are still 'shorties' so how soon will you be sending out more of the larger tasks?
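The runtime and credit scaling Gary describes is just a factor-of-four check; both numbers below are from his post:

```python
old_time = 2240             # typical old ABP2 crunch time on that host, seconds
new_time = 9033             # observed time for the longer task, seconds

print(4 * old_time)         # 8960, the expected 4x runtime
print(new_time / old_time)  # about 4.03, matching the 4x estimate
```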
Cheers,
Gary.
RE: RE: RE: As you may
Yes, this is indeed a 'new generation' workunit, as you can see from the 160 granted credits.
It required some work over the past week to update all the backend components (workunit generator (WUG), validator, assimilator) to deal with the new result structure; previously they relied on every result having only one file, named after the result. It was only today that I could verify that they are all working as they should, with both old and new tasks.
Since about 2h ago I have one instance of the WUG running continuously to crank out 157 (628/4) 'new generation' tasks every 70 minutes. There are still three WUG instances running that produce 3*628 'traditional' ABP2 WUs every 70 minutes. If all goes well, I'll stop generation of 'traditional' ABP2 work tomorrow, and then slowly ramp up 'new generation' workunit production over the next few days, keeping an eye on how the system behaves.
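The generator rates in the post imply that one 'new generation' WUG instance matches the work throughput of one 'traditional' instance; a quick check, with all figures taken from the post:

```python
old_rate = 3 * 628      # 'traditional' WUs per 70-minute cycle (three instances)
new_rate = 628 // 4     # 'new generation' WUs per cycle (one instance)

print(new_rate)         # 157, as stated
# Each new WU represents 4x the work of an old one, so in work-equivalents:
print(new_rate * 4)     # 628, the output of a single traditional instance
```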
BM
RE: If all goes well, I'll
Our pre-processing machine crashed this morning with a hardware problem, so I guess ABP workunit generation will get delayed a bit.
BM
What percentage of realtime
What percentage of realtime are we at currently?