Question about the roles of the S5RX stages

hoarfrost
Joined: 9 Feb 05
Posts: 207
Credit: 103,053,349
RAC: 5
Topic 193221

Hello!

In S5R1 (if I remember right) we had short workunits, and the star pointer moved quickly across the whole sky sphere.

In S5R2 the workunits were long and the star pointer lingered a little over each position. And our computers performed a hierarchical all-sky search.

In S5R3 we spend ~30 seconds of calculation on each position (for the 466 GHz LIGO block). And the search is "hierarchical"…

From the S3 analysis report I read that the number of computations increases like ~N^6 when the length in time of the analyzed data block increases by a factor of N.
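To make that scaling concrete, here is a toy calculation. The N^6 exponent is the figure quoted from the S3 report; the function below is illustrative arithmetic only, not part of any real E@H code:

```python
# Toy illustration of the ~N^6 scaling quoted from the S3 analysis report:
# if the analyzed data span grows by a factor N, the number of computations
# grows roughly like N**6.
def relative_cost(n):
    """Relative computational cost when the data span grows by a factor n."""
    return n ** 6

for n in (1, 2, 4):
    print(f"data span x{n} -> computations x{relative_cost(n)}")
# data span x2 -> computations x64
# data span x4 -> computations x4096
```

So merely doubling the data length costs about 64 times as much computation, which is why longer integration times are so expensive.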

What is the role of each S5 analysis stage?
What is the role of S5R3? Are we crunching new WUs to find the most interesting sky positions for the next stage, S5R4, with a great increase of T and a "point in the sky" search?

Thank you for your attention!

P.S. Crunchers! Can we complete S5R3 in 300 days?

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3,522
Credit: 695,041,015
RAC: 136,960

Question about the roles of the S5RX stages

Quote:

Hello!

In S5R1 (if I remember right) we had short workunits, and the star pointer moved quickly across the whole sky sphere.

Correct.

Quote:

In S5R2 the workunits were long and the star pointer lingered a little over each position. And our computers performed a hierarchical all-sky search.

Not sure I understand this, but yes: in S5R2, each workunit in itself performed an "all-sky" search; that is, during the runtime of a result, the little cursor indicating the sky point being looked at would have jumped pretty much all over the sky.

Quote:

In S5R3 we spend ~30 seconds of calculation on each position (for the 466 GHz LIGO block). And the search is "hierarchical"…

First, S5R2 was also a hierarchical search by design: the first stage of the run is done by the volunteers' PCs, which crunch workunits and send back results. Then, in the next step, the data would be post-processed and the most promising candidate regions would be inspected more closely, hence "hierarchical search", I guess.

The 466 you are referring to is in Hz, btw, not GHz. It would be the frequency of a supposed gravitational wave, or twice the spin rate of a supposed neutron star emitting this wave.

As you have correctly noticed, the time the S5R3 app spends per sky point is about 0.5 to 1 minute (depending on the speed of your hardware, of course). To get convenient "crunch times" per task, each individual task no longer covers the entire sky but only a part containing (very roughly) 1000 points. The other regions of the sky are searched by other tasks distributed to other PCs, so as a whole, S5R3 is still a "full-sky hierarchical search"; it's just that the individual tasks are arranged differently compared to S5R2.
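As a back-of-the-envelope cross-check of those numbers (using the rough figures from this post, ~1000 sky points per task and 30-60 s per point; both are approximations, not official values):

```python
# Back-of-the-envelope estimate of S5R3 task runtime, using the rough
# figures quoted above: ~1000 sky points per task, 30-60 s of computation
# per point depending on hardware. Both inputs are approximate.
points_per_task = 1000
for seconds_per_point in (30, 60):
    hours = points_per_task * seconds_per_point / 3600
    print(f"{seconds_per_point} s/point -> ~{hours:.1f} h per task")
# 30 s/point -> ~8.3 h per task
# 60 s/point -> ~16.7 h per task
```

So one task is roughly a third to two thirds of a day of CPU time, which matches the "convenient crunch times" goal.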

Quote:


From the S3 analysis report I read that the number of computations increases like ~N^6 when the length in time of the analyzed data block increases by a factor of N.

What is the role of each S5 analysis stage?
What is the role of S5R3? Are we crunching new WUs to find the most interesting sky positions for the next stage, S5R4, with a great increase of T and a "point in the sky" search?

Thank you for your attention!

As far as I understood Bernd, S5R4 will be similar to S5R3 but will indeed contain data from the entire S5 detector run, which ended only a few days ago.

The latest thing I read about the next stage in the "hierarchical search" scheme, that is, inspecting the "candidates" found by E@H more closely, was that it was still undecided whether this second stage would be done via BOINC or on a cluster of the scientists themselves.

Quote:

P.S. Crunchers! Can we complete S5R3 in 300 days?

At the current rate of crunching, it would take about 12-18 months, but I think it's reasonable to expect that some improvements in the apps and an increase in speed of the average crunching PC will cut this time to under one year. I would not be surprised if we could celebrate the end of S5R3 in October 2008.

CU

Bikeman

hoarfrost

RE: The latest thing I read

Message 73817 in response to message 73816

Quote:
The latest thing I read about the next stage in the "hierarchical search" scheme, that is, inspecting the "candidates" found by E@H more closely, was that it was still undecided whether this second stage would be done via BOINC or on a cluster of the scientists themselves.


Thank you! This "detail" is the most interesting part of the current "Einstein@Home roadmap" for me (and maybe for many crunchers!).

Quote:
Quote:

P.S. Crunchers! Can we complete S5R3 in 300 days?

At the current rate of crunching, it would take about 12-18 months, but I think it's reasonable to expect that some improvements in the apps and an increase in speed of the average crunching PC will cut this time to under one year. I would not be surprised if we could celebrate the end of S5R3 in October 2008.

CU

Bikeman


I think "S5R3 in 300 days" is an interesting challenge for the crunchers of our project. ;)

And I hope for an "AMD versus Intel" battle. :)

/*
Please excuse my English.
466 GHz: of course that should be Hz. It was a "fast writing" mistake.
*/

hoarfrost

Hi! RE: RE: P.S.

Message 73818 in response to message 73816

Hi!

Quote:

Quote:

P.S. Crunchers! Can we complete S5R3 in 300 days?

At the current rate of crunching, it would take about 12-18 months, but I think it's reasonable to expect that some improvements in the apps and an increase in speed of the average crunching PC will cut this time to under one year. I would not be surprised if we could celebrate the end of S5R3 in October 2008.

CU

Bikeman

My "optimistic" prediction is closer to reality! :)

Bikeman (Heinz-Bernd Eggenstein)

Indeed! What surprised me

Indeed!

What surprised me was the total lack of a "summer-slowdown" compared to last year.

CU
Bikeman

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4,305
Credit: 249,004,462
RAC: 33,743

RE: First, S5R2 also was a

Message 73820 in response to message 73816

Quote:
First, S5R2 was also a hierarchical search by design: the first stage of the run is done by the volunteers' PCs, which crunch workunits and send back results. Then, in the next step, the data would be post-processed and the most promising candidate regions would be inspected more closely, hence "hierarchical search", I guess.


Not quite.

Einstein@Home has been using a method called the "F-statistic" (or "Fstat" for short) to search for signals in the detector data since its launch in 2005 (the first analysis runs used data from S3). The most severe limitation on the sensitivity of the search is the amount of data we can send back from a client to the server.

The post-processing of the "Fstat" results can be seen as a large data-reduction process in which we filter the most promising "candidates" for a gravitational wave signal out of the huge amount of Fstat results. For the "HierarchicalSearch", we moved one post-processing step into the application that runs on participants' machines, so that the same amount of data sent back is already filtered and thus yields more sensitive results (we actually increased the sensitivity of the search by a factor of 6 by doing that; Reinhard explained a bit more of this in his S5R3 posting and the poster linked there). This added a second analysis step ("Hough") to the Einstein@Home application, which post-processes the results of the first step. Actually, this "first step" consists of two independent analyses of the data from two different detectors, so the "second step" can be seen as sitting somewhat "above" the two "first steps", hence the "hierarchical" search.
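The two-step scheme described here can be sketched abstractly. Everything below (the function names, the additive combination of the two detectors, the keep fraction) is a hypothetical illustration of "cheap first stage per detector, then combine and filter", not the real Fstat/Hough code:

```python
# Abstract sketch of a two-stage "hierarchical" pipeline: a first-stage
# detection statistic is computed per template for each detector's data,
# then a second stage combines the two and keeps only the strongest
# candidates (a large data reduction). Names and thresholds are invented
# for illustration; this is not the actual Einstein@Home application.

def first_stage(templates, detector_data, stat):
    """Compute a detection statistic (Fstat-like) for every template."""
    return {t: stat(t, detector_data) for t in templates}

def hierarchical_search(templates, data_h1, data_l1, stat, keep_fraction=0.01):
    # Two independent first-stage analyses, one per detector.
    s1 = first_stage(templates, data_h1, stat)
    s2 = first_stage(templates, data_l1, stat)
    # Second stage "above" the two first steps: combine per template,
    # then keep only the most promising candidates.
    combined = {t: s1[t] + s2[t] for t in templates}
    n_keep = max(1, int(len(templates) * keep_fraction))
    return sorted(combined, key=combined.get, reverse=True)[:n_keep]
```

The point of moving the second stage onto the volunteer's machine is that only the short candidate list (here, 1% of the templates) has to be sent back to the server, instead of every first-stage statistic.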

There is another aspect of "hierarchy", though: the grid we use to cover the (four-dimensional) parameter space is optimized so that we should not miss a signal, but it is too coarse to actually determine the physical parameters of a GW source. Once we have promising "candidates", we need to run another search on that area of parameter space to find the actual parameters of the signal. This refinement process may need to be repeated.

(for the very curious: the "Hough" analysis grid also differs from the "Fstat" analysis grid, which could also be seen as a hierarchy)

With the computing power available to us now (e.g. with ATLAS), I doubt that these refinements would be done on Einstein@Home, unless we actually have thousands of candidates to follow up.

BM

