Plans for the near future of E@H?

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 3,824
Credit: 178,620,334
RAC: 36,526

Quote:
What is planned after the current BRP5 run from the Parkes radio telescope in Australia?

It took us a while to decide on that. Currently planned is to re-analyze data from PMPS (Parkes Multi-beam Pulsar Survey) with a much larger parameter space that we can now cover using GPUs. The data itself is pretty old, but in BRP3 we already found 24 unknown pulsars that escaped earlier analysis. Maybe there's still more to find in there.

BM


Filipe
Joined: 10 Mar 05
Posts: 146
Credit: 237,716,325
RAC: 86,098

Thanks for informing us. Much appreciated.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 3,824
Credit: 178,620,334
RAC: 36,526

Here's a bit more:

- Radio pulsar (GPU) search: preparation of the successor to BRP5 (named - surprise! - BRP6) is underway. As I wrote, this will be another sift through PMPS data, with extended parameter space, mainly up to higher frequency.

- Gravitational Wave searches: after the "S6Bucket Follow-Up #1" run, we will have a second follow-up run, further narrowing things down (30-50% of the candidates, 10% of the sky region, 2-3x longer coherent integration time). Preparation is already underway, although a few things, e.g. the actual number of candidates, can only be determined after the current run has finished. This second-stage follow-up run is planned to take most (~90%) of the CPU computing power on Einstein@Home, while the Gamma-Ray search will be reduced to a minimum (default) share.

- Gamma-Ray search: Due to an error in pre-processing, too few workunits were produced for the last 50 or so "file sets" of FGRP4. While this has been corrected for the "file sets" that hadn't been touched until about a week ago, we will need to re-do a couple of sets afterwards. At the current share of ~50% of the CPU computing power, this would take another ~2 months of FGRP4 beyond what's currently shown on the server status page. However, taking into account the planned lowering of the FGRP share in favor of the GW search, I think FGRP4 will continue for at least another half year.
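The runtime estimate above is a simple inverse-share scaling, which can be sketched as a back-of-envelope calculation. The ~2 months figure assumes the current ~50% share; the reduced shares below are illustrative assumptions, not numbers from the post.

```python
# Back-of-envelope sketch: the time to finish a fixed amount of FGRP4
# work scales inversely with the CPU share the search receives.
# The reduced-share values here are assumptions for illustration only.

def remaining_months(months_at_ref_share, ref_share, new_share):
    """Remaining runtime if the CPU share changes from ref_share
    to new_share, with the amount of work held constant."""
    return months_at_ref_share * ref_share / new_share

print(remaining_months(2.0, 0.50, 0.50))  # unchanged share
print(remaining_months(2.0, 0.50, 0.25))  # share halved: twice as long
print(remaining_months(2.0, 0.50, 0.10))  # minimal share: run stretches out
```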

BM


bronevik
Joined: 2 Jul 11
Posts: 2
Credit: 76,633,343
RAC: 0

Quote:
Due to an error in pre-processing, too few workunits were produced for the last 50 or so "file sets" of FGRP4. While this has been corrected for the "file sets" that hadn't been touched until about a week ago, we will need to re-do a couple of sets afterwards.


Bernd, could you clarify: did we lose some data, or did we process something incorrectly?
And since we are talking about it, could you please briefly explain how the FGRP pipeline works?

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 4,663
Credit: 23,185,351,451
RAC: 30,126,435

Quote:
Quote:
Due to an error in pre-processing, too few workunits were produced for the last 50 or so "file sets" of FGRP4. While this has been corrected for the "file sets" that hadn't been touched until about a week ago, we will need to re-do a couple of sets afterwards.

Bernd, could you clarify: did we lose some data, or did we process something incorrectly?
And since we are talking about it, could you please briefly explain how the FGRP pipeline works?


I'll attempt to clarify things from what I understand (I could be wrong on some things but I hope not) as a volunteer, just like yourself. Hopefully, if I get it basically correct, it might save Bernd some time.

Firstly, it's nothing to do with data loss or incorrect processing. When you get an FGRP4 task to process for the first time, you get a data file (eg LATeah0099E.dat for recent tasks) plus a set of parameters to be used by the app in the analysis of the data. You don't see these parameters directly - they are contained in the scheduler reply to your BOINC client's request for work. They get inserted into the state file (client_state.xml) where they will remain ready to be used once that particular task gets to the top of the queue.

Once you have a particular data file, you can get further tasks that use the same file with different parameter sets. This may continue for days or even weeks until the full range of parameter sets have been distributed. Once that happens, the scheduler moves on to the next data file (eg LATeah0100E.dat) and the whole process repeats.

Bernd's message is simply advising that there was some sort of bug in the work generation process so that not all parameter sets were created for approximately the last 50 data files - I'm guessing something like LATeah0049E.dat to LATeah0098E.dat approximately. The current file (LATeah0099E.dat) has lasted for around 8 days now (and may last even longer) so I'm sure all parameter sets are being generated for this data file. The previous data file (LATeah0098E.dat) lasted less than half a day before being replaced by the current file so it would appear to have suffered from the bug.

Now that the problem is known, I imagine it will be a simple matter to go back and generate all the missing parameter sets. This will create a whole bunch of tasks that will extend the life of the FGRP4 run as Bernd was mentioning.
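The one-data-file/many-parameter-sets scheme described above can be sketched roughly as follows. All file names and parameter values here are illustrative only; the real BOINC work generator is server-side code, and this is just a picture of the distribution pattern, not its implementation.

```python
# Rough sketch of the work-distribution pattern Gary describes:
# every parameter set for one data file is handed out before the
# scheduler moves on to the next data file. The pre-processing bug
# would correspond to the inner loop stopping early for ~50 files,
# so some (file, params) pairs were never turned into workunits.
# All names and values below are hypothetical.

def generate_tasks(data_files, parameter_sets):
    """Yield (data_file, params) pairs, exhausting the full range of
    parameter sets for one file before moving on to the next."""
    for data_file in data_files:
        for params in parameter_sets:
            yield (data_file, params)

# Example: two data files, three hypothetical parameter sets each.
files = ["LATeah0099E.dat", "LATeah0100E.dat"]
params = [{"f0": 0.0}, {"f0": 100.0}, {"f0": 200.0}]

tasks = list(generate_tasks(files, params))
```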

Cheers,
Gary.

bronevik
Joined: 2 Jul 11
Posts: 2
Credit: 76,633,343
RAC: 0

Quote:
Firstly, it's nothing to do with data loss or incorrect processing. [...] Now that the problem is known, I imagine it will be a simple matter to go back and generate all the missing parameter sets. This will create a whole bunch of tasks that will extend the life of the FGRP4 run as Bernd was mentioning.

Okay, now I see. Much appreciated, Gary!

Senamun (Cap. Fed.)
Joined: 3 Aug 05
Posts: 8
Credit: 3,627,304
RAC: 0

Quote:
- Radio pulsar (GPU) search: preparation of the successor to BRP5 (named - surprise! - BRP6) is underway. As I wrote, this will be another sift through PMPS data, with extended parameter space, mainly up to higher frequency.

The Einstein preferences page now lists Parkes. The Server Status page lists BRP6. The Applications page lists Parkes as BRP5.

Is Parkes BRP6, or are we looking at Parkes and BRP6 as separate projects?

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3,515
Credit: 408,031,677
RAC: 59,493

Quote:

Applications page lists Parkes as BRP5.

Is Parkes BRP6 or are we looking at Parkes and BRP6 as separate projects?

The reference to "BRP5" that you see on the Applications page is just in the name of the "plan-class". It's basically a name for a filter condition that will assign the "best" application version to your host depending on your hardware and software. Those filter-conditions are the same for BRP5 and BRP6, so they were just re-used for BRP6. We could have copied and renamed them, but that would make it quite a bit harder to maintain the configuration.

So the new search on Parkes data is definitely called BRP6, no matter how the plan classes for the apps are named. I hope this is not too confusing.
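As a rough illustration of the point above, a plan class can be thought of as a named filter over host capabilities: the scheduler checks each class's condition against the host and sends the matching app version. The class names and conditions below are made up for the sketch, not Einstein@Home's actual ones.

```python
# Minimal sketch of "plan classes" as named filter conditions.
# Names and requirements are hypothetical; the real filters live in
# the BOINC scheduler configuration. Because the same conditions fit
# both BRP5 and BRP6, the old "BRP5" names could simply be re-used.

PLAN_CLASSES = {
    "BRP5-cuda32": lambda host: host.get("cuda") and host.get("gpu_ram_mb", 0) >= 512,
    "BRP5-opencl": lambda host: host.get("opencl"),
    "cpu":         lambda host: True,   # fallback: plain CPU app version
}

def best_app_version(host):
    """Return the first plan class whose filter the host satisfies."""
    for name, matches in PLAN_CLASSES.items():
        if matches(host):
            return name
```

A capable GPU host matches the GPU class, and everything else falls through to the CPU fallback, regardless of which search (BRP5 or BRP6) the work belongs to.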

Cheers
HB

Senamun (Cap. Fed.)
Joined: 3 Aug 05
Posts: 8
Credit: 3,627,304
RAC: 0

A bit confusing, but I understand it now. Thanks for the answer.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 3,824
Credit: 178,620,334
RAC: 36,526

Thanks, Gary!

BM

