Upcoming science runs?
Hello Dan!
On Albert@Home you can see (and participate, if you like) that the successor of S6Bucket, S6LV1, has been under test since 13 Dec 2011. It's a CPU-only version.
As the Fermi satellite will remain active for the foreseeable future, it will keep delivering fresh data regularly. Such data has been added twice so far in the FGRP1 project: on 21 Oct 2011, and a smaller amount recently on 17 Jan. I think they just have to prepare the fresh Fermi data for our purposes. This is very similar to BRP4, where we have had six such additions, the last on 4 Jan.
Kind regards
Martin
I can't really find any information about what's changing for S6LV1 there. It's a new science app, obviously, but what's been changed or enhanced is unknown. Also unknown is whether it will reprocess the same data/frequency ranges as S6Bucket, the same data at different frequencies, or an entirely different subset of the collected data.
I believe Bernd is busy reducing the failure rate in S6LV1, which is still rather high at about 25%, and time is running short.
In fact I have been away from my desk for a few weeks.
More data for the FGRP search was added yesterday.
The final decisions about the next GW run haven't been made yet. We're still testing options on Albert and will hopefully converge on a setup shortly. Most likely this run will be named S6LV1 and will analyze essentially the same data as S6Bucket. The "LineVeto" code, which has been improved greatly since the previous run, should give us significantly higher sensitivity, though.
The S6Bucket timing on the server status page is slightly off. It averages over the whole run so far and doesn't take into account, e.g., a couple of knobs we can turn to tweak the computing power used for this search. We are currently aiming for early March to start the next GW run.
BM
The 7-day averaged progress of S6Bucket over the last three months has been quite constant at about 0.22%/day. Taking this into account, we end up at 7 March.
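The projection above can be sketched as follows. The thread only gives the 0.22%/day rate and the 7 March target; the current completion figure and the "as of" date below are my assumptions, chosen only to make the arithmetic concrete:

```python
from datetime import date, timedelta

RATE_PER_DAY = 0.22     # observed 7-day average progress, in %/day (from the post)
assumed_done = 93.6     # hypothetical completion % on the date below (assumption)
as_of = date(2012, 2, 7)  # hypothetical "today" for the estimate (assumption)

# Remaining work divided by the recent daily rate gives days to completion.
days_left = (100.0 - assumed_done) / RATE_PER_DAY
eta = as_of + timedelta(days=round(days_left))
print(f"~{days_left:.1f} days left, ETA {eta.isoformat()}")
# → ~29.1 days left, ETA 2012-03-07
```

Using a recent 7-day window rather than the whole-run average is what makes this estimate differ from the server status page, which (per Bernd's post) averages over the entire run.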
Kind regards
Martin
Is there anywhere we can go to read about the improvements that have been made to the LineVeto code? I'm pretty curious :)
All my LineVeto units in Albert@home have been declared invalid while the Binary Radio Pulsar Search units are validated or pending.
Tullio
This is probably because of the checkpointing problem discussed there; it should be fixed with app version 1.07.
BM
There certainly will be a paper / publication about that. But as we are hurrying to get the software ready for the (moving) deadline, this will have to wait until after the launch of the run.
BM
And today I've just got a big allocation of S6Bucket work from a wide selection of frequencies, with all the data downloading that implies (not a problem for me here, though others might be discomfited).
We appear to be entering the final cleanup phase.