RE: RE: RE: Thanks.
It's now at 87.5%, with four and a quarter hours to go.
RE: RE: RE: RE: Thank
It figures. Right as I posted this, it reset itself back to zero percent, yet again. Now showing 351 hours remaining.
RE: RE: RE: RE: Quote
It just now finished. Now, I just have to wait on my wingman.
I'm not sure but it seems
I'm not sure, but it seems the Albert@home workunits for the GWS6FU#1 app bundle several workunits together (eight on Albert@home); after each one completes, the progress display resets back to zero. On the final workunit it climbs all the way up to 100% and then the task completes. I haven't tried any at Einstein@Home, but this is the behavior I observed on Albert@home.
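For illustration only (my own sketch, not the actual Einstein@Home app code): if the app reports just the current candidate's fraction done, the client shows the sawtooth described above; folding in the candidates already completed gives steady overall progress instead.

```python
# Illustrative sketch only -- NOT the actual Einstein@Home/BOINC code.
# Shows why per-candidate progress reporting produces the "reset to zero"
# sawtooth, and how counting completed candidates smooths it out.

BUNDLE_SIZE = 8  # candidates bundled per workunit on Albert@home

def naive_progress(completed, current_fraction):
    """Report only the current candidate's fraction: drops to 0 at each switch."""
    return current_fraction

def bundled_progress(completed, current_fraction):
    """Fold completed candidates into one monotonic 0..1 value."""
    return (completed + current_fraction) / BUNDLE_SIZE

# Halfway through the third candidate (two already done):
print(naive_progress(2, 0.5))    # 0.5    -> shows 50%, will drop to 0 next
print(bundled_progress(2, 0.5))  # 0.3125 -> steady overall progress
```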
RE: I'm not sure but it
Okay, that could explain things.
App version 1.06 should have
App version 1.06 should have this fixed.
BM
I am seeing smoother progress
I am seeing smoother progress with 1.06
Generation of the first
Generation of the first "chunk" of 200k WUs has started. Until this is finished (tomorrow morning CET), sending of S6BucketFU tasks is disabled.
BM
The first 200k WUs have been
The first 200k WUs have been generated and S6BucketFU1UB tasks are being sent out.
Some more information about this new search:
This search is a follow-up (closer examination) of the ~16M most interesting "candidates" that we got from our previous "S6Bucket" surveys ("S6Bucket", "S6LV1" and "S6BucketLVE"). Thus the short name S6BucketFU1UB: "S6Bucket Follow-Up #1 in Undisturbed (frequency) Bands".
Eight candidates are "bundled" together in a workunit, making ~2M workunits in total. These will be issued in ten subsequent "chunks" of 200k workunits.
The run time per candidate varies a bit; it is designed to be between 1 and 2 hours on a reasonably modern machine (1-2 years old).
The whole run has been designed to last about four months, so roughly 12d per "chunk".
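As a quick sanity check of the numbers above (approximate figures: ~16M candidates, a ~120-day run):

```python
# Sanity-checking the figures quoted above (all approximate).
candidates = 16_000_000   # ~16M candidates from the S6Bucket surveys
per_workunit = 8          # candidates bundled per workunit
chunk_size = 200_000      # workunits issued per "chunk"
run_days = 120            # ~four months

workunits = candidates // per_workunit
chunks = workunits // chunk_size
days_per_chunk = run_days / chunks

print(workunits)       # 2000000 -> "~2M workunits in total"
print(chunks)          # 10      -> "ten subsequent chunks"
print(days_per_chunk)  # 12.0    -> "roughly 12d per chunk"
```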
The scientific application code has been in development and use for a couple of years, and was only slightly augmented for this search. On the more technical side, however, this search features a couple of "firsts":
- it is the first "follow-up" run done on Einstein@Home (instead of on LSC computing clusters)
- "bundling" (multiple calls to the analysis code) has been used in the Radio Pulsar search for a couple of years, but implementing it with the very different GW code proved more of a challenge than we initially thought.
- for this search we had to write an entirely new workunit generator (which works very differently from previous GW ones), make modifications to the "locality scheduler", and move to a more powerful DB server.
"Locality scheduling" (i.e. minimizing download volume) from the client side should work as you are used to, although the inner workings of the server code are quite different in this search.
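As a rough illustration of the locality-scheduling idea (my own sketch with made-up file names, not the actual BOINC server code): among runnable tasks, prefer the one requiring the fewest data files the host hasn't already downloaded.

```python
# Rough sketch of the locality-scheduling idea (NOT the BOINC server code):
# prefer the task that needs the fewest files the host must still download,
# minimizing download volume per host.

def pick_task(host_files, tasks):
    """tasks: dict of task name -> set of required data files."""
    # Fewer missing files means less download volume for this host.
    return min(tasks, key=lambda t: len(tasks[t] - host_files))

# Hypothetical file names, for illustration only:
host = {"h1_0050.10", "h1_0050.15"}          # already on the host
tasks = {
    "wu_a": {"h1_0050.10", "h1_0050.15"},    # nothing new to fetch
    "wu_b": {"h1_0123.40", "h1_0123.45"},    # two new downloads
}
print(pick_task(host, tasks))  # wu_a
```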
BM
There appears to be a problem
There appears to be a problem with some workunits. Sending of S6BucketFU1UB tasks is suspended while we investigate...
BM