Is there any intention to
Is there any intention to change the time frame of LAT WUs to the usual 14 days?
Under certain conditions I'm still running into high-priority processing, or sometimes into WUs that cannot be crunched within the scheduled time frame (if the machine is turned off for 8 or 9 days).
Regards
Bernhard
RE: Is there any intention
We haven't run into trouble yet with the current 10 days, but given that these WUs run longer than the usual GW ones, we could certainly think about raising it again. I'll ask the scientists about their deadlines / requirements.
BM
After (another) detailed
After (another) detailed scan of the OS X results, we decided to modify the validator again (instead of the application) to ease the validation problems. It should get better now. I'll keep an eye on the numbers.
BM
I had a LAT WU which ran only
I had a LAT WU which ran for only 3074 CPU seconds, but was successfully validated and credited with 48 credits.
Here is the link:
http://einsteinathome.org/workunit/106873498
Can anybody tell me why this WU is so short to crunch? Is there something special about it?
The parameter space searched
The parameter space searched by the Gamma-Ray Pulsar search can't easily be divided into workunits of equal size. The workunit generator tries its best, but at the end of each "chunk" (technically: a spindown table section) there are a couple of workunits containing just what's left over. These may or may not be noticeably shorter than the standard ones. Credit is adapted proportionally. Of the roughly ten thousand WUs from one data file, about a hundred are these shorter ones.
BM
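Purely as an illustration of the splitting arithmetic described above (the function, the template counts and the credit values below are invented for this sketch and are not taken from the real Einstein@Home workunit generator), dividing one chunk into fixed-size workunits plus a shorter remainder with proportional credit could look like this:

```cpp
#include <cstdio>
#include <vector>

// A workunit covers some number of search templates and earns credit
// proportional to how much of a standard-size workunit it represents.
struct Workunit { int templates; double credit; };

std::vector<Workunit> split_chunk(int templates_in_chunk,
                                  int standard_size,
                                  double standard_credit) {
    std::vector<Workunit> wus;
    int remaining = templates_in_chunk;
    while (remaining > 0) {
        int n = remaining >= standard_size ? standard_size : remaining;
        // credit scales with the fraction of a standard workunit actually done
        wus.push_back({n, standard_credit * n / standard_size});
        remaining -= n;
    }
    return wus;
}

int main() {
    // e.g. a chunk of 1050 templates, standard workunit = 100 templates, 48 credits
    for (const auto& wu : split_chunk(1050, 100, 48.0))
        std::printf("%d templates -> %.1f credits\n", wu.templates, wu.credit);
    // the last workunit covers only 50 templates and gets 24 credits
    return 0;
}
```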
RE: The parameter space
OK. Thank you, Bernd.
Deadline for (newly
The deadline for (newly generated) FGRP1 tasks has been raised to 14 days.
BM
The problem with that library
The problem with that library is that it uses "long double", which is not a standard C data type and is handled differently depending on the compiler, compiler version, platform and options used.
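As a quick way to see how much this varies in practice, a small stand-alone check like the following (just an illustrative sketch querying the compiler's own <cfloat> limits, not anything from the project's code) prints the actual long double layout on a given build:

```cpp
#include <cfloat>
#include <cstdio>

// Report how this particular compiler/platform implements "long double".
// Typical results: a 64-bit mantissa (80-bit extended precision) with gcc
// on x86 Linux, a 53-bit mantissa (same as plain double) with MSVC on
// Windows, and yet other widths on some non-x86 platforms.
int main() {
    std::printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
    std::printf("long double mantissa: %d bits (double: %d bits)\n",
                LDBL_MANT_DIG, DBL_MANT_DIG);
    std::printf("long double decimal digits: %d (double: %d)\n",
                LDBL_DIG, DBL_DIG);
    return 0;
}
```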
RE: The problem with that library
Hmm, so the library is expecting 80-bit precision (64-bit mantissa, 15-bit exponent) and is only getting 64-bit precision (52-bit mantissa, 11-bit exponent)? I can see why that could cause problems ... I wonder if you could plug in a replacement class that does Kahan-Babuška-type summation internally. Is the source code publicly available?
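For what it's worth, a minimal sketch of such a compensated-summation replacement, using the Neumaier variant of Kahan-Babuška summation in plain 64-bit doubles (the class name and the example values are invented for illustration; this is not the project's actual code):

```cpp
#include <cmath>
#include <cstdio>

// Kahan-Babuska (Neumaier) compensated summation in plain 64-bit double:
// a running compensation term captures the low-order bits that each
// addition would otherwise discard, so long chains of additions keep
// far more precision than naive double summation.
class CompensatedSum {
public:
    void add(double x) {
        double t = sum_ + x;
        if (std::fabs(sum_) >= std::fabs(x))
            comp_ += (sum_ - t) + x;   // low-order bits of x were lost
        else
            comp_ += (x - t) + sum_;   // low-order bits of sum_ were lost
        sum_ = t;
    }
    double value() const { return sum_ + comp_; }
private:
    double sum_  = 0.0;   // running sum
    double comp_ = 0.0;   // running compensation term
};

int main() {
    CompensatedSum s;
    double naive = 0.0;
    s.add(1.0);
    naive += 1.0;
    for (int i = 0; i < 10000000; ++i) {   // many tiny terms after a large one
        s.add(1e-16);
        naive += 1e-16;
    }
    std::printf("naive:       %.17g\n", naive);       // stays at exactly 1
    std::printf("compensated: %.17g\n", s.value());   // close to 1.000000001
    return 0;
}
```

With ten million tiny terms added to 1.0, the naive double sum never moves off 1, while the compensated sum recovers the expected 1.000000001; that is roughly the extra headroom an 80-bit long double accumulator would otherwise provide.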
The rate of invalid results
The rate of invalid results has dropped below 1% even from Mac OS X, so for now I don't think I'll put more time into the application to analyze and fix it there.
BM
Since a few days I have an
For a few days now I have been getting an increasing number of tasks that end with "Error while computing" after run times of just under 300 s, including my last 8 tasks today. These tasks were downloaded about 5.5 days earlier. But tasks from a few files are still running nicely.
Kind regards
Martin