We have Fermi-LAT Gamma-Ray Pulsar Search work left for about 10 days. We won't add new work for that search, but will instead take the time to prepare the results collected so far for publication, and to further develop and improve the application. The latter will happen over at Albert@Home.
BM
Resumed Gamma-Ray Pulsar search
Thank you for the information.
Speaking of the publication: should we anticipate some exciting story about Gamma-Ray pulsar discoveries?
You don't need to. Sorry,
You don't need to.
Sorry, we can't publish anything before the publication, or else it wouldn't be (the) one.
BM
RE: Thank you for
What we can say is this, taken from a press release that was published in connection with a pulsar discovery made with essentially the same code, but on the ATLAS computing cluster, not on E@H:
(http://www.aei.mpg.de/hannover-de/77-files/pm/2012/PM2012_SprunghafterPulsar_eng.pdf)
"
The ATLAS computer cluster of the Albert Einstein Institute has thus already assisted in the discovery of the tenth previously unknown gamma-ray pulsar; however, Allen’s team has meanwhile mobilised further computing capacity. “Since August 2011, our search has also been running on the distributed computing project Einstein@Home, which has computing power a factor of ten greater than the ATLAS cluster. We are very optimistic about finding more unusual gamma-ray pulsars in the Fermi data,” says Bruce Allen. One goal of the expanded search is to discover the first gamma-ray-only pulsar with a rotation period in the millisecond range.
"
HB
We are currently testing the
We are currently testing a new FGRP App version on Einstein. A fresh pair of eyes (HB's) on the code found a serious bug that appears to be responsible for most of the validation problems (validate errors and invalid results) we've seen in the FGRP search. So far the new App version 0.30 has shown not a single validate error (neither on Albert nor on Einstein), and only one invalid result (compared to ~1000 valid ones). Looks pretty good.
In the next few days we will again ship a couple of FGRP WUs that are mainly designed to check how much this bug affected the results obtained with the older App.
BM
Let's hope that we don't have
Let's hope that we don't have to repeat all of the WUs because of this bug...
RE: Let's hope that we
Certainly not.
My current impression is that all tasks that were affected by this bug produced unusable results and were filtered out by the validation process. IOW the technically valid results should all be scientifically valid, too. But as this is only my personal impression, we are trying to verify this now.
And even if we found that certain results could have been affected by this bug, we wouldn't just run the old WUs again. Instead, we would include the respective parameter space in the setup for the next "run".
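For illustration only, here is a rough sketch of the quorum idea in Python. This is not the actual Einstein@Home/BOINC validator code; the candidate fields and the tolerance are invented for the example. The point is just that two results for the same workunit are compared, and they only validate against each other if their candidate lists agree within a tolerance, which is why results corrupted by a bug like this tend to show up as validate errors or invalids instead of slipping into the science output.

# Hedged sketch -- NOT the actual E@H validator. Candidate fields and
# tolerance are made up for illustration.
from typing import List, Tuple

Candidate = Tuple[float, float]  # (frequency, detection statistic), hypothetical fields

def results_agree(a: List[Candidate], b: List[Candidate], rel_tol: float = 1e-3) -> bool:
    """True if two hosts' candidate lists for the same WU match within rel_tol."""
    if len(a) != len(b):
        return False
    for (fa, sa), (fb, sb) in zip(sorted(a), sorted(b)):
        if abs(fa - fb) > rel_tol * max(abs(fa), abs(fb), 1.0):
            return False
        if abs(sa - sb) > rel_tol * max(abs(sa), abs(sb), 1.0):
            return False
    return True

# A result corrupted by the bug disagrees with a good one, so the quorum
# does not validate and the task ends up as a validate error / invalid.
good = [(123.4567, 9.81), (321.7654, 8.02)]
corrupted = [(123.4567, 0.0), (999.9999, 8.02)]
print(results_agree(good, good))       # True  -> validates
print(results_agree(good, corrupted))  # False -> validate error / invalid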
BM
Great job! I haven't gotten
Great job! I haven't gotten any of the validate errors which were plaguing my Linux hosts before.
I've noticed that the new 0.30 application for Linux x86 is around 10% slower than 0.23 on my i7-920 (and the runtime estimate, which was almost exact, is now off by 40 min.). Is this normal, or just something weird with my computer (I haven't changed anything)?
Other people noticed a
Other people noticed a significant performance increase of the 0.30 App over the previous version when run on exactly the same data.
From the code changes I would expect a small performance increase, on the order of a very few percent.
Up to ±10% should be within the normal fluctuation, even between different datasets. No reason to worry.
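Just to spell out the arithmetic with a toy example (the numbers are made up, not measured E@H runtimes): a runtime change of the size reported above is smaller than the assumed dataset-to-dataset spread, so it tells us little about the code itself.

def relative_change(old_hours: float, new_hours: float) -> float:
    """Fractional runtime change of the new App relative to the old one."""
    return (new_hours - old_hours) / old_hours

# Hypothetical runtimes on the same host but different datasets.
old, new = 7.0, 7.5
change = relative_change(old, new)   # about +7%
dataset_fluctuation = 0.10           # assumed normal spread between datasets

if abs(change) <= dataset_fluctuation:
    print(f"{change:+.1%}: within normal dataset-to-dataset fluctuation")
else:
    print(f"{change:+.1%}: larger than expected fluctuation, worth a closer look")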
BM
RE: Great job! I haven't
On Win7 64bit it seems to be slower too. A WU takes 7.5 hours now, and I'm quite sure that it took between 6 and 7 hours before. But maybe playing Diablo 3 (which I do way too much :-) ) is slowing down BOINC a bit.
I also have a WU waiting on Linux 64bit, but it hasn't started yet.
Oh, and I'm also using an i7-920.
RE: ... My current
Any thoughts on why the rates of validate errors were (apparently) so highly OS-centric? Why did Windows hosts seem to be relatively immune when the rates for both OS X and Linux (but particularly OS X) were so high?
Also, if one host participating in a quorum produced a validate error, why didn't all hosts do the same? I didn't examine affected quorums all that closely, but my recollection is that there were plenty of examples of validate errors where at least one of the two hosts that eventually completed the quorum was running either Linux or OS X. Once you have done your full analysis, it would be interesting to be updated on all this.
As someone with large numbers of Linux and Mac OS X machines that were haunted by this problem, I'm extremely grateful for HB's 'new set of eyes' :-). Congratulations HB - a job extremely well done!!
I look forward keenly to the next round of FGRP work, whenever it comes, with the anticipation that the 5-10% validate error rate is now a thing of the past.
Cheers,
Gary.