I hope this hasn't been aired by anyone already. If it has please accept my apology in advance. ;-))
There seems to be a rash of validate errors over the past day or so affecting multiple machines. I've even been bitten myself. Here's an example, but there seem to be plenty more like it:
http://einstein.phys.uwm.edu/workunit.php?wuid=7722808
Does anyone know what is happening?
Only he who gives up on himself is lost. - Hans-Ulrich Rudel
Validate errors
There is another thread with a forwarded answer from Bruce Allen.
http://einsteinathome.org/goto/comment/31462
Also reading the home page
Also reading the home page states:
May 11, 2006
The Einstein@Home validator ran into an unknown problem at 07:21 UTC May 11 and started marking all results as 'validate error' until the validator was restarted at about 15:00 UTC May 11. It is our intent to grant the canonical credit to all results marked as 'validate error' during that period. This may take some time as we need to wait until a canonical result is found for each workunit. So this credit will be awarded gradually over the coming few days.
RE: Also reading the home
Has anyone else noticed this same type of validate error occurring earlier than May 11? Looking back through my results, I've seen this type of error on May 10, 9, 7, 4, and 3 so far. In each case all three initial results had a validate error and all three were sent back out. Unfortunately, in these cases no effort was made to grant credit to the initial three results.
RE: Has anyone else
Try checking when the last of the three results was reported. If the last computer reported during the outage, a validate error would be expected, even if you reported yours earlier.
RE: Has anyone else noticed
What was special about May 11 was that everything was failing validation, so the validator was clearly not doing its job.
Most folks checking into previous validation errors (plentifully discussed on this forum) were looking at cases where different applications ran the three initially compared results. It seemed plausible that slight numerical differences between the applications gave sufficiently different results, for the specific data in the WU at hand, to flunk the validator's similarity threshold.
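Just to illustrate the idea (this is a made-up sketch, not the actual BOINC/Einstein@Home validator code, and the tolerance value is invented), a similarity check of that sort boils down to something like this in Python:

# Hypothetical sketch of a tolerance-based result comparison; the real
# validator's logic and threshold are not reproduced here.
def results_agree(values_a, values_b, rel_tolerance=1e-5):
    """Return True if two result vectors match within a relative tolerance."""
    if len(values_a) != len(values_b):
        return False
    for a, b in zip(values_a, values_b):
        scale = max(abs(a), abs(b), 1e-30)  # guard against division by zero
        if abs(a - b) / scale > rel_tolerance:
            return False  # numerical drift between apps exceeds the threshold
    return True

# Two apps computing the same WU along slightly different floating-point paths:
stock = [0.12345678, 7.89101112]
optimized = [0.12345679, 7.89101113]  # tiny last-digit differences
print(results_agree(stock, optimized))  # True while the drift stays within tolerance

A data set that happens to amplify those last-digit differences past the threshold is exactly the kind of case that flunks.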
As long as a big majority of results are returned by the project-distributed science app, your best chance of avoiding this is to use the stock app. I consciously choose to accept a small invalid rate, believing I'm producing far more science for the project, and more credit for my account. The worst thing about my choice is that once in a while a stock app user may lose one result's worth of credit. Fortunately it is rare enough that I think my choice is a good one.
Of course, it might be that the early cases were also a malfunction of some kind. But I don't see how one could expect Bruce to write a script to comb through the entrails and revalidate the validator's decisions unless something simple, such as a known time frame, is the defining criterion.
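If a known time frame really were the defining criterion, such a fix-up script would not need to be clever; roughly speaking (field names and data layout below are invented purely for illustration, not BOINC's actual schema), it only has to pick out the results marked during the outage window:

# Hypothetical sketch: select results mis-marked during a known outage window
# so their credit can be re-granted. Field names are invented.
from datetime import datetime, timezone

OUTAGE_START = datetime(2006, 5, 11, 7, 21, tzinfo=timezone.utc)
OUTAGE_END   = datetime(2006, 5, 11, 15, 0, tzinfo=timezone.utc)

def affected(results):
    """Yield results marked 'validate error' inside the outage window."""
    for r in results:
        if r["outcome"] == "validate_error" and OUTAGE_START <= r["validated_at"] <= OUTAGE_END:
            yield r

Re-checking the earlier May cases, by contrast, would mean re-running the comparison itself, which is a much bigger job.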
I got credit for my
I got credit for my "invalids".