We've found a small problem; however, it is minor enough that users should continue to use this Mac test app. The problem is that sometimes, when reporting an event candidate near the North celestial pole, this code reports delta slightly LARGER than pi/2 and alpha increased by pi over the value it would have if delta were slightly less than pi/2.
When compared with the result from one of the previous (Win/Linux/Mac) apps, the validator does not recognize that these two points are only a tiny distance apart on the sphere, and marks one of the results as incorrect. The correct solution is for us to fix the validator to recognize that these points are identical to within numerical precision, which we'll do in the coming days.
Meanwhile, be warned that approximately 4% of the results produced by this new app will be incorrectly marked as invalid! The results will be reported as successful, will be uploaded to the server, and will then be marked 'invalid' by the validator and denied credit.
For the project, the new app is still a big net improvement, because the Altivec code it contains is about twice as fast as the code in the previous app. So our advice is that users should continue to use this new test app in spite of this issue.
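To illustrate the kind of fix we have in mind, here is a minimal sketch in C (the function names and the tolerance are illustrative only, not the actual validator code). The key observation is that (alpha + pi, pi - delta) describes exactly the same direction on the sphere as (alpha, delta), so comparing candidates by their angular separation on the sphere, rather than coordinate by coordinate, makes the check insensitive to the pole ambiguity:

#include <math.h>

/* Angular separation in radians between two sky positions
   (alpha = right ascension, delta = declination, both in radians),
   computed in the haversine form, which stays accurate for very
   small separations. Because it compares the actual directions on
   the sphere, (alpha + pi, pi - delta) and (alpha, delta) come out
   at exactly zero separation. */
double angular_separation(double a1, double d1, double a2, double d2)
{
    double sd = sin(0.5 * (d1 - d2));
    double sa = sin(0.5 * (a1 - a2));
    double h  = sd * sd + cos(d1) * cos(d2) * sa * sa;
    if (h < 0.0) h = 0.0;   /* clamp rounding error out of the sqrt/asin domain */
    if (h > 1.0) h = 1.0;
    return 2.0 * asin(sqrt(h));
}

/* Illustrative tolerance only; a real validator would scale this
   to the resolution of the search grid. */
#define POS_TOLERANCE 1e-9

int same_sky_point(double a1, double d1, double a2, double d2)
{
    return angular_separation(a1, d1, a2, d2) < POS_TOLERANCE;
}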
Bruce
Director, Einstein@Home
I don't know if my results are the same problem, but I've had two pop up as invalid even though they were reported as successful. The results were 5862325 and 5744202.
beadman
Join Team MacNN
Now, of course, the points claimed for each unit have dropped by the same fraction as the speedup... We can hope that the claim-averaging process means Mac users will see their point production rise.
Here's my big beef, and I realize this is probably something that's been hashed out before:
The way you can compile your own BOINC client or edit client_state.xml makes reported credit numbers subject to any manipulation you care to produce.
I understand the median process reduces much of the potential benefit so long as only a minority mucks with their point claims, but I still think this is a bad way to do scoring.
And scoring is the big point for a lot of the biggest producers, as you can see anytime there's a stats or credit problem. Folding@Home assigns a fixed point value to each class of work units, which has its own share of potential gripes, but reduces the chances for abuse.
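To make the median idea concrete, here is a toy sketch in C (just an illustration of the principle, not BOINC's actual validator code): every host in the quorum claims a credit value and the project grants the median claim, so a single inflated claim can't move what gets granted.

#include <stdio.h>
#include <stdlib.h>

/* Toy illustration of median-based credit granting; NOT the real
   BOINC code. Sorting the claims and taking the middle one means a
   lone manipulated claim cannot change the result. */
static int cmp_double(const void *x, const void *y)
{
    double a = *(const double *)x, b = *(const double *)y;
    return (a > b) - (a < b);
}

double granted_credit(double *claims, int n)
{
    qsort(claims, n, sizeof(double), cmp_double);
    if (n % 2)
        return claims[n / 2];
    return 0.5 * (claims[n / 2 - 1] + claims[n / 2]);
}

int main(void)
{
    /* two honest claims and one wildly inflated claim */
    double claims[] = { 31.7, 29.9, 500.0 };
    printf("granted: %.1f\n", granted_credit(claims, 3));  /* prints 31.7 */
    return 0;
}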
Not sure what is going on here, but after a complete reinstall of BOINC and Seti and Einstein, I am still getting Computation Errors. Out of the 7 WUs completed since testing the "Beta Einstein", 5 have come back bad: no good, a waste of time!
Now I did read something about 4% being no good. But I am at over 71% being crap, and not even being turned down: they are just crap from the start, before the upload. My BOINC is saying they are crap.
What do I need to do to correct this, or should I just stop trying to help Einstein altogether? I am going to suspend Einstein till something gets fixed, or just delete my account. I still believe in Seti, if that is not screwed up now.
Looking for some help!
MacStef
Works fine on a brand-new Mac Mini, too:
http://einsteinathome.org/host/335019
Are you a musician? Join the Musicians team.
Meet the Musicians
Happy for you; it seems to be working for you. As for me, I have quit Einstein. It's slow, or if you use the beta version, it doesn't work.
http://einsteinathome.org/account/tasks
I have been running Classic Seti and now the BOINC Seti. Mind you, I think Classic was and is a better app than BOINC. Email me when you think you have a working version of Einstein and I will try again to help out. Till then I will continue to try and find ET!
MacStef
I have been running the beta for a week now (OS X 10.3.9, CLI, E@H exclusively). I went through 10 units and all were processed and granted credit, except one still pending. Processing time has gone down by a little over 45%.
Kudos for developing a client which takes advantage of the Altivec! I might be wrong, but it seems like none of the other projects cared enough until now to do the same!
The 4% are Results that finish successfully, but are marked as invalid (and thus granted no credit) because of a problem with the validation. This has nothing to do with Results that crash during computation with a "client error".
In the two Results with client errors I've seen from your machines, the core client version is reported as 4.44. Where did you get that from? The one that's on our download page, and that we tested the App with, is still 4.43.
BM
That's probably the Team MacNN optimized client. The same one I used for http://einsteinathome.org/host/265788/tasks. Of course, that machine quit working shortly after I left on very short notice for an international business trip. I won't know until next Thursday what happened. It had switched over to Predictor, and then never switched back. Before that, though, it had three successful completions.
I'll ask folks on the team to provide clearer feedback if they are running the Beta App.
Team MacNN - The best Macintosh team ever.